* input validation at ingestion time vs processing time (see the sketch after this list)
* access control via a proper IAM system with defined roles as opposed to granting access to individual users
* various multi-tenancy, multi-region, and multi-regulatory-regime concerns
* relying on standard frameworks/platforms which provide rollouts, monitoring, test harnesses, etc. as opposed to rolling your own
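To make the first bullet concrete, here's a minimal sketch (hypothetical names and schema, not anyone's production code) of validating at ingestion time so that processing-time code can assume clean data:

    # Hypothetical ingest pipeline: reject bad records at the boundary
    # so every downstream processing step can assume clean data.
    from dataclasses import dataclass

    @dataclass
    class Event:
        user_id: str
        amount_cents: int

    def ingest(raw: dict) -> Event:
        # Validate once, at the edge of the system.
        if not isinstance(raw.get("user_id"), str) or not raw["user_id"]:
            raise ValueError("missing or invalid user_id")
        amount = raw.get("amount_cents")
        if not isinstance(amount, int) or amount < 0:
            raise ValueError("amount_cents must be a non-negative int")
        return Event(user_id=raw["user_id"], amount_cents=amount)

    def process(event: Event) -> int:
        # Processing no longer re-validates; the Event type is the contract.
        return event.amount_cents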
Some of these were simply "we know this is important, but we have to hit this deadline, so we're going to cut corners", resulting in rework later to do things properly in production.
It's a business decision that will make them more money.
In Universe A they sell only the most powerful version of the product for 1x price, and they make some money.
In Universe B they sell the most powerful version for 1x AND a slightly less powerful version for 0.9x, and they make MORE overall profit.
I'm not saying that's a bad thing. There are customers who want to pay less and don't need the most powerful product.
I fail to see how the company in Universe B is morally worse than the one in Universe A. One could argue it is superior, in that it offers more choices.
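To put toy numbers on that (purely illustrative, with invented figures):

    # Toy model: 100 buyers value the product at 1.0, another 100 value
    # it at only 0.9 and won't pay full price.
    universe_a = 100 * 1.0              # only the 1x product sells: 100.0
    universe_b = 100 * 1.0 + 100 * 0.9  # the 0.9x variant adds buyers: 190.0
    print(universe_a, universe_b)

The big real-world caveat is cannibalization: if too many full-price buyers downgrade to the cheaper variant, the segmentation stops paying off.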
Exactly right. It is acceptable to argue that the pricing is too high and unfair. But to say the structure itself is wrong isn't correct.
Breaking the total price of a product down by its component features, then creating variants with certain features enabled vs disabled (and hence different total prices), is one fair way to create products tailored to customers' needs.
When pricing individual features, the prices may not correlate with each other. There are factors like which features are most used by which segment of customers, how valuable each feature is to that segment, and hence how much they are willing to pay.
Demanding that all the features be sold at the lowest total cost doesn't make sense. We don't do this in any other domain.
"Moral" is a loaded word to throw in here. I wouldn't use that term. Customer-unfriendly etc. I think fit better. And if we're positing hypothetical universes, let's a Universe C: They give customers the faster version that doesn't cost them any more, and as a result they build more brand loyalty and good will, leading to more sales, market share, etc.
But profits are somewhat besides the point: I'm saying simply that some policies are not customer friendly. Whether or not you consider that to be good or bad may very well depend on whether you are a customer or a shareholder.
Sure, but then whoever gets to populate the index chooses the winners and losers, because you could just stuff it with different versions of the content or links you wanted to win, and the random ranking would show those more often, because they appear in the pool of possible results more often.
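A quick toy simulation of that stuffing effect (my own illustration, not anyone's actual ranking code):

    import random

    # Toy index: one page stuffed in 5 times under trivial variations,
    # nine honest pages listed once each.
    index = ["spam-variant"] * 5 + [f"honest-{i}" for i in range(9)]

    # "Random ranking": the top slot is drawn uniformly from the pool, so
    # exposure is proportional to how many copies you stuffed in.
    wins = sum(random.choice(index).startswith("spam") for _ in range(100_000))
    print(wins / 100_000)  # ~5/14 = 0.36, vs ~1/14 = 0.07 per honest page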
Could you explain how you figure that in more detail?
I mean, to my untrained eye, it sounds like it wouldn't be so, since in every time slice of the game (the equivalent of a turn in Go?) you can have hundreds of points of control - hundreds of levers to choose to pull - and of course that's in every frame (or whatever interval the UI actually accepts input at).
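Some rough back-of-the-envelope numbers behind that intuition (all the parameters here are my own guesses):

    import math

    # Go: roughly 250 legal moves per turn over roughly 150 turns.
    go_log10 = 150 * math.log10(250)        # log10 of 250^150, about 360

    # A real-time game, crudely modeled: say 100 distinct actions per
    # decision point, 10 decision points per second, a 10-minute match.
    rts_log10 = 10 * 600 * math.log10(100)  # log10 of 100^6000 = 12000

    print(go_log10, rts_log10)              # the naive game tree dwarfs Go's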
The argument is that Didi made it easier for this alleged killer to get his victim into a vulnerable position (presumably as a backseat passenger in his car).
Didi allegedly did not respond to multiple negative comments about this guy's behavior, nor did they expose these comments to future passengers, which might have flagged him as a threat.
If not for Didi, or if they had been more safety-minded as a company, then this guy would not have had the opportunity to rape and murder this woman, allegedly. That's the argument.
I'm not sure about this particular person in question, but it seems to me one never knows who will stick by the principle instead of "their side" until real-world challenges come along to "test" those principles. I'd wager even the persons being "tested" don't really know what they'd do under such circumstances.
My feeling is that it's rare to stick to the principles at hand, compared to protecting your own (and I mean for any group/tribe, not any specific one).
I think of this theory like how lead poisoning works, where the lead can pass through the blood–brain barrier and substitute in (poorly) for calcium, causing all sorts of toxicity effects. Or how drinking sea water can fool your throat for a second into thinking you've taken care of your thirst.
Online social interactions are plentiful and easy and often don't as quickly or sharply trigger social anxiety. But they are a toxic replacement if you don't get any real interaction with your fellow humans.
The other day I raised my hand to high-five a stranger at a dance event and she didn't notice and passed the other way. I remarked how I hadn't felt something like that in a while. Imagine if you had to watch everyone left-swipe you on Tinder, or reject your resume, in person.
At least lead has no intent, but Western culture (and by now not only Western) is all about making things easier and easier. That's the only metric. It's turning us into sick vegetables. And we pay for it.
> The Great Filter, in the context of the Fermi paradox, is whatever prevents "dead matter" from giving rise, in time, to "expanding lasting life". The concept originates in Robin Hanson's argument that the failure to find any extraterrestrial civilizations in the observable universe implies the possibility something is wrong with one or more of the arguments from various scientific disciplines that the appearance of advanced intelligent life is probable; this observation is conceptualized in terms of a "Great Filter" which acts to reduce the great number of sites where intelligent life might arise to the tiny number of intelligent species with advanced civilizations actually observed (currently just one: human). This probability threshold, which could lie behind us (in our past) or in front of us (in our future), might work as a barrier to the evolution of intelligent life, or as a high probability of self-destruction. The main counter-intuitive conclusion of this observation is that the easier it was for life to evolve to our stage, the bleaker our future chances probably are.
So ZeroBugBounce is wondering if "not agreeing on facts/truth" is the thing (or one of the things) that technologically advanced civilizations experience that filters them out of the running for expanding into space. E.g. people can't agree on facts, which causes unrest, riots, etc.; civilization eventually collapses from it all, and people never make it into space.
In design and development, most processes are about sequences of divergence and convergence, and not being able to diverge, except through violence, seems to be a major flaw of real political systems; there's a toxic fanaticism about convergence and unity at all costs, which seems contrary to human evolution.
The last major divergence happened with the creation of the US, where a part that had different ideals split from its European core. The same thing has happened throughout our history, starting in Africa, from where people spread into the Middle East and then beyond.
The options for handling this drive for divergence are either divergence into space, and/or inwards: reformation, to gain time until we can spread into space.
Tech shouldn't be used to enforce convergence, but to enable peaceful divergence. Divergence, including divergence of opinion, makes sense if there's a free flow of information. Political systems just haven't caught up.
Hm, I'd say not agreeing on facts/truth is more of a byproduct of not agreeing on what we want.
As a civilization, we still have the fatal problem that people want what's best for themselves, and we don't have great ways of making them work towards a common good. Worse still, as issues become more complex there are more ways to twist policy towards self-interest while maintaining a facade of group-interest. Evolution has beautifully primed us to do this subconsciously.
In this model, people's differing self-interests lead them to construct plausible but different arguments over "facts/truth". The conflict-resolution mechanisms (argument) that used to work start breaking down. Society polarizes, stops agreeing on things, and eventually reverts to violence.
I don't think this is The Great Filter though. More likely our civilization disintegrates until it reaches a level of complexity we can handle again, but it's not going to be totally wiped out.
> I don't think this is The Great Filter though. More likely our civilization disintegrates until it reaches a level of complexity we can handle again, but it's not going to be totally wiped out.
I think people assume it often, but I don't think the Great Filter actually requires destruction of the species/civilization. Maybe it _is_ nukes, or bioweapons; or maybe, the Great Filter (or _a_ Great Filter) is #7 on Hanson's list[0], "Tool-using animals with big brains". So there could be plenty of species out there that use tools and even have civilization, but not enough or the right kind of brainpower for developing space travel. So the Great Filter could be a rigid wall that prevents further progress towards space travel, or an elastic trampoline that keeps pushing civilizations back whenever they get too close to it (what you said). Or it could be a wood chipper. We can only speculate.
[note] I say "space travel", but this also includes remotely-detectable signs of intelligence (e.g. signals, or Dyson spheres), and if our remote sensing gets good enough, of life.
It seems to me the filter is simply that humanity's rise was borne on the back of its ruthless drive and selfish greed, and that the same things that made us the best hunters also influence society in myriad ways.
In my eyes, greed, hierarchy, and limited compassion / an inability to share are the filter, and all society's diseases are the symptoms.
I'm really disappointed when I hear this kind of thing. It's a sentiment that so many people believe, and want to believe, and choose to believe even in the face of overwhelming evidence to the contrary.
It's almost like a pernicious, psychological cancer, except it can also spread virulently to other susceptible hosts.
Humanity's rise is due to lots of factors, especially the countless individuals who have worked to improve the health, well-being, safety, and wealth of the people around them. Greed is but one aspect of humanity, and unbounded greed destroys societies.
Yeah, there are problems in 2018, but to sit here today, in the relative comfort of modern society, and say "we only have all this because we're terrible animals" is hugely disrespectful to the work of a lot of people.
I'm certainly not saying that we have no redeeming factors. As you say, that would be an absurd mindset in this world we live in.
However, I don't know how you can deny that we are greedy and have limited capacity for compassion. Do you deny Dunbar's number? If not the specific value, the concept?
Do you deny that some men have 10 figures of wealth while others have nothing to their name?
Do you deny that we have, as a society, enacted a plague upon the planet, and are causing the extinction of other species at a rate witnessed only during other extinction events?
Just because one has virtue does not mean one doesn't have vice, and certainly the same applies to the "one" that is human society. While we certainly have accomplished great feats and have seen great individuals, as a collective, we do not know how to relinquish the individual pleasures in order to facilitate comfort and health for all.
As an article I saw here recently put it, something along the lines of "people are not stupid; life is just hard": your energies are limited, and there are those who take advantage of the splintered and uncoordinated thoughts of society to gain great wealth and power, and those who use their great wealth and power to splinter and confuse the thoughts of society.
That is the cancer, not my thought that these people exist; do not ignore that their existence is all but guaranteed by the very nature of the creature we must have been to find ourselves the dominant force of this planet.
Yes, you can look back at our animal past and at chimpanzees and say we're brutal, greedy, whatever. But that is only a tiny sliver of conscious existence. What makes us human is the constant progression towards humanity - compassion, love, and society-wide safety and abundance.
See Hans Rosling for some promising trends rooted in global facts over the past 100 years. Yes, there are still bad things "left over" from being animals, but these are declining over the decades, _across the board_.
You're right, but the problem is it only takes a very small number of excessively greedy and power-hungry people to make the world worse for everyone.
If you don't take into account cooperation, complex communication, and the ability to abstract (e.g. a rock isn't just something to step over; it's also a potential weapon, and when many are combined, a dwelling, bridge, etc.), then your observation has merit.
We're a relatively weak, slow animal. We don't see well at night. Our teeth and nails are fairly useless from a hunting or even defensive perspective. We don't procreate quickly and our young mature slowly. Our survival advantage is working in coordinated teams and having flexible minds with respect to the world around us. Cats are closer to what you described.
Lying is as old as language. Monkeys lie to each other[0]. Lying using video isn't going to be fundamentally new.
Our ancestors have solved this problem for millions of years through context, trust and reputation systems and a bunch of other heuristics. We're in a historically unusual time where we've developed a medium of communication that's cheap to use and very expensive to fake. It's only the transition period back that's ripe for exploitation.
Deepfakes aren't that easy to make, and they only do one specific thing and do it quite poorly if your job is to actually fool people. The handful that I've seen in the news are obviously fake. This is not the general-purpose video-faking tool I think we're really talking about.
"The Great Filter" and the probability of alien "civilizations" or life in general are massively overstated because of a bias for imagining cool stuff. Existential threats are real, no need to bring extraterrestrials into the discussion.
"So is not agreeing on facts/truth The Great Filter?"
It's a possibility I've considered before. To put some math meat on the idea, consider that as education increases, the number of thoughts you can conceivably think goes up exponentially, but your ability to discriminate probably goes up only polynomially, and possibly as badly as somewhere around linearly. So the smarter you get, the more bad and wrong ideas come within your reach, and the fewer resources you have, proportionally, to disprove each one. And this is assuming a very idealized situation in which you are a perfectly rational observer attempting to dispassionately filter truth from untruth; any deviation from pure rationality makes it even worse. Note there's no reference to "humanity" in there; it's a problem everyone in this universe will have. (It's basically a restatement of the impossibility of Solomonoff induction.)
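A toy version of that growth mismatch (my framing, not a rigorous model):

    # Suppose n units of education make 2**n ideas thinkable, while the
    # number of ideas you can carefully vet grows only polynomially, n**3.
    for n in range(10, 51, 10):
        thinkable = 2 ** n
        vettable = n ** 3
        print(n, vettable / thinkable)  # the vetted fraction collapses to ~0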
As an example of what I mean, consider the claim "the quadratic formula is x = (-b +/- Sqrt(b^2 - 4bc)) / 2a". My children are still too young to have any clue what that means. It requires substantial education to even begin to think this wrong thought, or any of the many related surrounding wrong thoughts; who knows how many ways I could mutate the right answer to be just slightly wrong, to say nothing of simply making something up. All of these wrong possibilities are available to you, once you are educated.
On the one hand, I use math here because it allows me to give you a clear example of an unambiguously wrong thought; on the other hand, it also betrays me for the very reason that it is unambiguously wrong. Math gives us tools to raise our confidence in the true quadratic formula even above the exponential noise of possible wrong answers. But the real world has much bigger problems than that, where we lack such tools: how shall we structure our society to obtain a given goal? How shall we even determine what those goals are? How can we be sure that we are not actually putting ourselves on the path to inevitable destruction, even with possibly every participant trying to avoid that in all earnestness?
You probably just had some sort of thing leap into your head, perhaps about the environment or war or social inequality or something... how can you prove that you are not in fact 100% wrong? What if the only way we can still be alive in 10,000 years is precisely that we must unambiguously destroy our environment in order to be forced to learn to deal with it, because it's going to happen anyhow later (supervolcano, asteroid, etc.), so better to learn to deal with it in slow motion? What if space is empty because all the other species did learn to live in "harmony" with their environment, until it blew up and they couldn't deal with it? What if war is a necessary component of survival because without such evolutionary pressures, intelligent species inevitably just fade out of existence as intelligence is selected away, Idiocracy-style? What if the only way for anyone to survive the next 10,000 years is to create a rich upper class that at some point will be the only ones to survive some inflection point crisis? (What if the only way through the Singularity is for some rich person or set of people to be the one powerful enough to hold back the first rampaging AI, so all the egalitarian species keeled over dead because the AI trivially economically overpowered any given individual before extinguishing its own spark?) I'm not "seriously" proposing these ideas, but when it really gets down to it... you don't know. Neither do I.
I like this kind of "meta-problem" and would be interested in knowing how to get people more interested in ideas that I intuitively know are useful.