I really appreciate that one of the richest (sometimes the richest) humans on the planet takes the time to read about the experience of poverty in depth. Hopefully that continues to influence his philanthropy, and his influence on what other wealthy individuals and policy makers do.
Why should we condemn a person for good acts based on previous poor actions? Or are you just so cynical that you can't take his good actions in good faith?
The phrase "good faith" implies you can see into the mind of someone else. Whenever it appears, someone is trying to bias the jury.
The Gates Foundation is dedicated to "tactical philanthropy" which maximizes investment "leverage" by allying with existing commercial/political interests whenever possible, often with strings attached. Gates has stated clearly that he prioritizes programs that are revenue neutral -- that generate enough income to sustain themselves. While that's better than a stick in the eye, that's not exactly "giving your money away".
And in terms of atonement, exactly what remedial actions are sufficient to turn a gangster into a saint? Martin Shkreli and Al Capone would like to know.
I love that movie, but I don't get the impression from it that he cheated anyone other than the guy who wrote DOS. Even then, it's not cheating to buy someone's product that you see potential in; companies do that every day. You can't really say he cheated Apple because they were both "stealing" ideas from Xerox.
To me the biggest problem isn't 'evil' AI (SkyNet/Matrix scenarios), since I don't think it is possible to predict the intrinsic motivations of a non-biological intelligence (no evolved neurochemical dependencies like we have, and infinite capacity to grow beyond our limitations). That framing makes way too many assumptions about AI continuing to remain like us (curious, jealous, afraid, etc.) when all of those behaviors evolved in us and are limited by biology. I'm more worried about something emerging unintentionally that does harm (as in the overused 'paperclip maximizer'). In that instance, the question isn't whether it's on 'our side'.
Shane Legg isn't worried about "evil" AI, but rather an AGI/SI that simultaneously (a) has the ability to achieve complex goals in complex environments, and (b) is not perfectly aligned with its operators' goals [1].
We can also take a strong guess about some aspects of how sufficiently advanced cognition looks from the outside, regardless of internal implementation, because of coherence theorems [2][3].
Even the most optimistic AI scenario seems pretty depressing to me. Ideally we would be able to merge or at least fully cooperate with a vastly more intelligent entity. At that point, its consciousness would dwarf our own and we would be pretty meaningless. Maybe our conscious experience would live on as some vestigial relic, but I doubt it (we?) would even bother. It would be like a droplet of water landing in the ocean.
> Even the most optimistic AI scenario seems pretty depressing to me... its consciousness would dwarf our own and we would be pretty meaningless
This seems like a classic optimist/pessimist situation, like when we find out the world/universe is bigger than we imagined. I think our brains are poor at dealing with absolute values, so we use relative ones instead: when we discover some unfathomable new landscape, our minds "zoom out" to encompass it, which makes the previous stuff look small in comparison.
That's just (another) limitation of the meat in our heads though. The world doesn't actually get smaller when our transportation improves: it's just as vast as it was for any explorer; the Earth didn't become less special when we discovered that the planets were worlds; Sol didn't become less immense and powerful when we discovered that the stars are other suns; our solar system didn't become less intriguing when we found exoplanets; the Milky Way didn't lose significance when we discovered other galaxies; baryons didn't lose their complexity when we discovered dark matter; etc.
Right, but there is a commonality in our ability to perceive those different contexts.
The example that I heard years ago (I think from Kurzweil?) was framing the difference of us versus future AIs as comparable to an ant's perception of its surroundings versus a human's. In his example, the human has no regard for an ant, we'd just as soon step on them ("evil" AI). To re-frame that, if you could magically turn an ant into a human, why would it even matter if it was once an ant? The transition would be a complete disconnect. Maybe if you told the person that they were once an ant they would have some sort of strange reverence for ants, but it wouldn't be very rational.
That is interesting. I agree that things along this line are most worrisome. My thought process always goes to an AI recognizing that a lot of human activities are self destructive or destructive to others/nature and prohibits those activities. Meaning not necessarily actively deciding all humans should die but deciding to not allow us to do much of anything because it is a net loss. I think this points to having AIs that are specialized and isolated. Very thought provoking for sure....
Education is a great long-term solution, but we need a PSA/Marketing solution for rational thought today. Similar to the TRUTH anti-tobacco campaign. Make it 'cool' to use logic and base your decisions on rational thought. I'm not advocating any particular policy issues, but a lot of problems can arise if people can't tell the difference between real and fake news, and don't bother to. They don't bother because thinking rationally is not something we encourage.
TRUTH makes me, having quit for 10 years now, want to light up. That kind of half-truth fearmongering aimed at kids is the poster child of ineffectiveness. Moreover, it sows distrust of the very channels we use to provide valuable public information. Calling it 'truth' was just the hypocritical icing on the cake.
I can attest that this problem is insidious and pervasive throughout society, not just in the U.S. but globally. Separate from the moral issue of children growing up in disadvantaged environments, this is an economic problem, and a problem for the advancement of humanity. There is so much hidden value in masses of people who have no way to deliver that value due to circumstances. Anecdotally, you can point to individuals who made it despite these setbacks, but statistically that is far from the norm.
What would happen if we found a way to lift these people up?
How much better off would everyone be economically by the value these hidden 'diamonds' create? (and by everyone I mean even the wealthy)
How much faster could humanity be advancing in general?
From a practical and factual perspective, ignoring this, or just waving it off as 'the best rise no matter what', is so damaging and counterproductive that it is actually holding back everything.
While they may one day be 'creative', I feel this will be the last bastion of human capability beyond AI. Luckily, the interesting thing about creative endeavors is that they are often a unique synthesis of many things. AI may create wonderful art, music, entertainment, etc., but this does not mean things created by humans won't still be valuable to other humans. Things would just be created in parallel and in communion with AI. That being said, the percentage of the population that can 'create' for income/profit may be very small. Hopefully by that point, we will create for the intrinsic value of creating and sharing, and not for money.
The best possible brand connotations are an interesting way of putting it. When you are trying to get a company off the ground (figuratively) that is experimental in nature and will require a lot of capital investment, why hinder yourself with this 'bad' name right out of the gate? There are any number of edgy names they could have picked that people didn't associate with the last thing you want to have happen on an airplane. It is just unnecessary, and not well thought out. Even if the intent was to get attention, I wonder how many people at Boom talked to the founders and said: "Are we sure about 'Boom'?"
I didn't associate it with anything negative the first time I saw the headline. I didn't even think about it being a bad name until I read the comment.
I'm not sure as many people will find it negative as you think.
Well, what they think when they first see it, and what they think after it's made the rounds of the late shows (Kimmel, Colbert, Fallon, etc) might be entirely different. This is low hanging fruit.
I agree. Of all the options, quite possibly one of the worst names you could have picked. I'm surprised the negative associations weren't obvious to them.
Search is not a product for Google, since they do not sell it, no matter how much everyone in the world (including Google) might categorize it as such. Products are sold. The only product they sell in relation to their search engine is an advertising platform. That, by definition, is the product.
Correct me if I'm wrong, but I believe a product is something produced. They produce a search engine, and they serve ads on the search engine. Advertisers produce the ads, so the ads are a product, but not Google's, unless it's a Google ad.
Advertising "slots" are Google's product, just like physical billboard installations are a product of ClearChannel etc. AdWords is the mechanism Google uses to sell this product.
Search is one of Google's products and not all products are sold directly as a revenue source.
A product is anything that can be offered to a market that might satisfy a want or need. That nobody actually pulls out their checkbook and pays for a Google search doesn't mean it's not a product and that there's not a market for search.
A TV network's product is television content, their monetization strategy is advertising. The delivery mechanism is cable / airwaves.
Sure, there are secondary economic effects and markets that complicate the picture. But the basic business model of a television network isn't all that different from that of a regular company that sells its products directly.
If that's true, then what's their programming called?
Programming is their product, and customers buy it by paying attention (or, with Netflix, Hulu, Comcast, etc., more directly). Attention is the currency, and ads let them exchange attention for USD.
So AMC and HBO are making different things? No, of course not. They both produce content and distribute it through cable networks. One of them charges advertisers to deliver the product to viewers, while the other one charges the viewers directly.
To be fair, if you're an advertiser, then the product is eyeballs. But if you're a pair of eyeballs, then the product is the content. They're not mutually exclusive. Same with Google. If you're a searcher, then the product is results. If you're an advertiser, then the product is relevant searchers.
Okay folks, Google's one and only product is TRAFFIC.
Search, Maps, Mail, Android are given away free as their competitive strategy of generating huge volume of traffic.
Google's business model is to charge customers for the possibility of redirecting some of that traffic. And because advertising is speculative, they can reach high profit ceilings.
It's an old-school dot-com point of view: own eyeballs, monetize clicks.
Facebook went and ate Google's lunch by following their playbook and improving upon it, focusing its product around SOCIAL ACTIONS.
Facebook produces huge volumes of interactions: every like, every heart, every comment, every follow. By the time Google got around to moving the battleship, they had already lost.
You want to be the next big thing?
What is that one thing everyone does all the time? Catch that.
Yeah, I've heard that line before, but it's about as useful as insisting that movie theaters' product is overpriced popcorn and soda. The movies just happen to be there to drive traffic.
Yeah, all of that is true, but it misses the point entirely. The product that is 100% responsible for people going to the movie theater is movies. The product that drives everyone to Google is search. The fact that the revenue isn't directly related to the product doesn't make the product not the product.
The market could be for families with children in the house who want self defense, but haven't yet taken the plunge for safety reasons. I'm not sure how large this market would be.
I have kids and guns and I don't trust this technology to work when it needs to work, both in a defensive scenario and with my kids. Keep-It-Simple-Stupid.
Basic gun safety dictates that you treat every gun as loaded, even if your fingerprints don't match.
There's a theory behind handguns like Glocks that manual safety switches are too complicated when you're experiencing the intense response of a defensive shooting. Taking gloves and jewelry off, adjusting your grip, or the nightmare of having to screw with electronics is just asking for problems.
Keeping guns stored in a vault with children present is generally a good idea if you have one or more weapons that you occasionally use. However for self defense purposes, you typically don't want to be fiddling around with a vault if someone is invading your home. I'm not commenting on the likelihood of that scenario, but it is something people consider when they own firearms.
A surprising number of people like the idea of fingerprint scanning, biometric sensing, bluetooth bracelet detecting ... gun vaults. Inside of which is a 100% reliable non-smart firearm.
Everything is a tradeoff. When I have kids, I'll probably have a small bedside safe of some kind... and just be willing to accept that it makes it a little slower to get the gun out.
I was actually referring to a specific product called a Gunvault. It isn't great for preventing a burglary, since it can just be carried off, but it opens quickly and is a decent way of keeping children's hands off of a gun.
Don't people have guns for self defense? What if someone invades your home in the middle of the night - do you want to have to start fiddling around with your gun vault?