
There are way more people around now than there were then—the number of people alive today is comparable to the number of people who have died in the last millennium.


Which should amount to an Einstein every millennium or so.


Why am I the only person who's terrified rather than excited by every new incremental progression of AI? Do you guys actually look forward to the day when humans are made obsolete?

PS: This scientist seems to be exponentiating incorrectly. In ten years he'll have 32-ish times as much processing power; he needs 10 million times as much to get to the level of the entire brain.
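
For the curious, here is the back-of-envelope arithmetic as a Python sketch (assuming the two-year doubling period that "32-ish times in ten years" implies; the 10-million figure is the parent's):

    import math

    doubling_period_years = 2
    growth_in_10_years = 2 ** (10 / doubling_period_years)
    print(growth_in_10_years)  # 32.0 -- the "32-ish" above

    # Doublings needed to reach 10 million times current power,
    # and how long that takes at one doubling every two years:
    doublings_needed = math.log2(1e7)               # ~23.25
    years_needed = doublings_needed * doubling_period_years
    print(round(doublings_needed, 2), round(years_needed, 1))  # 23.25 46.5

So at that rate, brain-scale computing is roughly 46 years out, not 10.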


Apprehension about progress in artificial intelligence is entirely natural. In fact, I think apprehension and denial will lead people to continually redefine their notion of intelligent behavior so that current computers are always excluded.

Not so long ago many people said a computer could not beat a grandmaster at chess without being intelligent. Enter Deep Blue. Others have stated computers will never compose music that is emotionally meaningful to humans without being intelligent. Enter Experiments in Musical Intelligence and other widely acclaimed composition programs.

Until the Turing test is passed, people will be able to plausibly deny any advances in artificial intelligence. No matter how advanced such "brain in a box" models become, they won't pass the Turing test without being embedded in a rich environment with which they can interact.


Can you link to a single good composition program? I have heard Wolfram's and I thought it was crap, and the only decent one I have heard was one which could only emulate old composers.


I may have been referring to the very program you are thinking of, though you characterize it as only being able to "emulate old composers". It is true that it learns and extrapolates from musical input, but so do human composers. Its algorithms can learn from anything, and by feeding it a mixture of styles it can generate some fairly compelling and new-sounding works. The programmer is also a composer and has trained the algorithm on some of his own works; the output sounds nothing like old composers.

See http://arts.ucsc.edu/faculty/cope/mp3page.htm for audio and http://arts.ucsc.edu/faculty/cope/experiments.htm for a description of the project.
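
For anyone wondering what "learn and extrapolate from musical input" can look like in code, here is a minimal toy sketch of the general idea in Python: a first-order Markov chain over note names. To be clear, this is not Cope's actual recombinancy algorithm, and the training melodies below are invented for the example.

    import random
    from collections import defaultdict

    def train(melodies):
        """Count which note tends to follow which across all input melodies."""
        transitions = defaultdict(list)
        for melody in melodies:
            for current, following in zip(melody, melody[1:]):
                transitions[current].append(following)
        return transitions

    def generate(transitions, start, length=16, seed=None):
        """Random-walk the learned transitions to 'compose' a new melody."""
        rng = random.Random(seed)
        melody = [start]
        for _ in range(length - 1):
            choices = transitions.get(melody[-1])
            if not choices:  # dead end: no observed successor
                break
            melody.append(rng.choice(choices))
        return melody

    # Feed it a mixture of "styles" (two made-up note sequences here)
    # and the output wanders between both, which is the crude version
    # of the mixture-of-styles point above.
    style_a = ["C", "E", "G", "E", "C", "G", "C"]
    style_b = ["A", "C", "E", "A", "G", "E", "A"]
    model = train([style_a, style_b])
    print(" ".join(generate(model, start="C", seed=42)))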


Every time mankind makes a major breakthrough, it makes entire families of professions obsolete. This gives us the opportunity to move on to more advanced forms of keeping ourselves busy.

Most of the time when people ask about "obsolete humans" in the context of AI, they really mean "programmers" or "doctors" or whatever. Yes, it might make those obsolete. It simply means that there are professions yet to be invented that will be to programmers as programmers are to janitors today.


But once AI becomes more intelligent than humans, every conceivable role will definitionally be better filled by that AI than by actual humans.


Great. In simple terms it means that machines will do all the work and we'll drink all the beer.


Are you looking forward to drinking beer for the rest of your life?


No. I'll get a very smart computer from IBM to do it for me.


Seriously, though, there will be no use for you (or any other human) whatsoever post-AI.

And that's the best-case scenario—when AI is being used for the common good. If the people with early access to it try to use it to rule the world and enslave humanity, they'll probably succeed.

I'm not trying to be an irrational doomsday predictor, here; these are just the conclusions that I come to when I work off of the premise "humanity will have access to cheap human-level intelligences".


This sounds like nothing more than promoting the status quo.

Why do you automatically assume that "no use" in this context would turn out to be a bad thing? The way I see this, if we happen to have greater intelligences working for our common good, we would be able to solve any problem better than a human could - including a possible problem of feeling useless. This would be the best-case scenario, and IMHO it would be much better than the world we have today. Possible solutions to that problem, from my limited human brain, could include bringing human brains up to the level of the greater intelligence and hence finding new problems to solve, altering human drives so it isn't a problem any longer, abolishing AGI entirely or partially, etc. In such a best-case scenario, any of these could be worked out better and faster than humans could manage.

I agree with you on the worst-case scenario point. There are huge ethical risks and implications in creating very powerful and capable machines. This means that we need to go into this situation with our eyes open, and make sure that we discuss ethics, transparency and consequences from day one.

Enslaving humanity is a basic human drive, and I like to believe that we can do better than that if we try hard. In a pure cost-benefit analysis, it is obvious that it would be best to use such technology to help out everyone.


>The way I see this, if we happen to have greater intelligences working for our common good, we would be able to solve any problem better than a human could - including a possible problem of feeling useless.

That's a logical error: the existence of a superhuman intelligence might cause more problems than that superhuman intelligence can solve.

When humans solve the personal problem of "feeling useless" they almost without exception do it outside of a vacuum. Their feeling of usefulness tends to stem from the impact that they have on humanity.


We are good at having human experiences. I think AIs will really like sites like ycombinator or reddit: people sharing their views on the world in a format that is very accessible to computers.

Some guy (maybe Kurzweil?) said something along the lines of being more afraid of stupidity than of intelligence, and I can agree with that a lot. I'm far more afraid of humans destroying the world out of stupidity than of hyperintelligences we created and taught trying to get rid of us.


Do you have a pet? Do you think it cares about the fact that you are, supposedly, the "intelligent" one?

My point is: what reason do you have to believe that the only lives worth living are the ones where people are on the top of the pyramid?


Your argument assumes that greater intelligence implies greater fitness for all jobs. I'm pretty sure there are jobs for which intelligence is not the primary requisite.


Anyone watching the new TV show "Terminator: The Sarah Connor Chronicles"?

It seems to me that whether or not to be scared about the progress of AI depends on a very simple question.

Should we treat AI like our slaves, or should we treat AI like our children?


> Should we treat AI like our slaves, or should we treat AI like our children?

A more likely scenario would probably be how _we_ would be treated by _them_.



Human bodies may be made obsolete, but I am keeping my fingers crossed. If we have computers that can simulate human brains (and presumably do it more efficiently, with more capabilities), we should eventually have methods of transferring consciousness back and forth; at that point, I can implant myself into the computer and have all the advantages. For greater detail on this, see 2001: A Space Odyssey.


I think that you have to distinguish between wars in which the outcome is uncertain—that is, wars in which real risk to both sides is present—and wars that are essentially a big guy beating up a little guy.

I agree that the former kind of war would happen with the same frequency if this pen pal idea were implemented—the motivation for such wars needs to be large enough to overcome the personal danger they produce, and so would easily overshadow semipersonal connections with citizens of the opposite side. I think the latter would very much be deterred, though, because the motivation for such wars can be minuscule, to the point where even the humanization of the "enemy" could be a significant deterrent.


I must admit finding it hard to make the distinction you suggest.

Would the American Civil War be a big guy beating up on a little guy? After all, the North was roughly two and a half times as populous, and many, many times as rich. Nobody on either side thought the war would last very long.

How about the Romans and the Germanic tribes? Nobody in their right mind thought those Barbarians could stand up against the full might of the Roman Empire very long.

Wars are fought until one side decides to stop fighting. This means that "big guy/little guy" wars and "outcome is uncertain" wars don't seem to be that different. In fact, the determining factor would seem to be how easy it is for one side to quit. But how would you know what it would take to make the other side stop fighting until it actually happens? From the history I've seen, when both sides are very intimate with each other's language and culture -- that's when some of the deadliest, nastiest conflicts take place.


Then "immorality" is so prevalent that calling something immoral is pointless.


You have to include the effects on yourself as well. So in the case of the bottled water, you don't have to drink poisoned water instead of bottled water in order to act morally.

