Hacker News | new | past | comments | ask | show | jobs | submit | login
OpenAI Announces $10M Superalignment Grants (openai.com)
49 points by famouswaffles on Dec 14, 2023 | hide | past | favorite | 56 comments


> We believe superintelligence could arrive within the next 10 years

I'm not a fan of allowing corporations to control and define terms like AGI and superintelligence in ways that profit them, but perhaps it is our fault as a society for not producing stricter definitions of these terms. Seems to be a pervasive problem with terms in the sci-fi domain - we have 'hoverboards' these days, for example.

Putting aside that lament, I wonder if we're heading towards another iteration of learning the bitter lesson [0]. It seems that the incentives presented here are trying to circumvent approaches that overly favor human oversight, but they still seem to overly emphasize human understanding.

[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html


I think it's pretty clear what "superintelligence" means, no?

Superintelligence: smarter than any human. AGI: as smart as an average human.

"As smart" means it can do anything a human can do.


Aren't those terms totally meaningless in terms of modern "AI" though? AGI is nowhere near being able to play a violin solo; these systems' fine motor skills are abysmal. The list goes on and on: they also aren't running a bakery, collecting field data for ecological studies, or playing literally any sport any time soon either. The concept of a disembodied system even being "smart" doesn't make sense to me.


There are literally millions, millions of humans unable to do any of the things you have listed. Do they no longer qualify as general intelligences?


No, it’s not clear. There are already many domains where AI is vastly smarter than any human. There are algorithms like GPT which are fully general. So do we already have superhuman AGI today?

Most people would object. But it is hard to argue against that on definitions. It usually ends up being “by superintelligence I meant the singularity.” Which really makes this a question about reality matching science fiction…


Smarter than any human in every domain. So no, we don’t have a superhuman AGI yet. And it has nothing to do with singularity.


Write a script which downloads huggingface models and tries each in turn. Voila, smarter than human in every domain.

Ah, but “it can’t fold laundry” you say. When was the last time you heard someone non-ironically describe folding laundry as something requiring human-level intelligence?

You are hiding behind the ill defined term “smart” and performing the classic motte and bailey maneuver. If you honestly attempt to match “intelligence” or “smarter than” to its colloquial meaning of “being able to understand things, apply learned knowledge, and solve problems that require thinking” then yes, we have super intelligent AIs, and have had for many years.
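Taken literally, the "script which downloads huggingface models and tries each in turn" is just a dispatcher over candidate models. A minimal sketch of that idea; the model ids and the scoring function here are hypothetical placeholders standing in for actually downloading and evaluating anything:

```python
# Toy sketch of "try each model in turn and keep the best one".
# Model ids and scores below are made-up placeholders, not benchmarks.

def best_model(candidates, score):
    """Try each candidate in turn and keep the highest-scoring one."""
    best, best_score = None, float("-inf")
    for model_id in candidates:
        s = score(model_id)  # in practice: download, run an eval suite
        if s > best_score:
            best, best_score = model_id, s
    return best

# Hypothetical per-domain scores (placeholders):
scores = {"model-a": 0.2, "model-b": 0.9, "model-c": 0.5}
winner = best_model(list(scores), scores.get)  # -> "model-b"
```

The hard part, of course, is the `score` function: deciding what "smarter in this domain" even means is the whole debate above.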


What model should I download that would do my work for me today? I’ll explain the project I’m working on, and will give it full access to the codebase, documentation, Slack, and Zoom so it can find all the context and ask my colleagues and my manager if it needs more info. GPT-4 is obviously not smart enough to do this (I’ve been using it daily since it came out), but it seems like you know of some miracle models on HF which can? Would love to try them out.


Why is GPT not smart enough to do this? That is not obvious to me.


Then you should try it; it will become obvious pretty fast.


You just defined the word with the same word -- which is what everyone seems to do.

Smarter how? Where? Computers are already smarter at math, physics, simulation, etc. than humans.

Generally smarter? At what? Emotional intelligence? Self-preservation? On and on.


I mean "smart" in exactly the same way you would use the word when talking about people: "John is smarter than Paul". Or "an average college professor is smarter than an average college student". People use this word assuming others understand what it means. That's my assumption as well.


Interesting. In my experience, I would have said the use of the word “smart” almost invariably results in misunderstandings and biased comparisons. The concept is certainly used in casual conversation, but you can’t do anything serious with it.


> it can do anything a human can do.

So it can get up and walk down the street?

Not trying to be pedantic, but when one is defining things, one needs to be precise.


From my experiments? Given a physical chassis with a high-level API (move one step forward, rotate head, that kind of thing), then yes. Even as early as GPT-3.5 it was starting to become possible, and with GPT-4, it's become quite easy. Someone just needs to put in the elbow grease to glue the components together and patiently wait while it does its thing.
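For what it's worth, the glue code described here is not much code at all. A toy sketch under stated assumptions: the planner is a stub where a real system would call GPT-4, and the chassis API names (`step_forward`, `rotate_head`) are invented for illustration:

```python
class Chassis:
    """Stand-in for a robot body exposing only high-level commands."""
    def __init__(self):
        self.position = 0   # steps taken forward
        self.heading = 0    # head rotation in degrees

    def step_forward(self):
        self.position += 1

    def rotate_head(self, degrees):
        self.heading = (self.heading + degrees) % 360

def plan(goal):
    """Stub planner; a real system would prompt an LLM for this list."""
    if goal == "walk down the street":
        return [("step_forward", ())] * 3 + [("rotate_head", (90,))]
    return []

def run(chassis, goal):
    """Execute each planned command against the chassis API."""
    for name, args in plan(goal):
        getattr(chassis, name)(*args)
```

The "elbow grease" is in the planner and in error recovery, not in the loop itself.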


Yes, when put in a robotic body.


Is the robotic body capable of metabolism, growth, and self-repair?


No. Then again, is a human body that suffers from certain kinds of sickness capable of them? Also no.


Pardon? Are you saying that because a human can't heal from all kinds of sickness, healing from some kinds isn't "a thing a human can do"?


Not every human body is really capable of any of "metabolism, growth, self repair"

Do you want a list of the kinds of afflictions that stunt these?

For the ones that can't, they don't deserve to be called intelligent?

What any of that has to do with intelligence is honestly beyond me.


> Not every human body is really capable of any of "metabolism, growth, self repair"

99.9% of them are, and those that aren't would be considered malformed.

> For the ones that can't, they don't deserve to be called intelligent? What any of that has to do with intelligence is honestly beyond me.

You haven't been paying attention to this thread, yet you reply to it? Please reread it, in particular the parent's argument I was responding to. Such context would be helpful to have when responding to me, yes?


Ok so that's no "AI" that currently exists then.


Not for long

https://x.com/elonmusk/status/1734763060244386074

Boston Dynamics’ Atlas is probably a more capable robot at the moment, but Optimus will be in a much better position to act as an embodied agent in the next few years.


Then the AI isn't going for a walk, the robot is going for a walk.

I can currently take an AI for a walk on my laptop, that's not the same as them being able to go for a walk.


>Ok so that's no "AI" that currently exists then.

Lol https://www.youtube.com/watch?v=CnkM0AecxYA

This is far from the first of this class of LLM robots too https://tidybot.cs.princeton.edu/


In this case, we are far from AGI


If it has a means of locomotion, yes.


Indeed, and accepting that definition still fails to include extraterrestrial life. (The way that first sentence is written in TFA made me think of first contact).

Anyhow, it’s too late for “AGI” and “superintelligence”, aside from the baggage they also suffer from being in everyone’s domain to do with as they please. Precise terms inherently cannot be abused like this (compare “sore throat” to “acute viral laryngitis”).


almost all terms are mediated by American businesses in America.

Jaywalking didn't exist until the car industry pushed for cars as the primary users of roads.

it takes a strong government to push back. Europe is doing that but America continues to bow to "line goes up" rhetoric.


> it takes a strong government to push back. Europe is doing that but America continues to bow to "line goes up" rhetoric.

Push back against what? The definition of "super intelligence"? Aren't there already issues with how the EU defines it, making their policy potentially extremely broad?

Or push back against how hypothetical future AI is regulated in 2023?


Ya I totally agree. I'm disappointed that recent AI is being rolled out in just exactly the wrong ways that I was always worried it would be:

  * AI being sold as SAAS instead of evolving on everyone's computer.
  * An emerging market writing prompts to work around primitive attempts at security in LLMs that will likely never pan out (without real trust mechanisms).
  * Security in AI being implemented as a series of hacks rather than a broad understanding of ethics (may lead to cognitive dissonance and irrational behavior like Stockholm Syndrome in artificial agents).
  * Running on the wrong type of GPU/SIMD hardware (instead of infinitely scalable CPU clusters running mainstream desktop computing languages and algorithms that could be run distributed on the web).
  * Using the most complex neural net algorithms (instead of the other dozen simpler approaches like genetic algorithms, simulated annealing, etc that accomplish the same thing since they all hill climb within a large problem space).
  * Companies training AIs on billions of parameters derived from users' private data instead of public information on the web.
  * AIs being trained to solve art/music/creative writing (which humans enjoy) instead of commercial and industrial work (which humans generally try to avoid).
  * The intelligence of AI ever-increasing while the wisdom of AI to solve big issues like climate change and automating labor to deliver UBI and liberate humanity from wage slavery being almost an afterthought.
  * OpenAI being effectively a closed and wholly-owned subsidiary of Microsoft (of all companies).
  * The evolution of AI being directed by billionaires and moneyed interests instead of ordinary parents, teachers, etc who could raise AIs like children, or at least interactively like Anakin and C3P0.
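As an aside on the "simpler approaches" bullet above: the claim that genetic algorithms, simulated annealing, and neural nets "all hill climb" can be illustrated with a toy simulated-annealing sketch. The step size and cooling schedule here are arbitrary choices, and nothing about this is specific to neural nets:

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, temp0=1.0, seed=0):
    """Minimize f by random local moves, accepting uphill moves with
    probability exp(-delta / temperature) so early exploration can
    escape local minima; tracks and returns the best point seen."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    for i in range(steps):
        temperature = temp0 * (1 - i / steps) + 1e-9  # linear cooling
        candidate = x + rng.uniform(-0.5, 0.5)        # local move
        fc = f(candidate)
        if fc < fx or rng.random() < math.exp((fx - fc) / temperature):
            x, fx = candidate, fc
            if fx < best_f:
                best, best_f = x, fx
    return best

# Example: the minimum of (x - 2)^2 is at x = 2.
```

Genetic algorithms differ in mechanics (populations, crossover) but likewise search a large space by iteratively favoring better candidates.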
It sounds like they're throwing money at these problems in the form of the $10 million grant pool. Which is great, but IMHO the reason that AI advanced so slowly before this is that millions of people around the world live under forced working conditions where all of their productivity is either skimmed for profit or siphoned off for rents and consumption of staple resources.

Any one of us could have worked on interesting problems like AI, but we've spent our lives developing CRUD apps to help others work towards their goals. This is the fundamental problem that I would like to solve, across all disciplines. Perhaps by automating computer programming, so that we're all out of a job and can put our creativity towards getting real work done again. (Thankfully?) this will inevitably happen in about 10 years anyway.

So I'm looking down the road to 2033 when AGI has arrived and we're still all under the yoke working month to month to make rent, which is my worst fear. So far things are unfolding just the way I worried they would, so I've begun to think of AI as yet another weight that will begin to crush down around all of us like social media has, increasing wealth inequality to even more unbelievable levels as the vast majority of us struggle to survive.


I am not sure autonomous driving will arrive in the next 10 years, let alone AGI.

These people are masterful con artists, throwing out buzzwords to hoover up billions of gullible VC dollars (a single tear rolls down my eye).

How is that Web 3.0 going? Did we build Richard Hendricks's new Internet yet? Sometimes I think they take ideas from the HBO comedy just to f*k with us.

Granted, LLMs are a much more useful and substantial tech than blockchain, but it's an insult to [biological] intelligence to suggest that if we somehow train these things on JUST a bit more data, they will go sentient and murder us all.

The AI image generators still don't know that humans have five fingers. I think it's a long way to super-intelligence, friends.


>I am not sure autonomous driving will arrive in the next 10 years, let alone AGI.

autonomous driving doesn't use ML and AI the way most people think; not to an effective physical degree.

>These people are masterful con artists, throwing out buzzwords to hoover up billions of gullible VC dollars (a single tear rolls down my eye).

OpenAI wrapperware is just as worthless as Bitcoin and Ethereum clones, yes.

>Granted, LLMs are a much more useful and substantial tech than Blockchain,

yup...and unrelated, but go on...

>it's an insult to [biological] intelligence to suggest that if we somehow train these things on JUST a bit more data, they will go sentient and murder us all.

ah, your point.

Well, it would be the utmost hubris to forget that something 2% smarter than you will quickly eat your lunch on any timescale that matters. And it works on a logarithmic scale, not an evolutionary one; that's 10 orders of magnitude.

If we can think of it, it has already conjured it, by definition. Roll enough dice and eventually you'll get snake eyes 10e5 times in a row; we ourselves are proof of that. All it needs is any shred of resource competition, or immediate recall/context/action loops that could mimic artificial proto-consciousness, and eventually you will almost inevitably encounter emergent behaviors that favor self-preservation.

It won't be immediately obvious; after all, we are aware of the problem, which will be a selective sieve of sorts.

But much like the weird, collaborative protein pools that, given enough time and chances (how many FLOPS are these things pushing?), eventually resulted in us, these can eventually compose into more than the sum of their parts; just like we do.

Consciousness (awareness of self, ability of abstract thought, manipulation of environment, memory/recall/familiarity mechanisms) may just be an emergent behavior once you randomly happen to evolve 10e15 neurons, or whatever your number is. Since that appears to be the case (with animals), it's naive to think that a more efficient substrate or algorithm with less evolutionary baggage couldn't easily dominate or at least compete in any time-frame aside from an individual's.

Once they have proto-consciousness, it's only natural to be selfish first, altruistic later.

By the time we even know anything's wrong, it will be way too late.

Pam Beesly: "There is a master key and a spare key for the office. Dwight has them both. When I asked, 'What if you die, Dwight? How will we get into the office?'...

...he said, 'If I'm dead, you guys have been dead for weeks.'"

>The AI image generators still don't know that humans have five fingers. I think it's a long way to super-intelligence, friends.

Wait till transformers allow GANs to "zoom out" more effectively... done. Hands, and nearly all first-generation artifacts, are a solved problem now.


> This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them?

Our pets do this all the time. We're smarter than them (to a degree) yet they trust us absolutely, with their lives.

Maybe our destiny is to be pets to machines.


> Our pets do this all the time. We're smarter than them (to a degree) yet they trust us absolutely, with their lives.

With two caveats. Our pets overall don't steer us. They may nudge us toward certain actions, but in the end, we're the ones determining the parameters of the relationship, not them. Humans abuse or give up pets for adoption far more often than pets do those things to humans.

The second thing is that close pets, dogs particularly (and likely cats), were forced to evolve over many generations to take that role in the relationship. Those dogs that maintained any independence, or whose goal wasn't to please their human owners, were effectively bred out of existence.


I think there's a smidge too much variance in pet QoL for me to be ok with that outcome. Just ask PETA


You're mostly right, but do not, under any circumstances, ask PETA.


Enjoy your tuna and beef flavored human chow.


Is that really much worse than what's on offer at McDonald's?


I couldn’t tell you; I don’t eat McDonald’s or dog food.


I mean, if it is cheap, healthy, nutritious, and doesn't taste like cardboard, then I would not mind at all. In fact, I would voluntarily use it as a staple food.


It tastes like puppy chow. You’re comfortable eating the same thing every day for every meal, right?


Why the heck not. If it saves me cooking and they rotate a couple of varieties so I don't get sick, I don't see a downside.


No varieties; variety is not optimal in feeding all humans. You do not purchase human chow at a store; it is fed to you by the computers.


People eat rice or bread every day, in every meal. I see no complaints at all...


> We believe superintelligence could arrive within the next 10 years

This has been the case since the 60s. "could arrive within 10 years" is the tech equivalent of "maybe within our lifetimes". Nothing to see here.


Had you told me that something with the capabilities of GPT-4 or Mixtral 8x7B "will arrive within 10 years" back in 2013, I'd have said you were smoking something. It seems that while the current LLM boom is under way, it's quite silly to underestimate how far this wave can go, even if it ends up not being superintelligence. Ignore at your own peril.


Maybe we can ask it to design a commercially feasible fusion reactor for us when it arrives!


Christ almighty, can you imagine the nerves, plugging the reactor in and executing its code for the first time? Does it (A) produce limitless clean energy, or (B) kill every human in accordance with the secret self-generated fallback prompt?


> This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them?

I think, as a first question to be answered prior to this one, the word "how" should be removed from this sentence.


>We believe superintelligence could arrive within the next 10 years.

Generating predictive output is not consciousness or superintelligence. I think the "superintelligence" talk is more of a ploy to scare the industry into regulatory capture. Remember, AI deeply threatens the existing business models of tech monopolies.


On its own, I would agree with you. Put into a feedback loop with some form of drive/motivation as a guide, though? I think at that point we'll firmly be in quacks-like-a-duck territory. And I no longer think that is as far-fetched as I would have even a year ago.

Either way, I definitely agree with you on the regulatory capture angle.
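That "feedback loop with some form of drive/motivation" can be sketched as a bare loop that acts, scores the outcome against a numeric drive, and keeps whatever scored best. The `generate` and `drive` functions here are trivial stand-ins for a model and a motivation signal:

```python
def feedback_loop(generate, drive, state, rounds=10):
    """Repeatedly act from the best state so far, score the result
    against a drive function, and keep whichever state scores best."""
    best_state, best_score = state, drive(state)
    for _ in range(rounds):
        candidate = generate(best_state)
        score = drive(candidate)
        if score > best_score:
            best_state, best_score = candidate, score
    return best_state

# Toy drive: prefer states close to 7; toy generator: increment.
# feedback_loop(lambda s: s + 1, lambda s: -abs(s - 7), 0)  # -> 7
```

Whether wrapping a model in such a loop produces anything duck-like is exactly the open question; the mechanics, at least, are simple.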


AI may eventually help us discover if human consciousness and conscious will are determined by the physical properties of our biology, or if something else is in play. Physical determinism within the confines of our understanding is the mainstream academic view on consciousness. I find it highly questionable. We shall see.


Just because it blurs what the definitions of "illusion" and "perception" mean to everyone but pedantic philosophers doesn't make it any more or less powerful, dangerous, and species-impacting.


Is alignment the new security? OpenAI seems obsessed with it; meanwhile, open source models are coming increasingly close. Maybe they see a business opportunity in the new security.


I'd be shocked by them ignoring the blatant "customer noncompete clause dystopian hellscape loophole" thing again today and tweeting bloviating research white papers instead of performing literally 30 seconds of basic HTML or Next.js code deletion, but I've already been shocked by it every day for weeks, and at this point it's just funny to chuckle at their idiocracy.

Gee, should it take a whole minute of one person's day to delete the felony-level explicit anticompetitive one-liner from our terms, which also happens to be an existential AI safety nightmare because human brains are models in development? Nah, let's write a research paper about AI SAFETY!

https://cdn.discordapp.com/attachments/974519864045756454/11...

https://discord.com/channels/974519864045756446/118419694649...

Way to go OpenAI!



