Eric Schmidt plans to give A.I. researchers $125M (cnbc.com)
52 points by EMM_386 on Feb 17, 2022 | 51 comments


It's worth noting that Eric Schmidt was the chair of the (US) National Security Commission on AI, and their recommendation was that we give them[0] hundreds of billions of dollars[1], as soon as possible, to ensure the USA doesn't lose an AI arms race.

Full report is here: https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report...

[0] They don't come out and say "Give DeepMind billions of dollars", but in practice a huge amount of the funding will go to the big players with deep ties to the government, and this report helps to justify spending all that money. To be clear, I'm not saying this is a terrible idea, just that it helps to be aware that this whole thing is a massive funding grab in addition to probably being a reasonable idea.

[1] "The $40 billion we recommend to expand and democratize federal AI research and development (R&D) is a modest down payment on future breakthroughs. We will also need to build secure digital infrastructure across the nation, shared cloud computing access, and smart cities to truly leverage AI for the benefit of all Americans. We envision hundreds of billions in federal spending in the coming years."


Also, in 2020 Schmidt was in the final stages of becoming a citizen of Cyprus. Let's not assume all is done in the interest of the American people.

https://www.vox.com/recode/2020/11/9/21547055/eric-schmidt-g...


I would have loved to see a big grant like this go into creating a public computer of some kind. The amount of compute required to work on cutting edge deep learning is starting to go beyond what any lab can afford. A "public research cloud" or something similar would help equalize the playing field for academics. We already have national supercomputers used by academics for all kinds of disciplines, and I don't see why ML should be any different.


It would be a great move, so many researchers can't apply ideas for lack of resources. We need something like CERN.


I'm guessing almost all of it will be going into deep learning. In an interview with Sean Carroll, Gary Marcus said that >99% of AI funding gets awarded to DL-based projects [0].

Has anyone tried searching for new basic operations, below the level of neural networks? We've been using these methods for years, and I doubt the first major breakthrough in ML is the best method possible.

Consider the extreme case of searching over all mathematical operations to see if something really novel can be discovered.

How feasible would this be?

[0] https://www.preposterousuniverse.com/podcast/2022/02/14/184-...
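A toy sketch of what such a search over basic operations could look like (purely illustrative, not any published method — the primitive set, tree depth, and random-search strategy are all arbitrary choices here): randomly compose primitive binary operations into small expression trees and keep whichever tree best fits a target function on sampled points.

```python
# Toy sketch of "searching over basic operations" (illustrative only):
# randomly compose primitive binary ops into small expression trees and
# keep whichever tree best fits a target function on sampled points.
import math
import random

PRIMITIVES = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "max": lambda a, b: max(a, b),
}

def random_program(depth=3):
    # A leaf is one of the inputs; an internal node is a primitive op.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", "y"])
    op = random.choice(list(PRIMITIVES))
    return (op, random_program(depth - 1), random_program(depth - 1))

def evaluate(prog, x, y):
    if prog == "x":
        return x
    if prog == "y":
        return y
    op, left, right = prog
    return PRIMITIVES[op](evaluate(left, x, y), evaluate(right, x, y))

def search(target, trials=5000, seed=0):
    # Pure random search: keep the lowest-squared-error program seen.
    random.seed(seed)
    points = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(32)]
    best, best_err = None, math.inf
    for _ in range(trials):
        prog = random_program()
        err = sum((evaluate(prog, x, y) - target(x, y)) ** 2 for x, y in points)
        if err < best_err:
            best, best_err = prog, err
    return best, best_err

# Try to rediscover f(x, y) = x*y + x from the primitive set.
best_prog, best_err = search(lambda x, y: x * y + x)
print(best_prog, best_err)
```

Even this tiny version shows the obstacle: the space of compositions explodes combinatorially long before anything "really novel" appears, which is why real efforts in this direction lean on evolutionary search and heavy compute rather than brute force.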


The problem is about the structure, not the math operations. We know for a fact that Turing-completeness is enough for artificially intelligent machines (as these machines are mechanical, not quantum).

Deep learning is about the structure for learning representations, and reinforcement learning is about the curriculum for learning actions in a world space. None of this requires a "breakthrough in math operations".

IMHO we already have enough operations in our toolbox to build artificially intelligent machines; we just don't know the right "structure" yet.


And yet there are basic problems with our method of representation. If you take error surfaces and project them into 3 dimensions ... they look smooth. And then you look at the polynomials (because vector multiplication yields polynomials) that we use to approximate those surfaces ... they look anything but smooth. They look like the scenes you got in Doom when you went out of bounds in a level. Very, very different from nature.

It's a miracle it works at all. Making ML models is like building smooth, comfortable sofas by arranging spiked rocks. Arrange the right rocks juuuuuust right and you can get any shape, and indeed you can, but the shapes we're trying to make would be a hell of a lot easier to build out of pillows. The problem really is the basic building blocks.

Polynomials are notorious for two things: 1) they're "spikey". Making a fit more accurate means adding ever higher-frequency components that do their own spiking; they fix something in one place and screw lots of stuff up everywhere else. 2) At their ends they almost always shoot off to infinity. Technically going to zero is possible, but you just never see it.

And neural networks have challenges that humans don't seem to have: 1) they have sudden, very "weird" ideas. All koalas are koalas, except if you modify pixel 381 down 10%, then it's an elephant. 2) They make ridiculous predictions outside of their training range. You might say "of course they do", but humans don't, and neither do animals ... what does a reasonable human/animal do when confronted with an impossible situation? They respond the same way they'd respond to the closest reasonable situation they know. A human seeing a tidal wave come for him runs away from the wave. That may be stupid if he has little chance of outrunning it, but a neural network would just stand there, start shaking, and drop uncontrolled to the floor.

These problems seem related to using polynomials (as opposed to, say, sum-of-10-gaussians) for prediction. Those error surfaces we started with ... they don't quite look like 10 gaussians either ... but they kind of look a lot closer to those gaussians than to polynomials. They look like smooth, slowly sloping curves.
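The extrapolation point can be illustrated numerically (a toy sketch, not a statement about real networks — the polynomial degree, the bump count, and the tanh target are all arbitrary choices here): fit the same training data with a high-degree polynomial and with a fixed sum of Gaussian bumps, then compare their predictions far outside the training range.

```python
# Toy illustration of extrapolation behavior: fit noisy samples of tanh
# on [-2, 2] with (a) a degree-9 polynomial and (b) a weighted sum of
# 10 Gaussian bumps, then evaluate both far outside the training range.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-2.0, 2.0, 40)
y_train = np.tanh(x_train) + 0.05 * rng.normal(size=x_train.size)

# (a) Degree-9 polynomial least-squares fit.
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# (b) Sum of 10 Gaussian bumps with fixed centers; only weights are fit.
centers = np.linspace(-2.0, 2.0, 10)
def gaussian_features(x):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2))
weights, *_ = np.linalg.lstsq(gaussian_features(x_train), y_train, rcond=None)

# Evaluate both models well outside the training range.
x_far = np.array([10.0])
poly_far = float(poly(x_far)[0])
gauss_far = float(gaussian_features(x_far) @ weights)

# The polynomial shoots off toward infinity; the Gaussian sum decays
# back toward zero, like the "closest reasonable situation" behavior.
print(poly_far, gauss_far)
```

Both models fit the training interval about equally well; the difference only shows up outside it, which is exactly the regime the comment is talking about.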


What if it just takes 100B weights to show interesting behavior? Maybe it's not the method that is bad, but the problems that are hard. This recent paper shows how at 10^11 weights the network starts behaving much better, as if going through a phase change.

https://twitter.com/AnthropicAI/status/1494352855972540418


Recently there was an article from Cornell about using arbitrary physical systems to train neural nets. I think a hybrid physical system would be more likely to lead to a breakthrough: https://news.cornell.edu/stories/2022/01/physical-systems-pe...


That's what the AGI conferences have been exploring for the past two decades:

http://agi-conference.org


Original info here: https://www.schmidtfutures.com/schmidt-futures-launches-ai20...

Looks like more details on applications, nominations, etc. are still to come.


Someone’s really worried about Roko’s basilisk.


AI is clearly a national security issue, and I think big funding for both defense and science purposes needs to happen.

But wouldn't it be better for the govt to act like a VC and fund tons of little startups, crowdsourcing new ideas, instead of funding big tech (which already has tons of cash from existing revenue streams) or AI shops that are already VC-backed?


Does AI need more funding? Every government and the largest companies in the world are focused on it.


How does one apply for this funding?


Step 1: Be a billion dollar corporation.


I know I am a little drunk and that makes my inner socialist pop out, but is there any particular reason why we don't tax Schmidt and then use the 125M to invest in AI, plus other things, with the outcomes being released openly?

It's time for that conversation.


Because it would end up getting spent like the rest of the discretionary budget, so around $65M would go to the military, and $4M would go to science, of which about $80K would be allocated to AI research.


Ok. And then that's a conversation about our spending priorities, which is another conversation "we should" be having. And by that I mean legally requiring all media to put a pie chart of each politician's spending plans above their head.

I am now drunk enough to mistake HN for twitter so should stop.


It's a cute idea, but I see two obvious issues. First, spending priorities tend to be pretty similar among politicians, with each differing only in small details, so approximately all of the pie charts would look more or less like this:

https://en.wikipedia.org/wiki/United_States_federal_budget#/...

(Only about a quarter of the pie is even discretionary! The rest is on budgetary auto-pilot.)

Second, how many people are well-informed enough to have an opinion on how much of the budget should be spent on AI research? How many have even thought about the question? There are a kajillion such questions, and it would be unreasonable for most citizens to put in the time to consider even a small fraction of them, considering how little a single person's vote affects them.

https://en.wikipedia.org/wiki/Rational_ignorance


> how many people are well-informed enough to have an opinion on how much of the budget should be spent on AI research?

Unfortunately, there is no super-human, super-political authority who can give us answers. We each get to decide who we trust and to what degree. You are free to trust Eric Schmidt to whatever degree; I might vote otherwise. Certainly nobody gets to assert authority over others by simply claiming to be an expert.


I agree, of course! But I'm trying to get at a different question: what sort of processes can be expected to give better or worse answers to the resource allocation problem?

This probably can't be answered at HN comment length. Or at book length, for that matter. But I will say that I think letting individual rich people unilaterally jump on causes that they believe to be important and neglected seems like the kind of thing that will probably give better net outcomes on the margin under systems like the one we currently have.


There's nothing magical about people who have lots of money that makes them any better than you - don't let anyone convince you otherwise. It is a common foundation of dictators to tell you how badly you need them and how one strong person will save things. And as ordinary mortals, who put on their pants one leg at a time like you do, they are just as corrupt, stupid, and biased as everyone else, including politicians, but we lack transparency or any check on their power. That's why we have democratic government: to provide a way to manage power. It belongs to us. Some powerful people would love for you to hand them that power.

People are naturally narrow-minded, seeing their own experiences and interests as universal and essential and others' as fringe and optional - or non-existent, because they can't possibly understand all the different experiences of the world. Wealthy people serve their own interests (which are very well served already, I'll point out). It's like the old supply-side 'economics' claim from the 1980s: 'If we cut taxes for wealthy people, they will do good things for us.' It turns out that it made them wealthier, and that was it.

That's why democracy is so important and functions so well (relative to other forms of government): everyone gets a vote, and therefore everyone's interests are considered - even the ones you haven't imagined, don't understand, or don't care about. If you want a reliable way to see who will get screwed, just look for who doesn't have a seat at the table.

We do have systems for allocating funds, which include experts and people who study the issues, and public debate. What evidence is there that it works so poorly or could be so easily improved? I certainly don't agree with many decisions, but as I have to share the decisions with 330 million other people, that is understandable. Certainly I see what seem like more objective imperfections, but underfunding AI isn't one of them.

> letting individual rich people ...

Nobody is stopping anybody.


> Nobody is stopping anybody.

This thread was started by someone suggesting "we" tax Schmidt the exact 125M he decided to donate and then allocate the money as we see fit. How is that not stopping anybody?


> This thread was started by someone suggesting "we" tax Schmidt the exact 125M he decided to donate and then allocate the money as we see fit. How is that not stopping anybody?

Unless you think they mean to waylay the courier carrying the check, taxing Schmidt doesn't stop them from making donations.


I would spend more of it on healthcare and education. Maybe we should take a vote?


> Maybe we should take a vote?

How do you think we decided on the current allocation?


That's my point. We should vote, we did vote, and we legitimately came to the current allocation. If you disagree, feel free to persuade us to vote differently, but don't tell me I have to give up my vote and do what you want.


Whatever argument you're using right now is exactly in support of the current situation.

- Votes (indirectly) set the current allocation

- Votes (indirectly) set the current tax rate

- Votes (indirectly) mean that Schmidt has his current money pile

- Thus, this argument is perfectly in line with Schmidt choosing to take 125M of that money pile and give it to causes he cares about.

- This argument is not in line with raising his tax rate to fund whatever the voting allocation is.


> Whatever argument you're using right now is exactly in support of the current situation.

It's in support of the current system of decision-making, not the current state of the law (which, as I said, I may disagree with). Part of that system is you and I discussing it here and changing it.


Can't even hire a web dev with that.


> It's time for that conversation.

The idea of taxing rich people is an ongoing conversation. I hear it discussed literally every day.


People talk about it quite a bit. It hasn't amounted to anything material in my lifetime. If anything just the opposite.

I don't think effective tax rates for the uber wealthy have ever been lower.


I would also like to point out the differences between government investment and private investment in general, particularly in risk tolerance and moonshot projects.

I view private investment as a counterbalance that fills gaps the slow, grinding machine of government misses, or is too entrenched to consider alternative approaches for. An example here is SpaceX's disruption of the aerospace industry.


> I am a little drunk and that makes my inner socialist pop out, but is there any particular reason why we don't tax Schmidt and then use the 125M to invest in AI, plus other things, with the outcomes being released openly?

There are two reasons:

1) Mr. Schmidt already pays taxes and the US is not an authoritarian socialist regime that creates arbitrary, retroactive new taxes to confiscate the money of its citizens as soon as they decide to make a donation.

I think that sort of system has been repeatedly shown to be a failure, but if it's what you prefer, try N. Korea. Light versions can also be found in other ethno-states founded in the 20th century that put words like "socialist" and "people's" in the names of their institutions.

2) We actually want research results and 125M in a directed investment goes a lot further than 125M in a federal budget with half a dozen intermediaries and overpaid contractors in the districts of enough legislators to swing the vote. This is part of why SpaceX is able to accomplish so much more, more quickly and more cheaply than NASA can.

The government is absolutely crucial for some things, but preventing people from making any decisions about what they want to donate to or spend their money on is a disaster.

A more reasonable solution for what it sounds like you want would be to decrease the number of tax loopholes and exemptions, particularly those only the wealthy can take advantage of.


Oh I'm sure we can trust the guy who runs the company that fired a bunch of people for publishing research about the harms of AI to also be the person to fund ethical AI research.


> fired a bunch of people for publishing research about the harms of AI

That paper was activism dressed up as science. It even tried to coin a derogatory term for language models. The authors were full of vitriol on forums against many respected researchers. After calling those researchers out for perceived ethical problems, they refused to have a dialogue so as not to "offer a platform" to their opponents. I've never seen anything like it in 10 years of following the field. It was sad to see people trying to have a sincere conversation get shut down.

If there is any good outcome from that scandal, it is that big-model papers now devote 50% of their length to harm analysis. A bunch of better papers on harm reduction came out in the last year. The authors of the scandal paper moved on to exploit their newly gained notoriety, so it wasn't necessarily a bad career step for them.


> runs the company

Schmidt doesn’t run Google by any definition of the word run. He also has a great track record of funding basic science (Schmidt Science Fellows).


While we're on his track records, his documented privacy stance is not great to say the least.

https://en.wikipedia.org/wiki/Eric_Schmidt#Privacy


> Oh I'm sure we can trust the guy who runs the company

He doesn't.

> that fired a bunch of people for publishing research about the harms of AI

They didn't.

They fired one person for pointing out internal race and gender issues at the company, and particularly management dishonesty in handling those issues. Part of that firing was a constructive-termination campaign - an effort to make the workplace hostile enough that she would quit - that involved, among other things, subjecting her research to review processes not used for other researchers at Google in the same area. That came to a head around a paper on the harms of AI which, sure, it was in the company's interest to suppress, but it only became an issue because of the process they had already focused on her to force her out.

Then they fired a number of other people for things like pointing out holes in the narrative around firing the first person, including pointing out the unusual research oversight process.

In the process, a number of people also quit the company over those events.


> that fired a bunch of people for publishing research about the harms of AI

You're either extremely misinformed or intentionally lying, but either way, this is a flat-out incorrect statement on multiple levels. You can just google "Timnit Gebru" to find all the details of her dismissal. And Eric Schmidt doesn't run Google and had literally zero to do with Timnit and others getting fired (which, to be clear, was entirely because of their response to Google's response to their paper, not because of the paper itself).


I doubt we taxed him anywhere near his fair share. That effective tax rates decline with wealth floors me. It also bothers me how cut-throat capitalists grow into philanthropists as they become old and vulnerable. Years of tax dodging, suppressing wages, lobbying, and engaging in veiled class warfare to gain every bit of wealth possible. And then they deign to give a fraction of what was owed to the people (for roads, the internet, public safety, etc.) back in bits and spurts, to be celebrated as generous. It makes me laugh.

At least your run-of-the-mill Scrooge has the decency to be consistent, to be hated.

While I say all that, I am also glad he's donating.


I have contradictory feelings about this. It's as if the system instilled anti-taxation, anti-regulation thoughts in anyone participating in a small-sized endeavour, thus leading most people to empathize with corporations.

Having dealt with the exasperations of trying to start a small business in California (years ago), the burden of bureaucracy seemed nowhere near proportional to the short-term aspirations ... resulting in a quite negative influence on the initial intentions of the projects to be performed.

I guess my wish is that they'd make it at least trivial to start any entrepreneurial activity (well, maybe it boils down to which state your business is in), until it really grows into something (morally?) taxable.


How much is a fair share?


From what I've read the uber wealthy pay an effective tax rate of ~10%. US voters and legislators have set our top tax brackets at ~40%.

What is a fair tax rate? I don't know. Less is more IMO.

But by the terms we've all agreed on, the uber wealthy skimp by ~3/4.


Money you need to spend is different from money you can save or invest. That's why they can do it.


Most Americans don't have money to save or invest. Capital gains functionally rewrites tax brackets.

There's lots of ways to make sure that people so rich they don't have to wage earn pay their fair share. We choose not to.


Another important consideration is that the complexity of a tax system is itself a regressive tax. If hiring an accountant can decrease the amount of tax you need to pay, the effective tax rate is lower for those who can afford an accountant than for those who can't.


The free market is usually better at allocating resources than the government.

Plus, the government already has much more than $125M it could use to fund AI research, why do we need to take away Schmidt's funds to do so?


So I wrote this for another reason, but it is pretty appropriate here:

Take a billionaire. Any one. At some point they will inevitably have made a public pronouncement something like: "Why do we bother with these useless politicians? I am a self-made man. I can make much better investment decisions than those guys. Let me keep all my money and I will keep making better investment decisions."

Guess what - Maybe. Err, Mostly No.

Look, firstly, you are not a self-made man. You are an investment from your society, which built a school, paid the teachers, injected you with vaccines, and fed your growing body and mind, and after twenty years of that sent you and millions like you out to make money and give some of it back in tax to pay for repairs to the roads and schools and hospitals.

And even after that you did not do all the work at your company yourself - you hired people (at a presumably fair wage) and arranged things so that more of the profits flowed to you than to them. This was all legal. (I hope!)

But most especially, you were not the only visionary in the desert. Every startup that succeeds has competitors - there is even a Silicon Valley saying: "competition is God's way of saying there is money to be made". You gave a great pitch - but it fell on fertile ground. Investors realised this was a good opportunity and pushed cash into your hands; engineers and salespeople realised this was the industry they wanted and shoved CVs into your hands.

You did a great job as an entrepreneur - well done. But you were not the only one, you are not an island, and you are fortunate to have benefited from a specific corporate structure. Imagine how rich you would be if you had started a co-operative owned solely by employees, or a Vanguard-style investment fund owned by its shareholders (unlike most hedge funds). You would still be rich - just normal rich.

Of course, my dear self-made billionaire, you probably are a better investor than our politicians, smarter too. But that is not the point. We are asking them to make investments that dwarf even your funds - the President of the USA hands out a $6 trillion budget, and that's just the federal bit. And we want a bit more accountability and a bit more responsiveness from the guy investing so much of our hard-earned money.

If you want to invest that (and let's agree you would probably do a better job), you need to persuade us (i.e. your new investors) that you are the right person to spend it. So is your name on the ballot? No? OK, so you are not, in our terms, handing out a prospectus. In that case you don't get to invest on our behalf. How it works is that the people who invest our tax money get elected; that way we can remove them when we don't like it. (It's not a great system, I have to admit, but no better alternative seems to exist.)

So give us back (most?) of the money you made. We will use it to build schools for the next entrepreneurs, and also lots of other stuff we will all massively argue about.

In short, you are an investment in our massive VC fund, and you have paid off handsomely. Now we want our exit please. Death and taxes are the only two promises we ever make.

Having said that, we would be very interested in your manifesto (sorry, prospectus). Can you do anything about the potholes round my way?


Citation needed.



