Hacker News | new | past | comments | ask | show | jobs | submit | disgruntledphd2's comments

The article's argument is fine, but it takes as an axiom that AI is already better at much cognitive work. I haven't found that to be true in the tasks I've looked at.

It's certainly cheaper and faster, so there's potential for it to unlock more demand, but I'm sceptical that current models will replace a large fraction of knowledge work.


> Consumers have voted over and over and over and they are very clear: The vast majority will choose cheap vs good

Snowflake customers have definitely not made this choice, as Snowflake is good but very, very expensive. They're basically the Oracle of cloud.


Isn’t Oracle the oracle of cloud?

And this still fits: if Snowflake is feeling pressure from lower-cost entrants, or demand from investors for more profit, then it would track.

I don't necessarily disagree that it's a risk, but people assume that companies optimize for product stickiness when in fact they don't; most companies optimize for investor relationships.


> It's rather obvious that this AI thing is a transformative event in world history, perhaps more critical than the advent of the internet. Take a look at traffic to established sites such as Stack Overflow to get a glimpse of the radical impact. Even in social media we started to see the dead internet theory put to practice in real time.

It's worth noting that SO was declining well before ChatGPT launched. It seems more likely that SO's decline was driven by Google ranking changes to prioritise websites that served Google ads. Certainly I remember having to go down a few results to get SO results for a while, even when the top results were just copypasta from SO.


> It's worth noting that SO was declining well before ChatGPT launched. It seems more likely that SO's decline was driven by Google ranking changes to prioritise websites that served Google ads.

I don't think that's it. SO was the go-to page for troubleshooting, and its traffic wasn't exactly originating from web search. Also, the LLM-correlated drop in traffic is reported by search engines themselves. Stack Overflow just so happens to be a specialized service with a very specialized audience whose demand is perfectly dominated by LLM chatbots.


I mean, the first decline happened well before ChatGPT, so it can't just be that.

Speaking as an Irish citizen, I'd be OK with messing up the US at the cost of our economy. I think that you underestimate the resolve of Europeans on this.

It's profoundly depressing, but such is the world we live in now.


You're obviously entitled to your opinion, but if you think it generalises, you're simply wrong. There is extremely strong and consistent polling across the EU in general, and Ireland in particular, showing that while the public supports Ukraine and moderate defence spending, it does not support direct military involvement or major escalation, and has zero tolerance for armed conflict, severe economic self-harm, or escalation against major powers.

In Ireland specifically, the cost of living is the key political issue at this time for Fianna Fáil and Fine Gael. Neither has any mandate or capacity for military or economic conflict with the United States - our recent diplomatic efforts, in spite of the Greenland and Iranian crises, should highlight that.

Ireland is more dependent than ever before on the US. We would veto any EU efforts against them.


> Ireland is more dependent than ever before on the US. We would veto any EU efforts against them.

Unlikely. We'd most likely hum and haw for a while and then go along with it.

Like, ultimately we need both EU membership and US investment to maintain the economy we have at the moment. Losing one or the other would be really bad, but ultimately the only one we can really control is EU membership, and I'm relatively certain that the majority of Irish people, if forced to (and not one second before), would choose the EU.

> There is extremely strong and consistent polling across the EU in general, and Ireland in particular, showing that while the public supports Ukraine and moderate defence spending, it does not support direct military involvement or major escalation, and has zero tolerance for armed conflict, severe economic self-harm, or escalation against major powers.

Can you point to this polling please? I'm definitely not in favour of more wars, but the issue is that the choice may not be up to you or me, rather it will be driven by countries starting said wars (cough cough US threatening to invade Greenland).

> In Ireland specifically, the cost of living is the key political issue at this time for Fianna Fáil and Fine Gael. Neither has any mandate or capacity for military or economic conflict with the United States - our recent diplomatic efforts, in spite of the Greenland and Iranian crises, should highlight that.

While I do agree with your core point around cost of living, honestly, the likelihood of any political party in Ireland (but particularly FFG) doing anything about the cost of living is ludicrously small, depressingly.

The two biggest drivers of inflation in Ireland (and the west more generally) are energy costs and land costs. If you ran on reducing land costs you'd become a pariah in Ireland (again, really unfortunately).

And our planning system makes it unlikely (again, depressingly) that any work will be done on grid modernisation or building energy infrastructure. I mean, I would love to see this happen (I'd even vote for FG or the Shinners to accomplish this), but I find it extremely unlikely.


Literally perfectly relevant xkcd: https://xkcd.com/2501/

Brilliant hahaha

Because the AI act was mostly written to address issues with ML products and services. It was mostly done before ChatGPT happened, so all the foundation model stuff got shoehorned in.

Speaking as someone who's been doing stats and ML for a while now, the AI act is pretty good. The compliance burden falls mostly on the companies big enough to handle it.

The foundation model parts are stupid though.


>Because the AI act was mostly written to address issues with ML products and services. It was mostly done before ChatGPT happened, so all the foundation model stuff got shoehorned in.

It's not an excuse. Anybody with half a working brain should've been able to tell that this was going to happen. You can't regulate a field in its infancy and expect it to ever function.

>The compliance burden falls mostly on the companies big enough to handle it.

You mean it falls on anyone that tries to compete with a model. There's a random 10^25 FLOPs compute rule in there. The B300 does 2500-3750 TFLOPS at fp16. 200 of these can hit that compute number in 6 months, which means that in a few years' time pretty much every model is going to hit that.

And if somebody figures out fp8 training then it would only take 10 of these GPUs to hit it in 6 months.
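The 200-GPU arithmetic is easy to sanity-check. A minimal sketch, assuming the peak dense fp16 figure quoted above and 100% utilisation (which is generous; real training runs sit well below peak):

```python
# Back-of-envelope check of the AI Act's 10^25 FLOP compute threshold.
# The B300 figure (3.75e15 FLOP/s dense fp16) is taken from the comment
# above; peak throughput and full utilisation are optimistic assumptions.

THRESHOLD = 1e25                     # the Act's compute cutoff, in FLOPs
B300_FP16 = 3.75e15                  # assumed peak FLOP/s per GPU
SECONDS_6_MONTHS = 182 * 24 * 3600   # ~1.57e7 seconds

def total_flops(n_gpus, flops_per_gpu, seconds=SECONDS_6_MONTHS):
    """Total training compute for a cluster running flat out."""
    return n_gpus * flops_per_gpu * seconds

print(f"{total_flops(200, B300_FP16):.2e}")     # ~1.18e+25, over the threshold
print(f"{total_flops(10, 2 * B300_FP16):.2e}")  # fp8 at 2x throughput: ~1.18e+24
```

At those numbers the 200-GPU claim does clear 10^25, though even granting fp8 a 2x speedup, ten GPUs over six months land around 10^24, an order of magnitude short of the threshold.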

The copyright rule and having to disclose what was trained on also means that it will be impossible to have enough training data for an EU model. And this even applies to people that make the model free and open weights.

I don't see how it is possible for any European AI model to compete. Even if these restrictions were lifted it would still push away investors because of the increased risk of stupid regulation.


> It's not an excuse. Anybody with half a working brain should've been able to tell that this was going to happen. You can't regulate a field in its infancy and expect it to ever function.

As I said, the core of the AI act was written about supervised ML, not generative ML, as generative ML wasn't as big a deal pre-ChatGPT.

> You mean it falls on anyone that tries to compete with a model. There's a random 10^25 FLOPs compute rule in there. The B300 does 2500-3750 TFLOPS at fp16. 200 of these can hit that compute number in 6 months, which means that in a few years' time pretty much every model is going to hit that.

As I also said, the foundation model stuff (including this flops thing) is incredibly stupid. I agree with you on this, but my point is that the core of the AI act was supposed to cover the ML systems built since approx 2010.

> The copyright rule and having to disclose what was trained on also means that it will be impossible to have enough training data for an EU model. And this even applies to people that make the model free and open weights.

Again, you're talking about generative stuff (makes sense given the absurdly misleading name now) whereas I'm talking about the original AI act, which I read well before ChatGPT happened.

The training data thing is a tradeoff, like copyright is far too invasive (IMO) and it's good to be able to use this information for other purposes. However, I personally would be super worried about an ML team that couldn't tell me what data went into their model. Like, the data is core to all ML/AI approaches so that lack of understanding would make me very sceptical of any performance claims.

Let's be real, the AI companies don't want to say what's in their models because of the rampant copyright infringement, not because of any technical incapability.


> If anyone thinks Cuba is better off in any metric now than they would have been in that alternate reality, I’d love to hear why.

I mean, pre-Castro Cuba was basically a playground for the US rich. Like, the whole revolution was about kicking those people out.

Personally, I think that's morally justified, but I don't agree that what the US has done to them since then is morally justified. Obviously people differ on their opinions of this stuff, but collective punishment (which is what the US embargoes are) is generally regarded as a war crime.


> Obviously people differ on their opinions of this stuff, but collective punishment (which is what the US embargoes are) is generally regarded as a war crime

The definitions really keep mutating on the left don’t they. Economic sanctions are a “war crime,” “silence is violence,” etc.


> In 2019, the Assembly of States Parties to the Rome Statute adopted an amendment to the definition of war crimes applicable in NIAC detailed in article 8(2)(e). The new article 8(2)(e)(xix) prohibits the intentional use of starvation of civilians as a method of warfare by depriving them of objects indispensable to their survival, including the deliberate prevention of relief.

Fuel for cooking food and providing heat is necessary for survival; deliberate prevention of this aid from reaching Cuba is a war crime.


> The definitions really keep mutating on the left don’t they. Economic sanctions are a “war crime,” “silence is violence,” etc.

You may have me confused with someone else, as I have never said anything about silence is violence.

Economic sanctions are definitely a method of waging war. The loss falls mostly on the ordinary people of the country, and as such they are collective punishment and war crimes.

Now, is it better than bombing the people back to the Stone Age? Definitely in the short term, but one look at what happened to Iraq after ten years of sanctions (everyone who could left) and the impact this had on post-2003 reconstruction would seem to suggest that it's the difference between acute and chronic illnesses.


This is why starting a war with Iran right now was a bad idea.

Yeah, I emotionally disagreed with this article, because I like the Culture, mostly.

That being said, it's possible that AI is helping here.

Mind you, given the sycophancy of current models, it's also possible that commanders are making worse decisions based on the results of these AI outputs.

Finally, if the US manage to get what they want without completely destroying the balance of power in the Middle East or sending oil to 150 a barrel, then I'd be much more likely to accept the author's speculation.


I think it's safe to say that whatever products the military is using are vastly different from what's available to and designed for everyday consumers. DARPA may be past its heyday, and certainly the private sector has caught up in a lot of ways, but I don't doubt for a second that they have been investing heavily in weaponizing AI for some time.


> I think they should be allowed for cultural reasons but only if cut by hand like we did when I was a kid :)

Me too! That was a lot of work, and surprisingly hard to stack.


And turning it would cut your fingers to shreds! But it was great if the weather was fine.


Thank you both for the imagery here - quite beautiful, in its way.

This has made me remember having to go out to the coal shed and fill up a brass bucket and then come back in all covered in coal dust.

I've not thought about That Smell in years!


Did you have one of those ubiquitous brass boxes beside the hearth?


No, we had some antique brass bucket thing that I'd invariably have to drag in, accompanied by complaints that I was doing so, because obviously I'd put way too much in, so I didn't have to go out later to get more...

Which it almost never was :/

