Hacker News new | past | comments | ask | show | jobs | submit | tencentshill's comments

Those startups eventually need legions of fools who can easily be parted from their money.

Newpipe impedes revenue for an already free video hosting service. Google has less than zero obligation to them.

I remember when Microsoft got in trouble for bundling a web browser with the OS.

Sure, they have no obligation, but the way you describe Newpipe to paint it as "obstructive" feels off to me.

When you offer a free service, by definition of it being free, you can't hold consumers of that service accountable for not furthering your revenue. They are impeding revenue only if the service isn't actually free (or is free only under false pretenses), which dismantles your first sentence here.


How did the US military get low on ammo? Ukraine surely isn't using that much.

Missiles are expensive, especially US ones, and other military equipment is even more so. A THAAD system costs between $1 and $2 billion, and some of them were obliterated by Iranian drones. Drone warfare has evolved so fast that they may already be outdated.

It also doesn't necessarily mean they're down to the equivalent of breaking out the T-55s like Russia did; more that they've seen an opportunity for more money and took it.

I had this thought as well. But oil prices are set globally on the exchanges, even oil that never leaves the US is affected.

Then we need regulation to make sure every ebike and scooter sold is safe to charge without exploding. No cheap batteries.

If they control the mass media narrative as alleged, they're not doing a very good job of it.

NIMBYs probably don't want a 24/7 Tesla charging station right next to their homes. It's like living at a gas station, except people fill up for an hour or more at a time. Seems like a zoning failure; it should never have been approved.

Also, gas stations have bathrooms.

Note this only affects the original 2013 "VCR" hardware. Newer revisions and variants are unaffected.

They're pretty common and cheap on the used market, though. I bought mine from a thrift store for $30, and the console itself regularly goes for ~$50 on eBay.

“I did it for him first and the vote second,” the president said, adding: “But it was a close second.”

He really doesn't have a filter, does he?


So does this cut out Intel/x86 from all the massive new datacenter buildouts entirely? They've already lost Apple as a customer and are not competitive in the consumer space. I don't see how they can realistically grow at all with x86.

Even Apple hardware looks inexpensive compared to Nvidia's huge premium. And never mind the order backlog.

x86 vendors and Apple already sell CPUs with integrated memory and high-bandwidth interconnects. And I bet Intel's bean-counter board will eventually wake up and allow engineering to make one, too.

But competition is good for the market.


Even with those advantages, Apple can't even sell datacenter hardware to themselves: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...

"And as the initial crop of Apple Intelligence features hasn’t been used as much as Apple expected"

Nah, as so-called "analysts" expected. The no-effort crybabies deriding Apple for being "behind on AI" have turned out to be, shocker of shockers, wrong. Anyone who even put a few minutes of thought into Apple's business realized that it (and its customers) didn't stand to benefit much from "AI."

It's sad that Apple hurried to pander to these clowns, only to be derided further... and to encounter the appropriate apathy from customers, who were and are doing just fine without asinine "AI" gimmicks.


Apple wouldn't have built the server capacity if they thought it wouldn't be used. It's indeed their own analysis.

In any case, that article is also looking forward to next-gen models like the sparse Gemini model Google trained for Siri. Apple Silicon simply isn't powerful enough to compete for that inference.


Apple went from selling high-end PCs to being a low-end AI provider by blocking Nvidia on their platform.

Worse, since they no longer care about the workstation market: there are pluggable cards, but no update to the cheese grater, which the Studio is not comparable to.

They also dropped the ball on the data center, having left OS X Server behind.

Those markets are now served by Windows or Linux based configurations.


>are not competitive in the consumer space

AFAIK they still dominate on clock rate, which surprised me when I was doing some back-of-the-envelope calculations regarding core counts.

I felt my 8-core i9-9900K was inadequate, so I shopped around for something from AMD, and IIRC the core-count multiplier of the chip I found was dominated by the clock-rate multiplier, so it's possible that at full utilization my i9 is still about the best I can get at the price.

Not sure if I’m the typical consumer in this case however.


Your 9900K at 5GHz does its work slower than a Ryzen 9800X3D at 5GHz. A lot slower (1700 single-core Geekbench vs. 3300, and just about any benchmark will tell the same story). Clock speed alone doesn't mean anything.
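The Geekbench numbers above make this easy to quantify. A minimal sketch (assuming both chips run near their ~5GHz boost clocks, and using the single-core scores quoted in this thread, not an authoritative benchmark run):

```python
# Crude per-clock comparison from the single-core Geekbench scores
# quoted above (1700 vs 3300), with both parts at roughly 5 GHz.
def per_ghz(score: float, clock_ghz: float) -> float:
    """Benchmark score per GHz: a rough proxy for work done per cycle."""
    return score / clock_ghz

i9_9900k = per_ghz(1700, 5.0)    # ~340 points per GHz
r7_9800x3d = per_ghz(3300, 5.0)  # ~660 points per GHz

# At the same clock, the newer core does almost twice the work per cycle.
print(f"per-clock advantage: {r7_9800x3d / i9_9900k:.2f}x")  # prints 1.94x
```

In other words, the entire gap at equal clocks has to come from work done per cycle, not frequency.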

From the newegg listing:

>8 Cores and 16 processing threads, based on AMD "Zen 5" architecture

which is the same thread geometry as my 9900K.

My main concerns at the time were:

1. More cores for running large workloads on k8s since I had just upgraded to 128G RAM

2. More thread level parallelism for my C++ code

Naively I thought that, ceteris paribus and assuming good L1 cache utilization, having more physical cores with a higher clock rate would be the ticket for 2.
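That naive "cores × clock" model can be written down explicitly; plugging in the two chips discussed here shows why it mispredicts. The 5.0/5.2GHz figures are the commonly listed boost clocks (an assumption on my part), and the ~2x observed gap is from the Geekbench numbers cited elsewhere in this thread:

```python
# Naive throughput estimate: aggregate performance ~ cores x clock.
# Under this model two 8-core, ~5 GHz parts look nearly identical,
# yet benchmarks show the 9800X3D roughly doubling the 9900K --
# the model omits work done per cycle (IPC, cache, memory bandwidth).
def naive_throughput(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz

i9 = naive_throughput(8, 5.0)    # i9-9900K: 8 cores, ~5.0 GHz boost
x3d = naive_throughput(8, 5.2)   # 9800X3D: 8 cores, ~5.2 GHz boost

print(f"naive ratio: {x3d / i9:.2f}x")  # prints 1.04x, nowhere near the ~2x observed
```

The ~2x shortfall between the model's prediction and measured results is exactly the per-cycle improvement the replies below point at.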

Does the 9800X3D have a wider pipeline or is it some other microarchitectural feature that makes it faster?


Comparing CPUs by clock speed doesn’t work. Newer CPUs do more work per clock cycle.

A 9800X3D is twice as fast as your 9900K in benchmarks like GeekBench, despite having similar clock speed and the same core count.

If you could downclock the AMD part to 2.5GHz as an experiment, it would still beat your 5GHz 9900K.


You don't even need to go into the pipeline details. The 9800X3D has 8x the L2 cache, 6x the L3 cache, and 2x the memory bandwidth of the now 8-year-old i9-9900K. 3D V-Cache is pretty cool.

I purposely picked a CPU with the same thread geometry as your 9900K to avoid calls of "apples & oranges" or whatever. If you want more threads, the 9950X is right there in the same socket. Or Core Ultra 9 285k. Either of which will run circles around a 9900K in code compilation.

You can research the microarchitecture differences if you want, it's a fascinating world, or you can just skip to looking at benchmarks/reviews. It's a little hard to compare across quite that large a generation gap, but see e.g. https://gamersnexus.net/cpus/rip-intel-amd-ryzen-7-9800x3d-c... or https://www.phoronix.com/review/amd-ryzen-7-9800x3d-linux/2


The 9800X3D has wider everything. Decoder, execution ports, vectors, cache, memory bandwidth...

I think my i9 was released right after the Spectre and Meltdown mitigations in 2019, but I seem to remember even more recent vulns in that family… so that could also be a factor.

A 9700X is twice the performance of a 9900K, and an M5 Max is almost 3x the performance. Megahertz as a measure of performance is a myth.

I replied to the sibling comment: I was making simplifying assumptions for two specific use cases and naively treated physical cores and clock rate as my variables.

But why? That's like trying to determine which car is faster by looking only at the RPM.

Yes, but the core count and clock speed of a nearly 10-year-old CPU are meaningless when comparing against current processors.
