Sure, they have no obligation, but the way you describe NewPipe, painting it as "obstructive", feels off to me.
When you offer a free service, then by the very fact that it's free, you can't hold consumers of that service accountable for not furthering your revenue. They're impeding revenue only if the service isn't actually free (or is free only under false pretenses), which dismantles your first sentence here.
Missiles are expensive, especially US ones. Other military equipment is even more expensive. A THAAD system costs between $1 and $2 billion, and some of them were obliterated by Iranian drones. Since drone warfare has evolved so fast, they may as well be outdated.
It also doesn't necessarily mean they're down to the equivalent of breaking out the T-55s like Russia did; more that they've seen an opportunity for more money and went with it.
NIMBYs probably don't want a 24/7 Tesla charging station right next to their homes. It's like living at a gas station, except people fill up for an hour or more at a time. Seems like a zoning failure that should never have been approved.
They're pretty common and cheap on the used market, though. I bought mine from a thrift store for $30, and the console itself regularly goes for ~$50 on eBay.
So does this cut out Intel/x86 from all the massive new datacenter buildouts entirely? They've already lost Apple as a customer and are not competitive in the consumer space. I don't see how they can realistically grow at all with x86.
Even Apple hardware looks inexpensive compared to Nvidia's huge premium. And never mind the order backlog.
x86 vendors and Apple already sell CPUs with integrated memory and high-bandwidth interconnects. And I bet eventually Intel's beancounter board will wake up and allow engineering to make one, too.
"And as the initial crop of Apple Intelligence features hasn’t been used as much as Apple expected"
Nah, as so-called "analysts" expected. The no-effort crybabies deriding Apple for being "behind on AI" have turned out to be, shocker of shockers, wrong. Anyone who even put a few minutes of thought into Apple's business realized that it (and its customers) didn't stand to benefit much from "AI."
It's sad that Apple hurried to pander to these clowns, only to be derided further... and to encounter the appropriate apathy from customers, who were and are doing just fine without asinine "AI" gimmicks.
Apple wouldn't have built the server capacity if they thought it wouldn't be used. It's indeed their own analysis.
In any case, that article is also looking forward to next-gen models like the sparse Gemini model Google trained for Siri. Apple Silicon simply isn't powerful enough to compete for that inference.
Worse, since they no longer care about the workstation market: no pluggable cards, and no update to the cheese grater, which the Studio is not comparable to.
They also dropped the ball on the data center, having left OS X Server behind.
Those markets are now served by Windows or Linux based configurations.
AFAIK they still dominate on clock rate, which I was surprised to see when doing some back of the envelope calculations regarding core counts.
I felt my 8-core i9 9900K was inadequate, so I shopped around for something AMD. IIRC the core-count multiplier of the chip I found was dominated by the clock-rate multiplier, so it's possible that at full utilization my i9 is still about the best I can get at the price.
Not sure if I’m the typical consumer in this case however.
Your 9900K at 5GHz runs slower than a Ryzen 9800X3D at 5GHz. A lot slower (roughly 1700 vs 3300 single-core in Geekbench, and just about any benchmark will tell the same story). Clock speed alone doesn't mean anything.
>8 Cores and 16 processing threads, based on AMD "Zen 5" architecture
which is the same thread geometry as my 9900K.
My main concerns at the time were:
1. More cores for running large workloads on k8s since I had just upgraded to 128G RAM
2. More thread level parallelism for my C++ code
Naively I thought that, ceteris paribus and assuming good L1 cache utilization, having more physical cores with a higher clock rate would be the ticket for 2.
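The naive model here (throughput ≈ physical cores × clock, assuming perfect scaling) is the one the later replies push back on. A minimal sketch of that kind of thread-level parallelism with std::async; the function name and sizes are illustrative, not from the original comment:

```cpp
#include <cstdint>
#include <future>
#include <numeric>
#include <vector>

// Split the work across n_threads and assume perfect scaling -- the
// naive assumption above. In practice cache and memory bandwidth, not
// core count x clock, often decide how well this actually scales.
std::int64_t parallel_sum(const std::vector<std::int64_t>& data,
                          unsigned n_threads) {
    std::vector<std::future<std::int64_t>> parts;
    const std::size_t chunk = data.size() / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        auto begin = data.begin() + t * chunk;
        auto end = (t + 1 == n_threads) ? data.end() : begin + chunk;
        parts.push_back(std::async(std::launch::async, [begin, end] {
            return std::accumulate(begin, end, std::int64_t{0});
        }));
    }
    std::int64_t total = 0;
    for (auto& p : parts) total += p.get();
    return total;
}
```

Under the naive model this runs n_threads times faster; whether it really does depends on whether the per-thread working sets stay cache-resident.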
Does the 9800X3D have a wider pipeline or is it some other microarchitectural feature that makes it faster?
You don't even need to go into the pipeline details. The 9800X3D has 4x the L2 cache, 6x the L3 cache, and 2x the memory bandwidth of the now roughly eight-year-old i9 9900K. 3D V-Cache is pretty cool.
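One way to feel the cache-size difference yourself is a pointer chase over a working set that fits in one chip's L3 but not the other's. A sketch (no timing harness included; wrap the loop in your timer of choice, and note the sizes and seed are illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Chase pointers around a randomly ordered cycle of n_slots entries.
// Every load depends on the previous one, so memory latency is fully
// exposed: a working set under ~96 MiB stays in the 9800X3D's L3 but
// spills out of the 9900K's 16 MiB L3, and the chase slows way down.
std::size_t chase(std::size_t n_slots, std::size_t steps) {
    std::vector<std::size_t> order(n_slots);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937{42});

    // Link the shuffled order into a single cycle.
    std::vector<std::size_t> next(n_slots);
    for (std::size_t k = 0; k + 1 < n_slots; ++k)
        next[order[k]] = order[k + 1];
    next[order[n_slots - 1]] = order[0];

    std::size_t i = order[0];
    for (std::size_t s = 0; s < steps; ++s)
        i = next[i];
    return i;  // return the final index so the loop isn't optimized away
}
```

Since the links form one cycle of length n_slots, chasing exactly n_slots steps brings you back to the start, which makes the traversal easy to sanity-check.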
I purposely picked a CPU with the same thread geometry as your 9900K to avoid calls of "apples & oranges" or whatever. If you want more threads, the 9950X is right there in the same socket. Or the Core Ultra 9 285K. Either of which will run circles around a 9900K in code compilation.
I think my i9 was released right after the Spectre and Meltdown mitigations in 2019, but I seem to remember even more recent vulns in that family… so that could also be a factor.
I replied to the sibling comment: I was making simplifying assumptions for two specific use cases and naively treated physical cores and clock rate as my variables.