Hacker News | past | comments | ask | show | jobs | submit | josephg's comments

There's still plenty of human authored content out there. No need to post slop on HN.

My rule is: If you can't be bothered writing an article, don't expect me to read it.


I'm just saying it's probably better to flag the article than to post "This was written by an LLM" for the 210th time.

It's got this ... cadence:

> Same earnings call. Same margin targets. Same quarterly pressure. The sense that you were choosing between competitors was a fiction that VF Corp had no incentive to correct.

> That threat disciplined every material choice, every stitch count, every zipper spec. Once they all report to the same parent, the discipline evaporates. Nobody needs to outbuild anybody. The only pressure left is the one coming from above

> None of this shows up on the shelf. The colors are right. The logos are crisp. The product photography is excellent. You discover what you actually bought three months in, when the stitching pulls apart at every stress point.

It's thing X. It's thing Y. It's thing Z. And now I'm going to tell you about thing Q in a longer sentence.


More generally, it's pure infodump. Everything is a list of things, all given the same weight, even when it's not a literal bulleted or numbered list.

Some other common tells (not present in this article) that are really dressed-up lists: short titled paragraphs, and sequences of sentences that go "blah blah blah: blah blah blah."

Very little opinion is added anywhere, but the punchy writing style, where everything gets the same overdone monotone importance, masks that a little.

Pure infodump isn't terrible for some things, but I'd much rather it were less heavily processed by the LLM, and upfront about the fact that it's a dressed-up infodump with an LLM involved.


I don’t see why that would be proof of being written by an LLM.

It quite well can be (and I think it is) stylistic writing, hammering the message home by repetition of blows.


It could be a stylistic choice, except it's rapidly become an extremely popular one for some reason. It's also the default Claude style. So, take what you will from that. Either someone is writing exactly like Claude on purpose, or they just asked Claude to write something, but either way I'm entirely oversaturated on it. At this point I don't think "Claude", I just start skimming and then close the tab.

It's not proof, but it's certainly a smoking gun. Even when humans use that literary device, we don't typically do it every other paragraph. It feels like a pretty safe bet that an LLM wrote most of this.

How would you ever prove that it’s by an LLM? There’s no text an LLM can produce that I couldn’t theoretically type myself, too. But the style is strong evidence.

> It quite well can be (and I think it is) stylistic writing

I wish we could bet money on this. This is an LLM and I'd win that bet.

The ability to recognize the style comes from working with them.

It's quite possible the author wrote an outline or rough draft of the article and then asked the LLM to clean it up. But the final result has LLM tells all over.


"Stylistic writing" that just happens to perfectly match Claude's current default codeslopped output style, and the exact same style as the majority of posts that have made it onto the front page of HN in recent months. Just endless streams of short punchy sentences that are really just glorified bulleted lists with no substance to them.

Let's quit the gaslighting and acknowledge that no human actually writes this way consistently across every paragraph, unless they're intentionally trying to write badly.


"It's the smell, if there is such a thing. I feel saturated by it. I can taste LLM stink and every time I do, I fear that I've somehow been infected by it."

The irony is that this is a perfect example of the thing the article complains about. Even writing is now of a lower quality thanks to LLMs. In this case you're paying with your time instead of money for a lower quality product than you'd get 10 years ago.

Yeah, all of that felt a lot like Claude's writing style.

Yeah I have all this data backed up on a couple different drives. IRC and ICQ logs going back to when I was a teenager. Digitised photos from when I was a kid through to the present day. Source code for projects I worked on from when I was 10. Rips of all the cds I used to own. And yes, email exports dating back to about 2003.

I wish I kept more, honestly. It’s a beautiful record.

I think my most treasured possession is videos of myself and my parents from when I was young. I’m thinking of sitting my sister’s kids down in front of a camera for 15 minutes and getting them to talk about their lives. It’s beautiful to rewatch this stuff decades later. It’s transporting.


It could be massively improved with a special CPU instruction for racing DRAM reads. That might make it actually useful for real applications. As it is, the threading model she used here would make it incredibly difficult to use this in a real program.

There’s no point racing DRAM reads explicitly. Refreshes are infrequent and the penalty is like 5x on an already fast operation, 1% of the time.

What’s better is to “race” against cache, which is 100x faster than DRAM. CPUs already do this for independent loads via out-of-order execution. While one load is stalled waiting for DRAM, another can hit the cache and do some compute in parallel. It’s all already handled at the microarchitectural level.


There are already systems that do this in hardware. Any system that has memory mirroring RAS features can do this, notably IBM zEnterprise hardware, you know, the company that this video promoter claims to be one-upping.

I don't think the memory mirroring features available today allow you to race two DRAM accesses and use whichever result returns earlier?

The memory controller sends the read to the DIMM that is not refreshing. It is invisible to software, except for the side-effect of having better performance.

Mirroring is more of a reliability feature though, no? From my understanding it’s like RAID where you keep multiple copies plus parity so uncorrectable errors aren’t catastrophic. Makes sense for mainframes which need to survive hardware failures.

Refresh avoidance is a tangential thing the memory controller happens to be able to do in a scheme like that, but you’d really have to be looking at it in a vacuum to bill it as a benefit.

Like I said, it’s all about cache. You’re not going to DRAM if you actually care about performance fluctuations at the scale of refresh stalls.


Clearly, hitting a cache would be the better outcome. The technique suggested here could only apply to unavoidably cold reads, some kind of table that's massive and randomly accessed. Assume it exists, for whatever reason. To answer your question, refresh avoidance is an advertised benefit of hardware mirroring. Current IBM techno-advertising that you can Google yourself says this:

"IBM z17 implements an enhanced redundant array of independent memory (RAIM) design with the following features: ... Staggered memory refresh: Uses RAIM to mask memory refresh latency."


I can google, thanks. My point is that nobody is buying mainframes with redundant memory to avoid refresh stalls. It’s a mostly irrelevant freebie on hardware you bought for fault tolerance.

Do you have evidence for that? Have you looked at the computing-requirements documents for, say, stock exchanges? I have it on good authority that stock exchanges ran on mainframes. The exchange is essentially the counterparty (in a computing sense, not a financial sense) to each placed order. If someone is willing to run a fiber-optic cable from Chicago to New York or New Jersey to exploit reduced propagation delay, which is admittedly much larger than a refresh stall, wouldn't you think they or someone else would also be interested in predicting computing stalls? An exchange would face at least a significant reputational risk if it could be exploited that way.

The low latency matching engines in colos run Linux these days, and we use microwave instead of fiber. Incoming orders are processed by hardware receive timestamp, so predicting jitter doesn’t give you an advantage. Clearing and settlement I’m not sure about, not latency critical though, mainframes wouldn’t surprise me there.

I hope this approach gets some visibility in the CPU field. It could obviously be improved with a special CPU instruction which simply races two reads and returns whichever succeeds first. She’s doing an insane amount of work, spawning multiple threads and so on (and burning lots of performance), all to work around the lack of dedicated support for this in silicon.
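For anyone who hasn't seen it, the software-level workaround looks roughly like this. This is a toy sketch of the "race two mirrored reads, take the first" idea, not her actual implementation; the function names and the simulated delays are mine, with a `sleep` standing in for a refresh stall.

```python
# Toy sketch: issue the same read against two "mirrored" copies and take
# whichever comes back first. The delays simulate one mirror being stuck
# behind a DRAM refresh while the other responds promptly.
import concurrent.futures
import time

def read_copy(data, index, delay):
    # Stand-in for a DRAM read; `delay` models a refresh stall.
    time.sleep(delay)
    return data[index]

data = list(range(8))

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    futures = [
        pool.submit(read_copy, data, 3, 0.05),   # mirror hit by a refresh stall
        pool.submit(read_copy, data, 3, 0.001),  # mirror that responds quickly
    ]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    value = next(iter(done)).result()

print(value)  # → 3
```

Which is exactly why a single racing-load instruction would be so much cheaper: here you pay for two OS threads and a synchronization point just to pick a winner.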

I actually hope it doesn't!

The results are impressive, but for the vast, vast majority of applications the actual speedup achieved is basically meaningless since it only applies to a tiny fraction of memory accesses.

For the use case Laurie mentioned - i.e. high-frequency trading - then yes, absolutely, it's valuable (if you accept that a technology which doesn't actually achieve anything beyond transmuting energy into money is truly valuable).

For the rest of us, the last thing the world needs is a new way to waste memory, especially given its current availability!


Yes; linux is generally supported better than freebsd. CUDA and Docker work out of the box on linux. Linux has better graphics drivers and steam support. Opensource software (libraries, tools) is much more likely to be tested & work properly on linux. I've also run into several rust crates which don't build on freebsd - particularly crates which depend on C code.

But the comment you're replying to said there weren't many good technical reasons to prefer freebsd over linux. I think that's broadly true.

I still really like freebsd though. Unlike linux, one community is responsible for the kernel and userspace. That makes the whole OS feel much more cohesive. You don't have to worry about supporting 18 different distributions, which all do their own thing.


FreeBSD's development philosophy, its aversion to design decisions like "we must allow systemd everywhere", its stability, ZFS and jails, and consistent configuration (for decades) are all technical reasons I prefer it over Linux.

How about Ubuntu and snaps? License needed for certain security updates, etc.


IMAP works in outlook. It's just horrible to set up and half broken. Click "Add account". Then type in your email address, click "Choose provider", select IMAP, then click "Sync directly with IMAP" (dark pattern hidden button). If you don't click that last button, outlook uploads your IMAP email credentials to their own MS Cloud instance, and that proxies all your emails via microsoft's cloud servers. Do they read your email messages for advertising? Nobody knows!

In my testing, the local IMAP client implementation quite frequently launches a DoS attack against your IMAP server. It'll send the same query requesting new mail messages in a tight loop, limited only by the round-trip latency. But luckily, almost nobody uses IMAP via outlook because it's so difficult to set up.
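For contrast, a well-behaved poll loop is not hard. A minimal sketch with Python's stdlib `imaplib` (host and credentials are placeholders; a sane client checks for unseen mail at a fixed interval instead of hammering the server as fast as the round trip allows):

```python
# Minimal polite IMAP polling sketch: check for new mail on an interval,
# rather than re-issuing the same query in a tight loop.
import imaplib
import time

def poll_inbox(host, user, password, interval=60.0):
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select("INBOX")
        while True:
            conn.noop()  # keep-alive; lets the server push status updates
            typ, data = conn.search(None, "UNSEEN")
            if data[0]:
                print("new messages:", data[0].split())
            time.sleep(interval)  # the part the tight loop is missing
```

(IMAP IDLE would be better still, but it's not in `imaplib`; even plain interval polling avoids the tight-loop behaviour described above.)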


> If you don't click that last button, outlook uploads your IMAP email credentials to their own MS Cloud instance, and that proxies all your emails via microsoft's cloud servers. Do they read your email messages for advertising? Nobody knows!

I've seen cases where people have it set up like that and it's so awfully slow. Minutes to display a single new message. That cloud brings absolutely zero user-benefit.


There are also two different applications which are both "Outlook for Mac".

If you go into the "Outlook" menu in the app, there's a "Legacy Outlook" button, which relaunches outlook using a completely different binary. The two outlook implementations have different bugs and all sorts of different behaviour.

Outlook For Mac is free but "legacy outlook" requires an MS365 subscription for some reason.

Outlook is also not to be confused with Microsoft's "Web Outlook" client, available at outlook.live.com. It all seems totally insane.


> It all seems totally insane.

This is Microsoft we're talking about, right?


It still has a very ... plastic feeling. The way it writes feels cheap somehow. I don't know why, but Claude seems much more natural to me. I enjoy reading its writing a lot more.

That said, I'll often throw a prompt into both claude and chatgpt and read both answers. GPT is frequently smarter.


GPT is more accurate. But Claude has this way of association between things that seems smarter and more human to me.

> Combined results (Claude Mythos / Claude Opus 4.6 / GPT-5.4 / Gemini 3.1 Pro)

> Terminal-Bench 2.0: 82.0% / 65.4% / 75.1% / 68.5%

> USAMO: 97.6% / 42.3% / 95.2% / 74.4%

> The biggest jump in the numbers they quoted is 6%.

Just in the numbers you quoted, that's a 16.6-point jump in Terminal-Bench and a 55.3-point absolute increase in USAMO over their previous Opus 4.6 model.
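For what it's worth, the deltas fall straight out of the numbers quoted above:

```python
# Mythos vs Opus 4.6, from the quoted table (absolute percentage points)
terminal_bench_jump = round(82.0 - 65.4, 1)
usamo_jump = round(97.6 - 42.3, 1)
print(terminal_bench_jump, usamo_jump)  # → 16.6 55.3
```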


I don’t know if you’re willingly disregarding everything being said to you or there’s a language barrier here.

Can you please stop posting comments with personal swipes in them? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://qht.co/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


You're right, I apologize for that. I have been responding with annoyance rather than walking away when I receive replies that appear to be ignoring context.

Appreciated! and of course, I know it's not easy - believe me I know...
