
Everyone is doing this now. Granted, on Codex / Claude Code you can disable it, but it isn’t disabled by default. For some reason Cursor keeps shoving “Made with Cursor” into my PR descriptions despite me disabling attribution, which looks really stupid on a work PR.

I’m so tired of all this BS. Why did this become normal? And how do we not read this as cheap advertising?


I think people read it as cheap advertising because a PR isn't really the tool's output, it's team communication.

A little "made with X" in your own draft is one thing. Putting branding into a PR your coworkers have to read is another.


yes, but the people who have the power to make those things public are the same ones benefiting from the fraud, waste, and corruption

> do these types of techniques really work?

They have been proven to: https://www.anthropic.com/research/small-samples-poison


if the idea can just be obliterated by an LLM, there was never a moat to begin with

Just further proof that context is the real moat, not intelligence. All the models are already converging to be equally intelligent and that will only continue. GPT 5.4 / Opus 4.6 are the first two models I’ve used where I’m like, yeah, with the right spec/context they can pretty much do anything.

The “bundle” or “context” is the value.


docker is bloated. i'm almost certain half of every image is dead weight. unused apt packages, full distros for a single binary, shell configs nobody touches. but the incentive is to make things work, not make them small. so bloat wins.

still, i use it every day and i don't see what replaces it. every "docker killer" solves one problem while ignoring the 50 things docker does well enough.
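
The “full distro for a single binary” case above is usually fixed with a multi-stage build: compile in a fat toolchain image, then copy only the binary into an empty final stage. A minimal sketch, assuming a hypothetical static Go service named `server`:

```dockerfile
# Build stage: full toolchain image (hypothetical Go service "server")
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a static binary that needs no libc
RUN CGO_ENABLED=0 go build -o /out/server .

# Final stage: just the binary — no distro, no apt packages, no shell
FROM scratch
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
```

The final image is roughly the size of the binary itself instead of the hundreds of MB the `golang` build image weighs.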


Docker released "Docker Hardened Images" last year and made them free. They contain less bloat.

Buying more RAM for your server or only touching a select few images that are run most often is also a way to make things work. It might not be the most elegant software engineering approach, but it just works.



nice share!


Thanks, I also misread this as PyPI and was confused, lol


now somebody just needs to make a PiPy for the raspberry pi


Is that PiPyPy or PiPyPI?


Please don’t give ideas


nobody's asking who profits from false positives. these AI detection vendors have a direct financial incentive to flag aggressively. more flags = "more value" = more school contracts renewed. same playbook as selling antivirus to your grandma. sell fear, charge per seat, and make the false positive rate someone else's problem.


Do you have any evidence to back this up or is it speculative?

My institution subscribes to TurnItIn's AI detector. The documentation is quite clear that the system is tuned in a manner that produces a significant number of false negatives and minimizes false positives. They also state that they don't report anything under "20% AI-generated" content.

So the marketing I've seen is intended to reassure skittish administrators that the software is not going to generate false accusations.

That being said, I have no idea whether the marketing claims are true. The software is a black box.


Fair point, the "tuned to flag aggressively" claim was speculative on my part. Turnitin's own documentation says they favor false negatives over false positives.

That said, their accuracy claims have been disputed before. Inside Higher Ed [1] reported that Turnitin's real-world false positive rate was higher than originally asserted, and the company declined to disclose the updated number. The University of San Diego's law library also noted that while Turnitin claimed a <1% false positive rate, a Washington Post investigation found a 50% rate on a smaller sample, and that non-native English speakers and neurodivergent students get flagged at higher rates [2].

Now, those are from 2023 and the product (and AI in general) has been updated drastically since. But the broader incentive problem holds even if the detector itself is conservatively tuned. The product is a black box. And the downstream cost of errors falls entirely on students, not on Turnitin's renewal rate. You don't need aggressive tuning for the incentive structure to be broken.

[1] https://www.insidehighered.com/news/quick-takes/2023/06/01/t...

[2] https://lawlibguides.sandiego.edu/c.php?g=1443311&p=10721367


>So the marketing I've seen is intended to reassure skittish administrators that the software is not going to generate false accusations.

This is it, right here. All policy I've seen lately has been geared towards students having expanded "due process" rights.


MCP is a completely useless abstraction, and I’m not sure why anyone would push for it over basic CLI tools / skills.


this looks chaotic. I love it

