The bottom of that wiki page has links to EFF pages. However you are correct that they view it as a lost battle:
(Added 2015) Some of the documents that we previously received through FOIA suggested that all major manufacturers of color laser printers entered a secret agreement with governments to ensure that the output of those printers is forensically traceable. Although we still don't know if this is correct, or how subsequent generations of forensic tracking technologies might work, it is probably safest to assume that all modern color laser printers do include some form of tracking information that associates documents with the printer's serial number. (If any manufacturer wishes to go on record with a statement to the contrary, we'll be happy to publish that here.)
(Added 2017) REMINDER: IT APPEARS LIKELY THAT ALL RECENT COMMERCIAL COLOR LASER PRINTERS PRINT SOME KIND OF FORENSIC TRACKING CODES, NOT NECESSARILY USING YELLOW DOTS. THIS IS TRUE WHETHER OR NOT THOSE CODES ARE VISIBLE TO THE EYE AND WHETHER OR NOT THE PRINTER MODELS ARE LISTED HERE. THIS ALSO INCLUDES THE PRINTERS THAT ARE LISTED HERE AS NOT PRODUCING YELLOW DOTS.
This list is no longer being updated.
* EFF definitely did not think that the regular printer tracking dots mechanism was appropriate.
* You could probably argue this either as a modus ponens or a modus tollens -- that is, in either direction -- but one criticism that we made of the tracking dots was that they were (mostly) secret voluntary cooperation between industry and government, not an actual law. Perhaps an actual law is preferable because the public can understand in detail how it's being restricted, as well as oppose it politically and potentially challenge it in the courts.
Of course, the current 3D printing restrictions are proposed as an actual law. That does seem largely better to me than "we got most 3D printer companies to put some secret software in their printers to enforce some unspecified policies that the government asked them to, and the companies and the government don't want to talk about it", although one way it's better is simply the opportunity to oppose it in the legislature.
Thanks for trying to maintain the list as long as you could!
I think you are assuming that the government does not _also_ have secret agreements with big 3D printer manufacturers (to which the state of CA may not be privy).
As an example, what about a divide instruction? A machine without hardware divide support can emulate a machine that has it. It will legitimately have to run hundreds or thousands of instructions to emulate a single divide instruction, so it will certainly take longer.
That's OK; it just means the emulation is slower for that instruction than for something like add, which the host has a native instruction for. In ‘emulator time’ you still only ran one instruction, so that world is still consistent.
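To make the "one guest instruction, hundreds of host instructions" point concrete, here's a rough sketch (purely illustrative, not any real emulator's code) of emulating an unsigned divide in software with the classic shift-and-subtract (restoring division) loop:

```python
def emulated_udiv(dividend: int, divisor: int, bits: int = 32):
    """Emulate a single unsigned divide instruction in software.

    Restoring division: one loop iteration per bit, each costing
    several host instructions -- so one guest 'div' expands to
    hundreds of host instructions, but in emulator time it still
    counts as exactly one instruction.
    Returns (quotient, remainder).
    """
    if divisor == 0:
        raise ZeroDivisionError("the guest machine would trap here")
    quotient = 0
    remainder = 0
    for i in range(bits - 1, -1, -1):
        # Bring down the next bit of the dividend.
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:
            remainder -= divisor
            quotient |= 1 << i
    return quotient, remainder
```

The result matches the hardware instruction bit-for-bit; only the wall-clock cost differs, which is exactly why the emulated world stays consistent.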
That's not how Windows-on-ARM emulation works. It uses dynamic JIT translation from x86 to ARM. When the translator sees, e.g., lock add [mem], reg, presumably it'll emit an ldadd, but that will have different semantics if the operand is misaligned.
Do you have an example of it being flagged? I only see one old post from 7 years ago (not flagged), and that links to a Scribd PDF rather than the author's website.
The fact that terms like Aho-Corasick, PLDI, Go, etc. are properly capitalized, including if they begin sentences, but otherwise sentences are uncapitalized, makes me think it's an explicit LLM instruction "don't capitalize the start of sentences" rather than writing style.
ChatGPT also loves Aho-Corasick and seems to overuse it as a fallback optimization idea. ChatGPT has suggested the algorithm to me, but the code ended up being a lot slower.
No, this is just what that writing style looks like. Names and acronyms are usually capitalized normally.
I keep being surprised by the magnitude of the disconnect between this place and the other circles of hell. I'd have thought the Venn diagram would have a lot more overlap.
Oh, the Venn diagram might be big; the HN population just has a lot of variance, I think, and is less of a community per se. I don't doubt what you're saying, though in the grand scheme of things, I think the "too lazy to hit shift" population dwarfs any of these groups.
Yeah, I can agree with the variance. Except that the "too lazy to hit shift" community is not something I would ever confuse with people writing long form articles about their regex engine research that they'll be presenting at PLDI.
The confusion might be understandable for people who have never encountered this style before, but that's still a very uncharitable take about an otherwise pretty interesting article.
Funnily, this was precisely the question I had after posting this (and the topic of an LLM disagreement discussed in another thread). Turns out not, but sibling comment is another confounding factor.
Having been reading generated comments almost daily for over three years now, I have a pretty good sense of it. There's a bunch of signals: how new the account is; how the comments look visually (the capitalization and layout of the paragraphs, particularly when all of one user's comments are displayed in a list). Em-dashes and short, emphatic sentences make it more obvious, of course.
There are cases that are more borderline; usually when someone has used a translation service or has used an LLM to polish up a comment they wrote themselves. For these ones there's less certainty, and whilst we discourage them, we're not as rigid in our aversion to them or as eager to ban accounts that do it.
But ones that are entirely generated are still pretty easy to spot, even just from visual appearance.