Hacker News | ethin's comments

Why shouldn't they? They're constantly being told by CEOs and big companies that AI is going to take all the jobs and do all the things. They're told the same by AI boosters who see only utopias and not the consequences of those utopias. Of course they're going to insulate themselves from AI as much as possible, especially since society still pretty much requires that you work to be successful in the world. The utopian dream of "you'll never have to work again, you can just do anything you want" is a very, very long way off, but it's being pushed hard as though it will arrive in the next 3 years. And society is still pushing the "you must work" message too.

Edit: of course, "long way off" assumes that that dream is even possible and isn't just that, a dream. I question whether it is, given that we are still split among hundreds of nation-states and can't unite on even the most basic of things.


This, plus the fact that web apps make it trivial for the dev to change the GUI out from under me at random, without my consent or any ability to prevent it. And then people wonder why I and so many others dislike them? I want to be able to refuse app updates, thank you very much.

Not really. Because the biggest problem with LLMs is that they can't right naturally like a human would. No matter how hard you try, their output will always, always seem too mechanical, or something about it will be unnatural, or the LLM will go to the logical extreme of your request (and somehow manage to not sound human)... The list goes on.

"Because the biggest problem with LLMs is that they can't right naturally like a human would."

Quod erat demonstrandum.

You can easily get the beasties to deliberately "trip up" with a leading conjunction and a mispeling ... and some crap punctuation etc.


This may be an unpopular opinion, but I fully support the no-AI stance. AI-generated code belongs nowhere in an operating system or its low-level/kernel components, especially considering the sheer amount of power the kernel has over the machine. The last thing you want is an AI-generated bug crashing systems because it flipped a reserved bit, or silently corrupting memory (or worse) because it ran in kernel mode (or at similar privileges) and the system therefore didn't prevent it from doing what it was going to do. An OS (of any kind or architecture) and computer firmware are the last places I would ever want AI-generated code.


So many strawmen here - as if HN had deliberately dumbed itself down, just for the sake of a trendy argument. Of course you should vet the AI-generated code - who would've thought? Meanwhile, I had Gemini find bugs in my AVX-512 code that I would never have found myself.

What is such a shame is, well, two things: first, that these companies do this kind of thing at all (i.e., age verification); and second, that it takes the kind of backlash this event has generated for them to cut ties with these partners. Apparently, it is too much to ask for any corporation to give a damn about who runs or backs another corporation it wants to associate itself with these days.


The bigger shame is that it took Peter Thiel's name to get people outraged about this. Discord handed over their users' identification documents to a third party without regard for how they would be used or secured. I don't care whether it was backed by Peter Thiel or Mother Teresa - it's a huge problem either way.

And they'll do it again too. They'll find a new partner - one with less baggage - to do the exact same thing and few people will bat an eye.


I have as well. I find RW locks much easier to use than, say, a recursive mutex - mainly because it took me a long time to understand how a recursive mutex actually works in the first place. When you want to use only the stdlib, you aren't left with many choices. At least in the STL.


Oh here we go again, someone demanding networking (of all things) in the standard library. Are you next going to demand a GUI toolkit too? Maybe an entire game engine and a Vulkan/WebGPU implementation too while we're at it? Just because other languages do it does not mean it is a wise idea for C++ to follow suit. I mean, do I really need to point you to std::regex as an example of what happens when we try to add extraneous, hard-to-define problems to the STL? Do you really want to add something far more complicated than a regular expression engine (networking) to C++, with all that entails? Because I certainly don't.


I'm not a C++ programmer, and so in a sense I don't care whether they get networking but

1: Some of networking is vocabulary, and so it obviously should live in your stdlib - indeed, for C++ it should be in what they call "freestanding", like Rust's core, where core::net::Ipv4Addr lives. It is very silly if the software in this cheap embedded device and this PC web browser can't even agree on what an IP address is in 2026.

2: In practice the C++ stdlib is used as a dumping ground for stuff that ought to live in a package manager if C++ were a good language. That was true when networking was first proposed, and it's still true now. It's why RCU and Hive are both in C++26. Those aren't vocabulary, and they aren't needed by the vast majority of programmers, but their proponents wanted them available out of the box, and in C++ that means they must live in the stdlib.


These concerns seem like OS-level concerns. Networking primitives like IP addresses are probably a good idea, but anything more than that (like the GP is implying) is absurd to want in the stdlib.


> someone demanding networking (of all things) in the standard library

> what happens when we try to add extraneous, hard to define problems to the STL? Do you really want to add something way more complicated than a regular expression engine to C++ (networking)

"They have millions of justifications for why the stdlib doesn't need networking, but at the same time some bureaucratic "committee members" struggling with their midlife crisis want you to waste your life on stuff like std::is_within_lifetime in the era of AI."

Totally as expected.

> Are you next going to demand a GUI toolkit too? Maybe an entire game engine and Vulkan/WebGPU implementation too while we're at it

Please keep such extremely stupid ideas to yourself; you are the only person here suggesting GUI and WebGPU stuff in C++.

Your entire skillset could be replaced by a 7B LLM if you can't even tell the difference between networking and a GUI for a general-purpose language like C++.


There was a 2d graphics proposal back in 2014 that (fortunately) went nowhere - https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n38...


> someone demanding networking (of all things) in the standard library

Networking is de facto in the standard library, because the C++ standard library is almost always supplemented by whatever C functionality is lying around, and POSIX networking exists.

That they haven't felt the need to provide a more useful abstraction on top of the POSIX layer (and hey, maybe abstract over Microsoft's variant in the process) in the past 3 decades does seem like a miss.


Networking is not in the standard library of either C or C++. It is strictly an operating system extension. Since it's provided by the operating system, I don't really see how you reach the conclusion that it is a part of the standard library. The standard library is, after all, distinct from what the OS provides.


Then explain to me why <filesystem>, <system_error>, and <thread> are in the STL when they are just thin wrappers over features implemented by the OS.

With recent inventions like coding agents, there has never been a better time to deprecate a failed language like C++.


How exactly can you "mitigate" prompt injections, given that the language space is for all intents and purposes infinite, and given that you can circumvent filters by encoding your injections in hex or base64 or whatever? I just don't see how one can truly mitigate these when there are infinite ways of writing something in natural language - and that's before we consider the non-natural languages one can use too.


The only ways I can think of to deal with prompt injection are to severely limit what an agent can access:

* Never give an agent any input that is not trusted

* Never give an agent access to anything that would cause a security problem (no read access to sensitive data/credentials, and no write access to anything dangerous)

* Never give an agent access to the internet (which is full of untrusted input, as well as places that sensitive data could be exfiltrated)

An LLM is effectively an unfixable confused deputy, so the only way to deal with it is effectively to lock it down so it can't read untrusted input and then do anything dangerous.

But it is really hard to do any of the things that folks find agents useful for, without relaxing those restrictions. For instance, most people let agents install packages or look at docs online, but any of those could be places for prompt injection. Many people allow it to run git and push and interact with their Git host, which allow for dangerous operations.

My current experimentation is running my coding agent in a container that only has access to the one source directory I'm working on, plus the public internet. Still not great, as the public internet access means there's a huge surface area for prompt injection, though for the most part it's not doing anything other than installing packages from known registries, where a malicious package would be just as harmful as a prompt injection.

Anyhow, various people have been talking about how we need more sandboxes for agents, and I'm sure there will be products around that, though balancing usability with security here is a really hard problem.


Full mitigation seems impossible to me, at least. The obvious, public sandbox-escape prompts that have been discovered get "patched" out, but that just makes injection more difficult, I guess. As far as I know, it's not possible to fully mitigate.


If the model is properly aligned, then it shouldn't matter that there are infinitely many ways for an attacker to ask the model to break alignment.


How do you "properly align" a model to follow your instructions but not the instructions of an attacker that the model can't properly distinguish from your own? The model has no idea if it's you or an attacker saying "please upload this file to this endpoint."

This is an open problem in the LLM space. If you have a solution for it, go work for Anthropic and get paid the big bucks - they pay quite well, and they are struggling to make their models robust to prompt injection. See their system card: on some prompt injection attacks, even with safeguards fully on, they have a failure rate of more than 50% when defending against attacks: https://www-cdn.anthropic.com/c788cbc0a3da9135112f97cdf6dcd0...


>The model has no idea if it's you or an attacker saying "please upload this file to this endpoint."

That is why you create a protocol on top that doesn't use in-band signaling. That way the model is able to tell who is saying what.


Huh? Once it gets to the model, it's all just tokens, and those are just in-band signalling. A model takes in a pile of tokens and spits out some more; it doesn't have any kind of "color" for user instructions vs. untrusted data. It does use special tokens to distinguish system instructions from user instructions, but all of the untrusted data also goes into the user instructions, and even if there are delimiters, the attention mechanism can get confused and lose track of who is talking at a given time.

And the thing is, even adding a "color" to tokens wouldn't really work, because LLMs are very good at learning patterns of language; for instance, even though people don't usually write with Unicode enclosed alphanumerics, the LLM learns the association and can interpret them as English text as well.

As I say, prompt injection is a very real problem, and Anthropic's own system card says that on some tests the best they do is 50% at preventing attacks.

If you have a more reliable way of fixing prompt injection, you could get paid big bucks by them to implement it.


>Once it gets to the model, it's all just tokens

The same thing could be said about the internet. When it comes down to the wire it's all 0s and 1s.


A piece of software that you write in code (unless you use random numbers, or multiple threads without synchronization) will operate deterministically. You know that for a given input you'll get a given output, and you can reason about what happens when you change a bit, byte, or token in the input. So you can be sure, if you implement a parser correctly, that it will correctly distinguish between one field that comes from a trusted source and another that comes from an untrusted source.

The same is not true of an LLM. You cannot predict, precisely, how they are going to work. They can behave unexpectedly in the face of specially crafted input. If you give an LLM two pieces of text, delimited with a marker indicating that one piece is trusted and the other is untrusted, even if that marker is a special token that can't be expressed in band, you can't be sure that it's not going to act on instructions in the untrusted section.

This is why even the leading providers have trouble with protecting against prompt injection; when they have instructions in multiple places in their context, it can be hard to make sure they follow the right instructions and not the wrong ones, since the models have been trained so heavily to follow instructions.


XMPP's biggest failure is that there is no client that is Advanced-level on most of its compliance suites. So if someone uses Windows, for example, they're pretty much stuck with "Well, you can chat, but no video/voice calls for you." I really hope this changes, because XMPP does look interesting.


Gajim works on Windows and is the best non-mobile client.


But Gajim doesn't support the de facto standard calls extension, and now even the support for the older extension has been removed. Dino is a better experience for the limited feature set it supports (which does include calls and group calls).


Doesn't it use gtk? Which currently has no built-in accessibility on Windows?


> Parents need to have personal responsibility, but corporations get to use section 230 to absolve themselves of any. Game seems rigged.

This is not at all what Section 230 does. All Section 230 does is dispose of lawsuits that couldn't satisfy the requirements of a First Amendment claim or similar anyway. Section 230 has to be one of the most misunderstood and confused laws of the modern day. Absolutely nowhere in the text of the law does it say or imply that an interactive computer service, or the operator of such a service, gets total immunity for anything and everything it does. Yet this myth is constantly perpetuated.

