Hacker News | new | past | comments | ask | show | jobs | submit | hedgehog's comments | login

It's also a bit odd they don't mention column-oriented databases at all.

Yeah, that was my second thought. ECS's favoring of structs-of-arrays over traditional arrays-of-structs for game entities boils down to the same motivations, and the same resulting physical layout, as column stores vs. row stores.
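To make the parallel concrete, here is a toy sketch (hypothetical `Entity` fields, not from any real engine) of the two layouts:

```python
from dataclasses import dataclass

# Array-of-structs (row store): one object per entity; all fields of a
# single entity sit together, fields across entities are scattered.
@dataclass
class Entity:
    x: float
    y: float
    health: int

aos = [Entity(1.0, 2.0, 100), Entity(3.0, 4.0, 80)]

# Struct-of-arrays (column store): one array per field; a pass that
# touches only one field reads contiguous memory, like a column scan.
soa = {
    "x":      [1.0, 3.0],
    "y":      [2.0, 4.0],
    "health": [100, 80],
}

# A physics pass over x alone touches a single contiguous list:
total_x = sum(soa["x"])
```

Same data either way; the difference is which access pattern gets the cache-friendly layout.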

Why would column-oriented databases be mentioned? My understanding is that these are typically used for OLAP, but the article seems to talk only about OLTP.

Modern database engines tend to use PAX-style storage layouts, which are column structured, regardless of use case. There is a new type of row-oriented analytic storage layout that would be even better for OLTP but it is not widely known yet so I wouldn't expect to see it mentioned.
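A minimal sketch of the PAX idea (illustrative only; real engines manage fixed-size pages and offsets, not Python dicts): rows are still grouped into pages, but within each page the values are laid out column by column.

```python
PAGE_SIZE = 2  # rows per page; tiny for illustration

def to_pax(rows, columns):
    """Partition rows into pages; store each page column-wise (PAX)."""
    pages = []
    for start in range(0, len(rows), PAGE_SIZE):
        chunk = rows[start:start + PAGE_SIZE]
        # one "minipage" (list) per column inside the page
        pages.append({col: [row[i] for row in chunk]
                      for i, col in enumerate(columns)})
    return pages

rows = [(1, "a", 10), (2, "b", 20), (3, "c", 30)]
pages = to_pax(rows, ["id", "name", "qty"])

# A scan of one column touches only that column's minipage in each page:
qty_sum = sum(v for page in pages for v in page["qty"])
```

Point lookups stay cheap because a whole row lives in one page; column scans stay cheap because within the page the column is contiguous.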

Because there is a whole section that describes column based storage without mentioning that some databases have column based storage as an option.

This is one of the main problems I have with LLMs. It finds patterns in words but not content. I see this in code reviews and eventually outages. Something looks reasonable at the micro scale but clearly didn’t understand something important (because they don’t understand) and it causes a major issue.

Structurally, a transformer model is so unrelated to the shape of the brain that there's no reason to think they'd have many similarities. It's also pretty well established that the brain doesn't do anything resembling wholesale SGD (which, to spell it out, is evidence that it doesn't learn in the same way).

>Structurally a transformer model is so unrelated to the shape of the brain there's no reason to think they'd have many similarities.

Substrate dissimilarities will mask computational similarities. Attention surfaces affinities between nearby tokens; dendrites strengthen and weaken connections to surrounding neurons according to correlations in firing rates. Not all that dissimilar.
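For reference, the "affinity" computation being compared here is scaled dot-product attention. A self-contained toy version (single query, plain lists, no framework):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Single-query scaled dot-product attention over a token sequence."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)  # the query token's affinity to each key token
    # Output is the affinity-weighted mix of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key, so the output leans toward value [1.0]:
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
```

Whether softmax-weighted mixing is meaningfully analogous to correlation-driven synaptic change is exactly the disputed point; the code only pins down what the ML side of the comparison is.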


Sure the implementation details are different.

I suppose I should have asked: what definition of "consciousness and agency" are today's LLMs (with proper tooling) not meeting?

And if today's models aren't meeting your standard, what makes you think that future LLMs won't get there?


Given the large visible differences in behavior and construction, akin to the difference between a horse and a pickup truck, I would ask the reverse question: In what ways do LLMs meet the definition of having consciousness and agency?

Veering into the realm of conjecture and opinion, I tend to think a 1:1 computer simulation of human cognition is possible, and transformers being computationally universal are thus theoretically capable of running that workload. That being said, that's a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.


> In what ways do LLMs meet the definition of having consciousness and agency?

Agency: an ability to make decisions and act independently. Agentic pipelines are doing this.

Consciousness: something something feedback[1] (or a non-transferable feeling of being conscious, but that is useless for the discussion). Recurrent Processing Theory: A computation is conscious if it involves high-level processed representations being fed back into the low-level processors that generate it.

Tokens are being fed back into the transformer.

> that's a bit like looking at a bird in flight and imagining going to the moon: only tangentially related to engineering reality.

Is it? The vacuum of space is a tangible problem for aerodynamics-based propulsion. What analogous obstacle do we have with ML? The scaled-up monkey brain[2] might not qualify as the moon.

[1] https://www.astralcodexten.com/p/the-new-ai-consciousness-pa...

[2] https://www.frontiersin.org/journals/human-neuroscience/arti...


What about modern LLMs isn't "agentic" enough?

Doesn't matter if they're conscious for that. They're clearly capable of goal oriented behavior.


These questions really vex me. The appearance of intelligence is almost orthogonal to "consciousness and agency." If a human has a stroke and forgets how to speak, or never learns, or has some severe form of learning disorder, they still have exactly the same rich inner life, full of subjective qualitative experience known only to them, as the rest of us. Similarly with an array of GPUs: if you remove the text encodings from the rest of the computing system it is a part of, outputs will appear as gibberish to you and it will no longer appear to be intelligent at all, but whatever is happening at the level of electrons meeting silicon would still be exactly the same. If it's having conscious experience at all, it should be having it regardless of whether the outputs it computes are interpreted as text or as textures on a game background.

I just don't see why "I can talk to it now" changes anything. We don't give humans less moral consideration when they're dreaming, hallucinating, tripping on LSD. The brain is just as conscious when it's having nothing but completely abstract nonsense thoughts as when it's writing The Republic.

I understand why it feels different to people. Shit, this thing can talk to me; maybe it's alive and I should treat it as such. But that's a conservative reaction to a black box known only by its behavior. The problem is these things are not actually black boxes. We don't understand the functions being computed, or we'd just hard-code them and not need statistical learning techniques, but we do understand how computers work. We know process state is saved off and restored billions of times per second because of context switching. We know that state is simply a stored byte sequence that can be copied, backed up, and restored endlessly. Servers and computing hardware can be destroyed but software cannot, and LLMs are software.

It's not at all like a brain. There are animals that go into various levels of reduced or suspended function that look like dormancy, but there is no stream of personal subjective experience that can survive the complete destruction of its own physical body. The fact that it pays off evolutionarily to tacitly encode that reality into our instincts at an extremely deep, core level is why we have fear and pain in the first place: to nudge us toward predictive modeling of the world that keeps us alive, able to find food, and able to reproduce. Software needs none of that. There is no reason whatsoever, assuming a processor has subjective experience at all, that the experience of some gates firing versus others should differ just because human programmers interpret some firing patterns as "loss" and "training" and others as numerically approximating a PDE solution. Why should those feel different to the machine when the firing patterns are exactly the same and only the human interpretation of the output is different?

It just feels like a vast, vast category error for people to be speculating about machine consciousness and moralizing about how we "treat" software systems.


If platonic representation hypothesis holds across substrates, then it might matter very little, in the end. It holds across architectures in ML, empirically.

The crowd of "backpropagation and Hebbian learning + predictive coding are two facets of the very same gradient descent" also has a surprisingly good track record so far.


I don't know which direction you're going with this, but predictive coding has a pretty obvious advantage when it comes to continuous learning. Since predictive coding primarily encodes errors, it can distinguish between known and novel data and therefore reduce the damaging effects of catastrophic forgetting by having a very obvious regularisation scheme for avoiding forgetting.
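The "only errors drive learning" property can be shown in a toy sketch (a scalar delta-rule update standing in for predictive coding; the learning rate and values are made up for illustration):

```python
def pc_step(w, x, target, lr=0.1):
    """One predictive-coding-style update: the unit predicts its input,
    and only the prediction *error* drives the weight change."""
    prediction = w * x
    error = target - prediction   # large for novel data, near zero for familiar data
    w = w + lr * error * x        # error-weighted (delta-rule) update
    return w, error

w = 0.0
for _ in range(50):
    w, err = pc_step(w, x=1.0, target=2.0)
# Once the input is familiar, the error is near zero, so further updates
# barely move the weight -- a built-in brake against overwriting old knowledge.
```

This is the regularisation intuition in miniature: the error signal itself tells the system which inputs are already well modeled and should leave the weights alone.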

The errors are also not distributed in the same way as you'd expect from a human. The tools can synthesize a whole feature in a moderately complicated web app including UI code, schema changes, etc, and it comes out perfectly. Then I ask for something simple like a shopping list of windshield wipers etc for the cars and that comes out wildly wrong (like wrong number of wipers for the cars, not just the wrong parts), stuff that a ten year old child would have no trouble with. I work in the field so I have a qualitative understanding of this behavior but I think it can be extremely confusing to many people.

I was just trying to reconcile his reply with the charts. Have you tested how this scales down for smaller systems, as one might find on the management side of a network switch?

They're also wrong. The geographic center (around Ellensburg or so) is also in what is known as Eastern WA (east of the Cascades).

Spokane is Eastern Washington, and the college in Cheney is literally called Eastern; it's just not a desert.

For the raw footage of something with as much contrast as the moon against a backdrop of space it would make sense to use a format like ProRes that preserves more dynamic range.

Why would it affect selling a business?

Previous owner can start the same business immediately and poach all the clients, reducing the value of the sold business to zero. Buyers obviously anticipate this and won't buy the business without the non-compete.

That would violate a non-compete attached to the sale.

The posted article is literally about banning non-competes.

...for employees. For business owners there are different rules (IIRC > 1% ownership threshold).

There's also a big difference between starting a competing business like your example, and being barred from say working on "cloud infrastructure" because your previous employer also worked on "cloud infrastructure". It can be blurry for executives, but in general noncompetes seem to be used to push pay down more than for any legitimate business purpose.

If we wait long enough someone out there will upgrade it and send it back to us.

For those unaware (spoiler follows) this is the reveal in the plot of 'Star Trek - The Motion Picture'.

They were $450 or so until recently, now... good luck.


