I suspect his diagnosis is pretty accurate, though. The bitter lesson came up when deep learning was already mainstream. The text discusses how that happened, and it may simply be that convenience beats accuracy. Accuracy is an epistemic value, but current AI is largely driven by market values. When accuracy happens to align with them, great; otherwise, market-laden convenience reigns. Commercially, it is often more convenient to change the world itself to make it easier for our models (consider how willing we are to create special zones without pedestrians or human-driven vehicles as a "solution" to autonomous vehicles' shortcomings).
I like RSS and I use it, but this sounds like wishful thinking. Even the amount of human-produced content is too big for anyone to be their own curator. We each have a few authors or sites we keep up with, but beyond that we must rely on external help, such as HN or an agent.
Their argument is not sound, but it is informative to pay attention to what they consider "evidence" for AGI. It's a nice instance of a problem that seems peculiar to AI: the field tries to define both its target phenomenon and how well it is doing toward it.
I confess I found it disappointing. Their main claim seems to be that thinking comprises pattern matching and pattern completion (allowing them to say that LLMs do resemble something we humans do), but that's essentially the idea behind the connectionist movement of the 1980s, the movement from which current DNN models emerged. Perhaps a friend of 1960s symbolic AI would be unhappy with that claim, but there are not many of those around anymore (Gary Marcus is often misrepresented as one, but his view is that models should be hybrid, not purely symbolic).
Nowadays, the question of whether LLMs are "actually" doing something similar to human thinking revolves around other dimensions, such as whether they rely on emergent world models. Whether such world models would require symbolic reasoning is a different matter.
> Qubes is a good approach to an OS, but it's Xen security, not OS security, and I'd rather run a secure OS other than Fedora or Debian on Qubes.
Not sure what you mean by "Xen security" in contrast with "OS security": Qubes is an OS. A lot depends on your threat model, but if you have high security needs, Qubes is likely to be your best companion.
Anyway, another reasonable choice is Kicksecure, the Debian-based OS underlying Whonix (Kicksecure focuses on security, and Whonix adds its privacy/anonymity setup on top of it). You can run Kicksecure as a VM within Qubes, by the way.
I honestly don't think you'll get very far unless the team is already on board with the plan to switch to a more asynchronous culture. If they lack the motivation, they won't bother improving their writing.
The tricky part is that they might be interested in the results you promise but still lack motivation. It's common for someone who's interested in losing weight not to be very motivated to do it themselves: they don't want to lose weight, they want to "be slimmed down by someone else". You may face a similar difficulty. That's part of the reason why changing a culture is so hard.
I know that's not a direct answer to your question, but I needed this context to say this: I think whatever tricks and tips you can come up with yourself are more likely to succeed. That's because you're already familiar with the specific needs and the specific difficulties people might face when handling the most frequent and repetitive issues.
Rather than thinking about how to get fuzzy improvements in people's overall writing skills, perhaps you could focus on suggesting specific solutions for specific problems ("hey everyone, I've noticed that when handling X, people usually forget to mention p, q, and z. So let's agree on this structure: '1) p; 2) q; 3) z' whenever handling X"). I think that, by accumulating lots of small tricks like these over time, you'll be able to go further. Going bottom-up seems easier than trying to change things top-down.
I'm not sure how much of this applies, since I'm not familiar with your concrete situation, but I hope it helps somehow.
I was about to make a very similar comment. I won't say I'll never switch to neovim, for a lot depends on future vim/neovim development, and unexpected things happen.
But I do agree that vim's stability is priceless. It's been years without any need for major changes in my vimrc, and without any trouble with the plugins I use.
I'm sympathetic to the author, though. Whenever you need to change, finding an alternative that "just works" makes things easier, and you can quickly get back to being productive. I'm not so sure I wouldn't go down a similar path if the vim ecosystem collapsed.
It's not hard to see how an AI agent could achieve something similar, even as a step toward some innocently established goal.
Poor security + hacker-like capacities for anyone using an AI agent.
What could go wrong?