Yeah, which is why this Ursa-for-Kafka (UFK) fork can be a drop-in replacement. It supports both classic disk-backed Kafka topics and the higher-latency Ursa engine topics.
I agree with your assessment re: latency. I've been very vocal about this ever since this type of architecture came out.
It could also be that the world as a whole cares less about privacy today than it did seven years ago. Without a relative measurement from a similar platform, it's a bit of an empty statement.
One thing that has certainly changed is that algorithms have become more aggressive. If your content isn't performing well, it gets hidden much faster and more aggressively than before. This makes sense when you consider it from the PoV of the platforms (they have much more content to choose from).
A very in-depth tool that acts as an easily browsable reference to many Apache Kafka internals: configuration options, error types, the wire format (by version), config advice, and version-upgrade diffs.
They make $8-9B a year (~90% profit margins) selling software for mainframes, which were deployed ages ago but still have to be maintained because critical COBOL business code was written on their systems - and migration is too risky/costly.
To give you an idea:
- of the risk in regulated industries like banking: a UK bank was once fined *$62 million* for botching a mainframe migration and causing downtime.
- of the difficulty and risk in non-tech industries: Australia once spent *$120 million* trying to migrate its social security system off mainframes... and failed.
Mainframes are not their only business, of course, but it's a major cash cow that's underappreciated. I, for one, didn't know that business keeps growing.
I think it's worth building your own miniaturized versions of OpenClaw/claw-like agents. They're easy enough to build, and the confidence of having them in a language you're familiar with, with a surface area small enough to limit risk, etc., seems worth it imo
We will need some sort of payment-block checkmark for use of social media soon enough. This claw phenomenon is opening the floodgates of spam even more than before.
Mm, not at all. The usual LLM doesn't have its own file system, browser, persistent memory of all actions, etc. The usual LLM experience is you open chatgpt.com and have a singular chat session.
> I built this because running Kafka locally for development is painful —
> gigabytes of RAM, slow startup, ZooKeeper/KRaft configuration. I just
> wanted something that accepts produce requests and gets out of the way.
This is not true. Kafka's latest `-native` images are very fast to start up (~100ms) and use relatively little memory.
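To try this yourself, here's a minimal sketch of running the native image in KRaft mode via Docker (the `apache/kafka-native` image name is the official one on Docker Hub; the tag, container name, and port mapping are illustrative, so adjust to your setup):

```shell
# Start a single-node KRaft broker from the GraalVM-based native image.
# No ZooKeeper and no extra config needed for a local dev default.
docker run -d --name kafka-native -p 9092:9092 apache/kafka-native:latest

# Watch the broker log to see how quickly it reports itself ready.
docker logs -f kafka-native
```

The native binary skips JVM startup and class loading, which is where the fast cold-start comes from.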