
My Mastodon instance cries at how little RAM you're able to run that on. I'm envious.


Stuff like that is turning into one of our running gags at work. You have all these machine learning processes and our legacy Java applications gobbling up memory by the dozens of gigabytes, or our newer Java apps being a bit less greedy but still gigabyte-sized - and it's never enough.

And then we have something like our Zabbix proxies: when they passed a couple tens of thousands of items, we had to increase their cache memory... from 16MB to a glorious 64MB. Such a splurge. And the server is using a whole 128MB for its write caches. Or our Grafanas, which use about 500MB of memory server-side in total to chew through oodles of data.
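
For anyone curious, the whole "splurge" is a couple of lines in the stock config files. A sketch, assuming the standard parameter names (double-check against your Zabbix version); the split of the server's 128MB across the two write caches is just illustrative:

    # zabbix_proxy.conf
    CacheSize=64M            # configuration cache, up from our old 16M

    # zabbix_server.conf -- write caches, roughly 128M in total
    HistoryCacheSize=96M     # buffer for incoming history values
    TrendCacheSize=32M       # buffer for trend data before it hits the DB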


It's still ridiculous that we ever thought giving Java processes so much memory was normal. I'm very happy Go and Rust are getting more mainstream.
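
To be fair, the JVM mostly uses whatever you hand it, and the way you cap memory in each ecosystem is telling. A sketch with hypothetical app names (GOMEMLIMIT needs Go 1.19+):

    # Java: the heap ceiling is an explicit flag, and ops folks tend to be generous
    java -Xms256m -Xmx512m -jar legacy-app.jar

    # Go: no flag needed; optionally give the runtime a soft memory limit
    GOMEMLIMIT=512MiB ./newer-app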


Indeed. In some cases, I find the memory needs plausible. For example, our logging ES has nodes running with 32GB of heap, but those nodes are indexing and searching ~900GB of logs each. Per node, that's actually in line with our larger Postgres instances, which run with 32-64GB of memory on 600-800GB of dataset. Or our Logstash, which needs 4-8GB to go through 2k messages per second.
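
Both of those heaps are set the same way, in each service's jvm.options. A sketch matching the sizes above; the usual advice is to keep -Xms and -Xmx equal, and to stay a bit under 32GB on ES so compressed object pointers stay enabled:

    # elasticsearch jvm.options
    -Xms32g
    -Xmx32g

    # logstash jvm.options -- ours sit somewhere in the 4-8G range
    -Xms4g
    -Xmx4g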

But then I have other stuff running on huge nodes, struggling to process 10 messages per second and falling over whenever load increases by 10%.



