
Indeed, as I used to tell my ops colleagues when they pointed to RAM utilization graphs, "we paid for all of that RAM, why aren't we using it?"



"Uneaten food" <CHOMP CHOMP> "is wasted food." <CHOMP CHOMP>

Because OoM errors are oh so fun.

I write algorithms that operate on predictable amounts of data. It's very easy to work out the maximum number of things we need to hold and then allocate it all in fixed-size arrays. If you allocate all your memory at startup, you can never OOM at runtime. Some containers need over 100 GB, but as the parent comment said, we've already bought the RAM.

I write algorithms that operate on less predictable amounts of data.

If you operate over all of your data every time it's a lot more predictable ;)

The data I operate on comes in from the outside world; I can't operate on all of it because most of it doesn't exist yet. I can't process an event that hasn't happened yet.

Caches are automatically released by the OS when demand for memory increases.

You eventually run out of caches to evict.

That is completely irrelevant to this discussion about using the RAM you’ve paid for.

At that point you can still fall back onto swap on NVME.

Doesn’t Apple use pretty damn quick NVMe? I wonder how much of a performance drop it actually is. Certainly not as bad as running a swap file on a 5400 rpm HDD…

Isn't that NVMe also very expensive to replace, because it's tied to hardware identifiers? If you keep swapping all the time, surely the NVMe would be the first part to fail.

This was heavily debated in the 11.4 timeframe because of the risk that that version of the OS could excessively wear the NVMe.

https://appleinsider.com/articles/21/06/04/apple-resolves-m1...

The issue was subsequently resolved, and the consensus was that with modern wear leveling this isn't much of a concern.

I have a 2021 MacBook Pro with the original drive. I use it heavily for development practically every day and just dumped the SMART data.

Model Number: APPLE SSD AP1024R

=== START OF SMART DATA SECTION ===

SMART overall-health self-assessment test result: PASSED

Available Spare: 100%

Available Spare Threshold: 99%

As always, YMMV


How often are OOMs caused by lack of RAM rather than programming?

> How often are OOMs caused by lack of RAM rather than programming?

You're right, but in a production deployment, that extra RAM might mean the difference between a close call you patch the next day and an all-hands emergency calling in devops and engineers together during peak usage.

source: been there


we're still talking about the MacBook, right?

> we're still talking about the MacBook, right?

na, this is just PTSD talking


I don't think macOS OOMs the way Linux does

(and to be honest, the way Linux acts on OOMs is quite debatable)


macOS can OOM, ish.

If you don't have any more disk space for swap, or memory pressure gets too high, you get the "You've run out of application memory" dialog box with a list of applications you can force quit, and macOS leaves it up to the user what to kill instead of the system choosing automatically.


do you also say that about hdd space? about money in the bank?

It’s counterintuitive, but I learned this best by playing RTS games. If you don’t spend money, your opponent can outdo you on the map simply by spending theirs. But the principle extends: everything you have doing nothing (buildings, units, etc.) is a loss. The most efficient play is to have all your resources working for you at all times.

If you don't have savings to spend for a potential change of tactics, larger players, groups or players with different strategies can easily overtake you as your perfectly efficient economy collapses.

Going to also echo the comment that this isn't an RTS


> this isn't an RTS

Yep. RTS is a context where those principles actually hold.

In real life you aren’t in a 1-1 matchup with competitive success criteria.


It's why I wake up at 3am to make sure my agents aren't waiting on me :D

> It’s counterintuitive but I learned this best by playing RTS games. If you don’t spend money your opponent can outdo you on the map by simply spending their money.

OK, hear me out over here:

We are not in an RTS.

Edit: in real-world settings lacking redundancy tends to make systems incredibly fragile, in a way that just rarely matters in an RTS. Which we are _not in_.


Agreed. Real life is not an RTS. Optimizing computer or business resources - kind of like one.

Why wouldn't he say it about HDD space? Do you buy HDDs to keep them empty?

And as for the money analogy, what's the idea there, that memory grows interest? Or that it's better to put your money in the bank and leave it there, as opposed to buying assets or stocks, and of course paying for food, rent, and stuff you enjoy?


Money analogy could better be put as one of:

1. Store your money in a 0% interest account—leave RAM totally unused—or put it in an account that actually generates some interest—fill the RAM with something, anything that might be useful.

2. Store your money buried in your backyard or put it in a bank account? If you want to actually use your money, it's already loaded into the bank.

Imperfect analogies because money is fungible. In either case though, money getting spent day-to-day (e.g. the memory being used by running programs) is separate.


Then why do you have any hard drive space available at this moment?

Isn't it obvious?

Because wanting to utilize something as much as you can to get your money's worth, and wanting to fully exhaust it as a resource are two different things.


HDD is for storage. RAM is for speed?

Why do you even need RAM when you could run everything from your HDD at a much cheaper cost/MB?


It's the same in all three cases: using HDD space, money and RAM for good purposes (disk cache) is useful, wasting it (Electron) is bad.

(Weird side question: are you by any chance the Jason Farnon who wrote IBFT?)


> about money in the bank?

Yes, generally. That's the entire idea behind the stock market.


> do you also say that about hdd space?

For slightly different reasons. My game drive is using about 900 GB out of 953 GB usable space - because while I have a fast connection, it's nicer to just have stuff available.

Same for some projects where we need to interface with cloud APIs to fetch data - even though the services are available and we could pull some of the data on demand, sometimes it's nicer to just have a 10 TB drive and to pull larger datasets (like satellite imagery) locally, just so that if you need to do something with it in a few weeks, you won't have to wait for an hour.


because memory access performance is not O(1) but depends on the size of what's in memory (https://www.ilikebigbits.com/2014_04_21_myth_of_ram_1.html). Every byte used makes the whole thing slower.

I'm not following; isn't this just a graph showing that operation speed largely depends on the odds of the data being in cache at various levels (CPU cache / RAM / disk)?

The memory operation itself is O(1), around 100 ns; at a certain point we're doing a full RAM fetch each time because the odds of the data being in CPU cache are low.

Typically O notation is an upper bound, and it holds well there.

That said, due to cache hits, the lower bound is much lower than that.

You see similar performance degradation if you iterate over a two-dimensional array in the wrong index order.


O notation is technically meaningless for systems with bounded resources. That said, yes, the performance depends on the probability of cache hits, notably also in the TLB. For large amounts of memory and random access patterns, assuming logarithmic cost for memory access tends to model reality better.

The author of that post effectively re-defines "memory"/"RAM" as "data", and uses that to say "accessing data in the limit scales to N x sqrt(N) as N increases". Which, like, yeah? Duh, I can't fit 200PB of data into the physical RAM of my computer and the more data I have to access the slower it'll be to access any part of it without working harder at other abstraction layers to bring the time taken down. That's true. It's also unrelated to what people are talking about when they say "memory access is O(1)". When people say "memory access is O(1)" they are talking about cases where their data fits in memory (RAM).

Their experimental results would in fact be a flat line IF they could disable all the CPU caches, even though performance would be slow.


Memory access performance depends on the _maximum size of memory you need to address_. You can clearly see it in the graph of that article where L1, L2, L3 and RAM are no longer enough to fit the linked list. However while the working set fits in them the performance scales much better. So as long as you give priority to the working set, you can fill the rest of the biggest memory with whatever you want without affecting performance.

RAM is always storing something, it’s just sometimes zeros or garbage. Nothing in how DRAM timings work is sensitive to what bits are encoded in each cell.

> Every byte used makes the whole thing slower.

This is an incorrect conclusion to make from the link you posted in the context of this discussion. That post is a very long-winded way of saying that the average speed of addressing N elements depends on N and the size of the caches, which isn't news to anyone. Key word: addressing.


Huh? There is no such thing as "empty memory". There is always something stored in memory; the important thing is whether you care about those specific bits or not.

And no, the article you linked is about caching, not RAM access. Hardware-wise, it doesn't matter what the cells contain; access latency is the same. There is going to be some degradation with read/write cycles, but that's beside the point.


Why is it not O(1)? It has to be serviced within a fixed deadline, so it is still constant time.



