Hacker News | adgjlsfhk1's comments

A lot of the answer is that if you can do more work while generating less garbage (lower allocation rate) this problem basically solves itself. Basically every "high performance GC language" other than Java allows for "value type"/"struct"s which allow for way lower allocation rate, which puts a lot less pressure on the GC.

And yet Java outruns pretty much all of them, because it doesn't actually allocate everything on the heap all of the time. And you've been able to declare and work with larger structures in raw memory for ... 20 years? You mostly don't need to, but sometimes you want to win benchmark wars.

And of course it's getting value types now, so it'll win there too. As well as vectors.


The biggest thing is BigInt by default. It makes every integer operation require an overflow check.

JS (when using ints, which v8 does) is the same in this respect.
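Java spells that check out explicitly in `Math.addExact`, which is roughly the branch a BigInt-by-default language has to execute on every integer operation before deciding whether to promote. A minimal sketch:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int a = Integer.MAX_VALUE;

        // Plain int arithmetic silently wraps around:
        System.out.println(a + 1);                // -2147483648

        // An overflow-checked add; a BigInt-by-default language performs
        // a branch like this on every integer op, promoting on overflow:
        try {
            Math.addExact(a, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow -> promote to BigInt");
        }
    }
}
```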

The counterpoint is that Java has so much SOTA GC work precisely because the language makes writing efficient code that doesn't heavily tax the GC basically impossible.

Tell me you haven't looked at Java in 15 years without telling me.

Given that the vast majority of non-GC language code does a terrible job of managing memory, it's not difficult at all for the JVM to win out on efficient, reliable systems.


To be clear, I think that GC is absolutely the right approach, you just need an object model that lets you write idiomatic code that allocates an order of magnitude or 2 less. Once Valhalla is (finally) released, the Java ecosystem will start to have the tools to write efficient code (the same tools that C# has had for ~2 decades now), but until then it's just completely impossible to write object oriented code in Java without millions of allocations per second.

Absolutely not. The way to spend as much money as possible is to do intentionally inefficient repairs (e.g. last minute/reactive). The providers gain from grid unreliability since by causing problems, they get to justify spending money to "fix" them.

Mimalloc v3 has just come out (about a month ago) and is a significant improvement over both v2 and v1 (what you likely last tested)

One of the best parts about GC languages is they tend to have much more efficient allocation/freeing because the cost is much more lumped together so it shows up better in a profile.

Agreed, however there is also a reason why the best ones also pack multiple GC algorithms, like in Java and .NET, because one approach doesn't fit all workloads.

Then there’s perl, which doesn’t free at all.

Perl frees memory. It uses refcounting, so you need to break heap cycles or it will leak.

(99% of the time, I find this less problematic than Java’s approach, fwiw).


Unless this has changed recently, perl doesn't free memory to the kernel, only within its own process/vm.

Freedom is overrated... :P

doesn't java also?

I heard that was a common complaint for minecraft


Minecraft for somewhat silly reasons was largely stuck using Java8 for ~a decade longer than it should have which meant that it was using some fairly outdated GC algorithms.

"silly reasons" being Java breaking backwards compatibility

A decade seems a usual timescale for that, considering e.g. python 2->3


So much software was stuck on Java 8 and for so long that some of the better GC algorithms got backported to it.

What do you mean - whether Java returns memory to the OS? Which one - the Java heap, or the malloc/free done by the JVM?

Java is pretty greedy with the memory it claims. Especially historically it was pretty hard to get the JVM to release memory back to the OS.

To an outsider, that looks like the JVM heap just steadily growing, which is easy to mistake for a memory leak.


> Especially historically it was pretty hard to get the JVM to release memory back to the OS.

This feels like a huge understatement. I still have some PTSD around when I did Java professionally between like 2005 and 2014.

The early part of that was particularly horrible.


Java has a quite strict max heap setting; it's very uncommon to let it allocate up to 25% of the system memory (the default). It won't grow past that point, though.

Barring bugs/native leaks, Java has very predictable memory allocation.


we aren't talking about allocation, tho

we are talking about DEallocation


it's a reply to:

"To an outsider, that looks like the JVM heap just steadily growing, which is easy to mistake for a memory leak."

I cut the part about it being possible to make the JVM return heap memory after compaction, but usually it's not done, i.e. if something grew once, it's likely to do it again.


This only really ends up being a problem on Windows. On systems with proper virtual memory setups, the cost of unused memory is very low (since the OS can just page it out).

Unfortunately, the JVM, and collectors like the JVM's, play really badly with virtual memory. (Actually, G1 might play better. Everything else does not.)

The issue is that through the standard course of a JVM application running, every allocated page will ultimately be touched. The JVM fills up new gen, runs a minor collection, moves old objects to old gen, and continues until old gen gets filled. When old gen is filled, a major collection is triggered and all the live objects get moved around in memory.

This natural action of the JVM means you'll see a sawtooth of used memory in a properly running JVM where the peak of the sawtooth occasionally hits the memory maximum, which in turn causes the used memory to plummet.


Depends on which JVM, PTC and Aicas do alright with their real time GCs for embedded deployment.

I've never really used anything other than the OpenJDK and Azuls.

How do PTC and Aicas do GC? Is it ref counted? I'm guessing they aren't doing moving collectors.


They are real time GCs, nothing to do with refcounting.

One of the founding members of Aicas is the author of "Hard Realtime Garbage Collection in Modern Object Oriented Programming Languages" book, which was done as part of his PhD.


For video games it is pretty bad, because reading back a page from disk containing "freed" (from the application perspective, but not returned to the OS) junk you don't care about is significantly slower than the OS just handing you a fresh one. A 10-20ms delay is a noticeable stutter and even on an SSD that's only a handful of round-trips.

Games today should be using ZGC.

There's a lot of bad tuning guides for minecraft that should be completely ignored and thrown in the trash. The only GC setting you need for it is `-XX:+UseZGC`

For example, a number of the minecraft golden guides I've seen will suggest things like setting pause targets but also survivor space sizes. The thing is, the pause target is disabled when you start playing with survivor space sizes.


Overall, if Java hits the swap, it's a bad case. Windows is like a special beast when it comes to 'swapping', even if you don't truly need it. On Linux, all (server) services run with swapoff.

Not used Windows Server that much?

When it works. Many programs in GC languages end up fighting the GC by allocating a large buffer and managing it by hand anyway, because when performance counts you can't have allocation time in there at all. (You see this in C all the time as well.)

That's generally a bad idea. Not always, but generally.

It was a better idea when Java had the old mark and sweep collector. However, with the generational collectors (which is all Java collectors now, except for Epsilon) it's more problematic. Reusing buffers and objects in those buffers pretty much guarantees that buffer ends up in oldgen. That means that to clear it out, the VM has to do more expensive collections.

The actual allocation time for most of Java's collectors is almost 0, it's a capacity check and a pointer bump in most circumstances. Giving the JVM more memory will generally solve issues with memory pressure and GC times. That's (generally) a better solution to performance problems vs doing the large buffer.

Now, that said, there certainly have been times where allocation pressure is a major problem and removing the allocation is the solution. In particular, I've found boxing to often be a major cause of performance problems.


If your workload is very regular, you can still do better with an arena allocator. Within the arena, it uses the same pointer-bump allocation as Java normally uses, but then you can free the whole arena by resetting the pointer to its initial value. If you use the arena for servicing a single request, for instance, you then reset as soon as you're done with the request, setting you up with a totally free arena for the next request. That's more efficient than a GC. But it also requires your algorithm to fall into that pattern where you KNOW that you can and should throw everything from the request away. If you can't guarantee that, then modern collectors are pretty magical and tunable.
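The JDK's own `java.lang.foreign.Arena` (Java 21+) offers a confined version of this idea; here's a toy, byte-offset-based sketch (hypothetical `RequestArena` class) just to show the mechanism:

```java
import java.nio.ByteBuffer;

// Toy bump-pointer arena: alloc() is a bounds check plus a pointer bump,
// and reset() "frees" everything at once by rewinding that pointer.
public class RequestArena {
    private final ByteBuffer buf;
    private int top = 0;

    public RequestArena(int capacity) { buf = ByteBuffer.allocate(capacity); }

    // Returns the offset of a freshly reserved region of `size` bytes.
    public int alloc(int size) {
        if (top + size > buf.capacity()) throw new OutOfMemoryError("arena full");
        int off = top;
        top += size;
        return off;
    }

    public void reset() { top = 0; }   // O(1) bulk free at end of request

    public static void main(String[] args) {
        RequestArena arena = new RequestArena(1024);
        int a = arena.alloc(128);             // offset 0
        int b = arena.alloc(128);             // offset 128
        System.out.println(a + " " + b);      // 0 128
        arena.reset();                        // whole request's memory reclaimed
        System.out.println(arena.alloc(64));  // 0 again
    }
}
```

The win is exactly the pattern described above: per-request lifetimes mean deallocation degenerates to one pointer write.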

If people didn't need to do it, they wouldn't generally do it. Not always, but generally.

People do stuff they shouldn't all the time.

For example, some code I had to clean up pretty early on in my career was a dev, for unknown reasons, reinventing the `ArrayList` and then using that invention as a set (doing deduplication by iterating over the elements and checking for duplicates). It was done in the name of performance, but it was never a slow part of the code. I replaced the whole thing with a `HashSet` and saved ~300 loc as a result.

This individual did that sort of stuff all over the code base.
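The contrast in miniature (a hypothetical reconstruction, not the actual code):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DedupDemo {
    // The reinvented version: a linear scan on every insert, O(n^2) overall.
    static List<String> dedupByScan(List<String> in) {
        List<String> out = new ArrayList<>();
        for (String s : in) {
            if (!out.contains(s)) out.add(s);   // scan the whole list each time
        }
        return out;
    }

    // The standard-library version: O(1) expected per insert.
    static Set<String> dedupBySet(List<String> in) {
        return new LinkedHashSet<>(in);         // also keeps insertion order
    }

    public static void main(String[] args) {
        List<String> in = List.of("a", "b", "a", "c", "b");
        System.out.println(dedupByScan(in));    // [a, b, c]
        System.out.println(dedupBySet(in));     // [a, b, c]
    }
}
```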


Reinventing data structures poorly is very common.

Heap allocation in Java is something trivial that happens constantly. People typically do funky stuff with memory allocation because they have to, because the GC is causing pauses.

People avoid system allocators in C++ too, they just don't have to do it because of uncontrollable pauses.


> People typically do funky stuff with memory allocation because they have to

This same dev did things like putting what he deemed as being large objects (icons) into weak references to save memory. When the references were collected, invariably they had to be reloaded.

That was not the source of memory pressure issues in the app.

I've developed a mistrust for a lot of devs "doing it because we have to" when it comes to performance tweaks. It's not a never thing that a buffer is the right thing to do, but it's not been something I had to reach for to solve GC pressure issues. Often times, far simpler solutions, like pulling an allocation out of the middle of a loop or switching from boxed types to primitives, were all that was needed to relieve memory pressure.

The closest I've come to it is replacing code which would do an expensive and allocation heavy calculation with a field that caches the result of that calculation on the first call.
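A minimal sketch of that pattern (hypothetical `LazyStats` class; note it is not thread-safe, which is fine for illustration):

```java
public class LazyStats {
    private final int[] data;
    private double cachedMean = Double.NaN;     // NaN marks "not computed yet"

    public LazyStats(int[] data) { this.data = data; }

    // First call pays for the computation; later calls just read the field,
    // so the allocation-heavy work happens exactly once.
    public double mean() {
        if (Double.isNaN(cachedMean)) {
            double sum = 0;
            for (int v : data) sum += v;        // stand-in for the expensive part
            cachedMean = sum / data.length;
        }
        return cachedMean;
    }

    public static void main(String[] args) {
        LazyStats s = new LazyStats(new int[]{1, 2, 3, 4});
        System.out.println(s.mean());           // 2.5, computed now
        System.out.println(s.mean());           // 2.5, from the cache
    }
}
```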


Premature optimization is the root of all evil.

I'm not sure why your rationale for how to deal with garbage collected memory is based on a guy that didn't know standard data structures and your own gut feelings.

Any program that cares about performance is going to focus on minimizing memory allocation first. The difference in a GCed language like Java is that the problems manifest as GC pauses that may or may not be predictable. In a language like C++ you can skip the pauses and worry about the overall throughput.


Well, let me just circle back to the start of this comment chain.

> Many programs in GC language end up fighting the GC by allocating a large buffer and managing it by hand

That's the primary thing I'm contending with. This is a strategy for fighting the GC, but it's also generally a bad strategy. One that I think gets pulled more because someone heard of the suggestion and less because it's a good way to make things faster.

That guy I'm talking about did a lot of "performance optimizations" based on gut feelings and not data. I've observed that a lot of engineers operate that way.

But I've further observed that when it comes to optimizing for the GC, a large amount of problems don't need such an extreme measure like building your own memory buffer and managing it directly. In fact, that sort of a measure is generally counter productive in a GC environment as it makes major collections more costly. It isn't a "never do this" thing, but it's also not something that "many programs" should be doing.

I agree that many programs with a GC will probably need to change their algorithms to minimize allocations. I disagree that "allocating a large buffer and managing it by hand" is a technique that almost any program or library needs to engage in to minimize GCs.


This is a strategy for fighting the GC, but it's also generally a bad strategy.

Allocating a large buffer is literally what an array or vector is. A heap uses a heap structure and hops around in memory for every allocation and free. It gets worse the more allocations there are. The allocations are fragmented and in different parts of memory.

Allocating a large buffer takes care of all this if it is at all possible to do. It doesn't make sense to make lots of heap allocations when what you want is multiple items next to each other in memory and one heap allocation.

That guy I'm talking about did a lot of "performance optimizations" based on gut feelings and not data.

You need to let this go, that guy has nothing to do with what works when optimizing memory usage and allocation.

But I've further observed that when it comes to optimizing for the GC, a large amount of problems don't need such an extreme measure like building your own memory buffer and managing it directly.

Making an array of contiguous items is not an "extreme strategy", it's the most efficient and simplest way for a program to run. Other memory allocations can just be an extension of this.

I agree that many programs with a GC will probably need to change their algorithms to minimize allocations. I disagree that "allocating a large buffer and managing it by hand"

If you need the same amount of memory but need to minimize allocations, how do you think that is done? You make larger allocations and split them up. You keep saying "managing it by hand" as if there is something that has to be tricky or difficult. Using indices of an array is not difficult, and neither is handing out indices or ranges in small sections.


> A heap uses a heap structure and hops around in memory for every allocation and free.

Not in the JVM. And maybe this is ultimately what we are butting up against. After all, the JVM isn't all GCed languages, it's just one of many.

In the JVM, heap allocations are done via bump allocation. When a region is filled, the JVM performs a garbage collection which moves objects in the heap (it compacts the memory). It's not an actual heap structure for the JVM.

> It doesn't make sense to make lots of heap allocations when what you want is multiple items next to each other in memory and one heap allocation.

That is (currently) not possible to do in the JVM, barring primitives. When I create a `new Foo[128]` in the JVM, that creates an array big enough to hold 128 references of Foo, not 128 Foo objects. Those have to be allocated onto the heap separately. This is part of the reason why managing such an object pool is pointless in the JVM. You have to make the allocations anyways and you are paying for the management cost of that pool.

The object pool is also particularly bad in the JVM because it stops the JVM from performing optimizations like scalarization. That's where the JVM can completely avoid a heap allocation altogether and instead pulls out the internal fields of the allocated object to hand off to a calling function. In order for that optimization to occur, an object can't escape the current scope.
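A sketch of what a scalarization (scalar replacement) candidate looks like; whether C2 actually elides the allocation depends on escape analysis and inlining, so treat this as illustrative:

```java
// The Point allocated inside distSq never escapes the method, so the JIT's
// escape analysis may replace it with two locals and skip the heap
// allocation entirely. Pooling the Point would defeat that optimization.
public class ScalarDemo {
    record Point(double x, double y) {}

    static double distSq(double x, double y) {
        Point p = new Point(x, y);   // candidate for scalar replacement
        return p.x() * p.x() + p.y() * p.y();
    }

    public static void main(String[] args) {
        System.out.println(distSq(3, 4));   // 25.0
    }
}
```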

I get why this isn't the same story if you are talking about another language like C# or go. There are still the negative consequences of needing to manage the buffer, especially if the intent is to track allocations of items in the buffer and to reassign them. But there is a gain in the locality that's nice.

> Using indices of an array is not difficult and neither is handing out indices or ranges to in small sections.

Easy to do? Sure. Easy to do fast? Well, no. That's entirely the reason why C++ has multiple allocators. It's the crux of the problem an allocator is trying to solve in the first place: "How can I efficiently give a chunk of memory back to the application?"

Obviously, it'll matter what your usage pattern is, but if it's at all complex, you'll run into the same problems that the general allocator hits.


In the JVM, heap allocations are done via bump allocation.

If that were true then they wouldn't be heap allocations.

https://www.digitalocean.com/community/tutorials/java-jvm-me...

https://docs.oracle.com/en/java/javase/21/core/heap-and-heap...

not possible to do in the JVM, barring primitives

Then you make data structures out of arrays of primitives.
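In practice, "data structures out of arrays of primitives" usually means a structure-of-arrays layout; a hypothetical `Particles` sketch:

```java
// Structure-of-arrays: one heap allocation per field array instead of one
// object per particle, and the doubles sit contiguously in memory.
public class Particles {
    final double[] x, y, vx, vy;

    Particles(int n) {
        x = new double[n]; y = new double[n];
        vx = new double[n]; vy = new double[n];
    }

    void step(double dt) {
        for (int i = 0; i < x.length; i++) {
            x[i] += vx[i] * dt;     // tight, cache-friendly loop
            y[i] += vy[i] * dt;
        }
    }

    public static void main(String[] args) {
        Particles p = new Particles(3);
        p.vx[0] = 1.0;
        p.step(0.5);
        System.out.println(p.x[0]);  // 0.5
    }
}
```

The trade-off is ergonomics: you give up having a `Particle` object in exchange for locality and far fewer allocations.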

Easy to do? Sure. Easy to do fast? Well, no. That's entirely the reason why C++ has multiple allocators.

I don't know what this means. Vectors are trivial and if you hand out ranges of memory in an arena allocator you allocate it once and free it once which solves the heavy allocation problem. The allocator parameter in templates don't factor in to this.


> If that were true then they wouldn't be heap allocations.

"Heap" is a misnomer. It's not called that due to the classic CS "heap" datastructure. It's called that for the same reason it's called a heap allocation in C++. Modern C++ allocators don't use a heap structure either.

How the JVM does allocations for all its collectors is in fact a bump allocator in the heap space. There are some weedsy details (for example, threads in the JVM have their own heap space for doing allocation to avoid contention in allocation) but suffice it to say it ultimately translates into a region check then a pointer bump. This is why the JVM is so fast at allocation, much faster than C++ can be. [1] [2]

> I don't know what this means.

JVM allocations are typically pointer bumps, adding a number to a register. There's really nothing faster than it. If you are implementing an arena then you've already lost in terms of performance.

[1] https://www.datadoghq.com/blog/understanding-java-gc/#memory...

[2] https://inside.java/2020/06/25/compact-forwarding/


Modern C++ allocators don't use a heap structure either.

"Yes, malloc uses a heap data structure to allocate memory dynamically for programs. The heap allows for persistent memory allocation that can be managed manually by the programmer."

"How Malloc Works with the Heap

    Heap Data Structure: Malloc uses a heap data structure to manage memory. The heap is a region of a process's memory that is used for dynamic memory allocation.

    Memory Management: When you call malloc, it searches the heap for a suitable block of memory that can accommodate the requested size. If found, it allocates that memory and returns a pointer to it."

How the JVM does allocations for all its collectors is in fact a bump allocator in the heap space.

This doesn't make sense. It's one or the other. A heap isn't about getting more memory or mapping it into a process space, it is about managing the memory already in the process space and being able to free memory in a different order than you allocated it, then give that memory back out without system calls.

https://www.geeksforgeeks.org/c/dynamic-memory-allocation-in...

https://en.wikipedia.org/wiki/C_dynamic_memory_allocation

JVM allocations are typically pointer bumps, adding a number to a register.

I think you are mixing up mapping memory into a process (which is a system call not a register addition) and managing the memory once it is in process space.

The allocator frees memory and reuses it within a process. If freeing it was as simple as subtracting from a register then there would be no difference in speed between the stack and the heap and there would be no GC pauses and no GC complexity. None of these things are true obviously since java has been dealing with these problems for 30 years.

This is why the JVM is so fast at allocation, much faster than C++ can be

Java is slower than C++ and less predictable because you can't avoid the GC which is the whole point here.

The original point was that you have to either avoid the GC or fight the GC and a lot of what you have talked about is either not true or explains why someone has to avoid and fight the GC in the first place.


You're wrong for like 6 different reasons.

Java does do bump pointer allocation. The key is that when GC runs, surviving objects get moved. The slow part of GC isn't the allocation (GCs generally have much faster allocators than malloc). The slow part is the barriers that the GC requires and the pauses.


You're wrong for like 6 different reasons.

If that were true you could have listed one that made sense in context. This person was saying that allocation was as fast as incrementing a register, while continually ignoring the fact that deallocation needs to happen along with any organization of allocated memory.

Then they were ignoring that large allocations have big speed benefits for a reason.

Conflating java moving a pointer, mapping memory into a process, sbrk, and arena allocation is going in circles, but the fundamentals that people need to fight the GC or work around it remains.

Allocations have a price and the first step to optimizing any program is avoiding that, but in GC languages you get pauses on top of your slow downs.


Right? "I had this one contingent experience and I've built my entire world view and set of practices around it."

Any extra throughput is far overshadowed by trying to control pauses and too many heap allocations happening because too much gets put on the heap. For anything interactive, the options are usually fighting the GC or avoiding the GC.

and you can get another 2x improvement by incrementing i by 2

for (size_t i=p; i <= 0xFFFFFFFF / p; i+=2) {

since every other one will be a multiple of 2


IMO that algorithm is barely a sieve. It is technically Pareto optimal, but it seems like in practice it would just end up worse than either an optimized segmented sieve (it is ~log(n)^2 slower) or an O(1) memory method (probable prime test followed by a deterministic prime test), depending on how large and how wide the numbers you're testing are.

> I do agree that it'd be reasonable to just assume fast misaligned ops, but for whatever reason gcc and clang just don't, and that's what we have for defaults.

This very much has a "for now" on it. Once there is actually widespread hardware with the feature, I would be very surprised if the compilers don't update their heuristics (at least for RVA23 chips)


Indeed, we shall hope the heuristics update; but of course, if no compilers emit misaligned ops, hardware has no reason to actually bother making them fast, so it's primed to go wrong.

Hardware devs traditionally have been pretty good at helping the compiler teams with things like this (because it's a lot cheaper to improve the compiler than your chip).

RVA23 (and RVA20 before it) aren't an admission that RISC-V got it wrong. They're a necessary step to make RISC-V competitive in the desktop space, as opposed to micro-controllers where the flexibility is hugely valuable.

Rubbish.

The "G" extension, covering everything you want to run shrink-wrapped binaries on a standard OS, has been there since the May 7 2014 "User Level ISA, Version 2.0", which is before RISC-V started to be promoted outside of Berkeley, e.g. at Hot Chips 26 in August 2014 and the first RISC-V workshop in January 2015 in Monterey.

The name "G" has morphed into now (along with the C extension) being called "RVA20", which led to "RVA22" and "RVA23", but the principle is unchanged.

"An integer base plus these four standard extensions (“IMAFD”) is given the abbreviation “G” and provides a general-purpose scalar instruction set. RV32G and RV64G are currently the default target of our compiler toolchains."

pp 4-5 in

https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-...

