
Hi HN, I'm the author of this post and a JVM engineer working on OpenJDK.

I've spent the last few years researching GC for my PhD and realized that the ecosystem lacked standard tools to quantify GC CPU overhead—especially with modern concurrent collectors where pause times don't tell the whole story.

To fix this blind spot, I built a new telemetry framework into OpenJDK 26. This post walks through the CPU-memory trade-off and shows how to use the new API to measure exactly what your GC is costing you.

I'll be around and am happy to answer any questions about the post or the implementation!




Thank you for this interface! It will definitely help in tracking down GC-related performance issues and in selecting optimal settings.

One thing I still struggle with is seeing how much of a penalty our application threads suffer from other work, such as GC. In the blog you mention that GC's impact comes not only from the CPU doing work like traversing and moving (old/live) objects, but also from the cost of thread pauses and other barriers.

How can we detect these? Is there a way we can share the data in some way like with OpenTelemetry?

Currently I do it by running a load on an application and constraining its memory resources until the point where CPU usage skyrockets because of rapidly increasing GC cycles, then comparing the CPU utilization and the ratio between CPU used and work done.

Edit: it would be interesting to have the GC time spent added to a span. Even though that time is shared across multiple units of work, at least you can use it as a datapoint that the work was (significantly?) delayed by the GC occurring, or waiting for the required memory to be freed.
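As a rough sketch of that idea, the delta of accumulated GC time around a unit of work can be computed today with the long-standing GarbageCollectorMXBean API (this is wall time of collections, not the new JDK 26 CPU-time metric, and it is shared across all threads as noted above; the span attribute name in the comment is hypothetical):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcDelta {
    /** Sum of accumulated GC time (ms) across all collectors. */
    static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this collector
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long before = totalGcTimeMillis();

        // ... the unit of work being traced; here we just churn some garbage ...
        byte[][] garbage = new byte[1024][];
        for (int i = 0; i < garbage.length; i++) garbage[i] = new byte[64 * 1024];

        long gcDelta = totalGcTimeMillis() - before;
        // This delta could be attached to the span, e.g.
        // span.setAttribute("jvm.gc.time_ms", gcDelta) with the OpenTelemetry
        // API (hypothetical attribute name).
        System.out.println("GC time during span: " + gcDelta + " ms");
    }
}
```

Because the counter is process-wide, the delta only says "some GC ran while this span was open", which matches the "datapoint" framing above rather than an exact per-span cost.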


Thanks for reading! Your current method, pushing the load until the GC spirals and then comparing the CPU utilization, is exactly the painful, trial-and-error approach I'm hoping this new API helps alleviate.

You've hit on the exact next frontier of GC observability. The API in JDK 26 tracks the explicit GC cost (the work done by the actual GC threads). Tracking the implicit costs, like the overhead of ZGC's load barriers or G1's write barriers executing directly inside your application threads, along with the cache eviction penalties, is essentially the holy grail of GC telemetry.

I have spent a lot of time thinking about how to isolate those costs as part of my research. The challenge is that instrumenting those barrier events in a production VM without destroying application throughput (and creating observer effects) is incredibly difficult. It is absolutely an area of future research I am actively thinking about, but there isn't a silver bullet for it in standard HotSpot just yet.

Something you could look at with regard to thread pauses is time-to-safepoint; there is some existing support for analyzing that.

Regarding OpenTelemetry: MemoryMXBean.getTotalGcCpuTime() is exposed via the standard Java Management API, so OpenTelemetry should be able to hook into it.
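A minimal sketch of reading that value: the probe below uses reflection so it also compiles and runs on pre-26 JDKs, where the accessor does not exist yet. The method name comes from the comment above; its exact return type and units are per the JDK 26 Javadoc, so the result is handled as an opaque Object here.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.reflect.Method;

public class GcCpuProbe {
    /**
     * Returns the value of MemoryMXBean.getTotalGcCpuTime() (JDK 26+),
     * or null on older JDKs where the method does not exist.
     */
    static Object totalGcCpuTime() {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        try {
            // Look the method up on the public interface so the reflective
            // call works regardless of the internal implementation class.
            Method m = MemoryMXBean.class.getMethod("getTotalGcCpuTime");
            return m.invoke(bean);
        } catch (ReflectiveOperationException e) {
            return null; // method absent: pre-JDK-26 runtime
        }
    }

    public static void main(String[] args) {
        Object value = totalGcCpuTime();
        System.out.println(value == null
                ? "getTotalGcCpuTime() not available on this JDK"
                : "Total GC CPU time: " + value);
    }
}
```

An exporter could poll this periodically and emit it as an OpenTelemetry metric, the same way heap usage from the same MXBean is commonly exported.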


After writing my previous post I was wondering: do we actually need to instrument the barrier events and other code tied to a GC? Currently we benchmark our application with different GCs at different settings and resource constraints, and then we pick the sizing-and-settings combination we like best (read: the most work per total CPU that still fits within the allocation constraints of our clusters). What ultimately matters for production is how the app behaves in production.

This will not help directly when developing new GCs (or new versions of them). On the other hand, if we had a no-op GC, one that also omits the barriers etc. required for a GC to function, we could create a baseline for apps, provided we have enough total memory to run the benchmark in.

Edit: I guess we could then also use perf to compare cache misses between runs with different GC implementations and settings. Not sure how this works out in real life, as it will depend heavily on the CPU, kernel, and other loads.


The problem is that there is no baseline for measuring GC overhead. You cannot turn it off; you can only replace it and compare different strategies. For example, sbrk is technically a no-op GC, but it also has overhead and impact because it will not compact objects, giving you bad cache behavior. (It illustrates the OP's point that it is not enough to measure pauses: sbrk has no pauses but is easily outperformed.)

You could stop collecting performance counters around GC phases, but even if you are not measuring, the CPU still runs through those instructions, causing the second-order effects. And, as you mentioned, too-short-to-measure barriers and other bookkeeping overheads (updating ref counters etc.), or simply the fact that some tag bits or object slots are reserved, all impact performance.

There is a good write-up of the problem and a way to estimate the cost based on different GC strategies, as you suggested, here: https://arxiv.org/abs/2112.07880

The way I found to measure a no-GC baseline is to compare runs in an accurate workload performance simulator. Mark all GC- and allocator-related code regions and have the simulator skip all those instructions. Critically, that needs to be a simulator that does not deal with functional simulation itself, but gets its instructions from a functional simulator, emulator, or PIN tool that does execute everything. It's laborious, not very fast, and impractical for production work. But it's the only way I found to answer a question like "What is the absolute overhead of memory management in Python?" (Answer: the lower bound on wall time sits around +25% on average, depending heavily on the pyperformance benchmark.)


I'm a bit confused about the colors used in the CPU graphs. In the first graphs it looks like green means that the application is running and red means that the GC is running. But once we get to Figure 4 then red means the GC is running (on the GC threads) or nothing is running (on the Main thread)? If red always means that GC work is being done on that thread then this is inconsistent with the text that says "By distributing reclamation work across both cores..." since we would have three threads running at once. Once you move to the concurrent GC figures you definitely have three things running at once. Unless you're assuming SMT with each core running two threads?

In Figure 3 you somehow have 101% wall time. :)


Thanks for the detailed read and the great questions!

Regarding the colors and thread counts in Figure 4: the key piece of context here is that the application thread (the Main thread) is completely paused during this phase. It isn't actually running anything at all. Because the application is halted, only the GC threads are doing active work. Therefore, rather than three threads running at once, we strictly have two things running concurrently. This is a helpful piece of feedback and I'll make sure to make this clearer in future writings.

Good eye on the 101% wall time. That was due to a minor bug in my plotting script that specifically affected the GC plots with no concurrent time. I have corrected this and updated the post. The fixed plot should be visible on the site in a future near you just as soon as the edge caches invalidate.


Hey, noob question, but does OpenJDK look at variable scope and avoid allocating on the heap to begin with if a variable is known to not escape the function's stack frame?

Not strictly related to this post, but I figured it'd be helpful to get an authoritative answer from you on this.


Yes, Hotspot performs Escape Analysis to avoid heap allocation. This is a nice article: https://shipilev.net/jvm/anatomy-quarks/18-scalar-replacemen...
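To make that concrete, here is a small sketch of the pattern escape analysis targets: an object constructed and consumed entirely within one frame. With C2's escape analysis (the product flag -XX:+DoEscapeAnalysis, on by default), the allocation in the hot loop can be scalar-replaced once the method is compiled; comparing allocation rates against a run with -XX:-DoEscapeAnalysis is one way to observe the effect.

```java
public class EscapeDemo {
    // A small value-like object that never leaves sum():
    // a candidate for scalar replacement, so the compiled loop
    // need not allocate it on the heap at all.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        int manhattan() { return x + y; }
    }

    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1); // does not escape this frame
            total += p.manhattan();
        }
        return total;
    }

    public static void main(String[] args) {
        // Repeat so the JIT compiles sum(); compare heap allocation
        // with and without -XX:+DoEscapeAnalysis to see the difference.
        long result = 0;
        for (int i = 0; i < 50; i++) result = sum(1_000_000);
        System.out.println(result);
    }
}
```

Note that escape analysis is a JIT optimization, not a static guarantee: whether a given allocation is eliminated depends on inlining and on which compiler tier the method reaches.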

I built this 15 years ago and it got fairly popular, but is long dead now...

https://github.com/jmxtrans/jmxtrans

Kind of amazing how people are still building telemetry into Java. Great post and great work. Keep it up.


Great article!

Will the new metric be exposed in JFR recordings as well?


Thanks!

It is not currently exposed in JFR for JDK 26, but I agree that it would be the logical next step. Now that the underlying telemetry framework (cpuTimeUsage.hpp) is in place within HotSpot, wiring it up to JFR events would be a natural extension.


I just want to say this is an incredibly detailed, well written, and beautifully illustrated article. Solid work.

Thanks! I really appreciate that. I spent a lot of time trying to nail the illustrations so I'm really glad it landed well. :-)


