Java Memory Management Improvement Proposal

For the umpteenth time, I had to deal with the dreaded Java OutOfMemoryError, this time while trying to run the Fisheye Subversion browser.  My problem stems from a known issue in Fisheye where a background indexing task sometimes causes an OOM error.  Of course, an OOM does not just affect the indexing task; it can also impact other operations unrelated to the task with the memory leak.  Diagnosing this problem got me thinking about how this could be managed better.

The Problem

The JVM defines a single heap for all objects created across all threads in the Java process.  A consequence of this is that if one processing thread has a memory leak, any thread in the JVM may start suffering from OOM errors.  However, not all threads are equally important to my application.  I may not mind too much if a batch processing thread fails with an OOM, but I care very much if the lack of memory leaves all of my Tomcat threads unable to process web requests.

Proposed Solution

What I would like is to be able to segment my heap so that I can dedicate portions of the heap (either by percentage of the total heap or by absolute number of bytes) to a specific set of work.  That way, if an OOM occurs, I can contain its impact to a certain set of threads while other threads continue processing.  I envision this being configured via JVM runtime -X arguments, something like -XHeapSegment:Name=MyMemHeap,Size=25%,Thread=<ThreadNameRegex>.

I could see the partitioning being relative (by percentage) or absolute (total bytes allocated).  I could also see linking it to ThreadGroups as opposed to just Threads.
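To make the proposal concrete, a server launch under this scheme might look something like the following. This is entirely hypothetical syntax for flags that do not exist in any JVM; the segment names, sizes, and thread patterns are made up for illustration:

```
java -Xmx2g \
     -XHeapSegment:Name=WebHeap,Size=60%,Thread=http-.* \
     -XHeapSegment:Name=BatchHeap,Size=512m,ThreadGroup=batch-workers \
     -jar myapp.jar
```

Here a leak in a `batch-workers` thread could exhaust only its 512m segment, leaving the web-request threads free to keep allocating from their own 60% share.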

Some Questions

  • Do others see value in this proposal?
  • I am not an expert at JVM internals.  Is there a fundamental reason this would not work?
  • Any other suggestions?
  • Should we allow this segmentation to be defined at compile-time as well through annotations?


  • For the purposes of the problem I defined, I do not have a requirement that the young generations also be segmented.  However, if it is impractical to keep a single set of young generations and segmented older generations, I can deal with having segmentation in the young generations as well.
  • Obviously, this would in no way address all OutOfMemoryErrors.  We can still look forward to running out of PermGen space.

9 thoughts on “Java Memory Management Improvement Proposal”

  1. I would much prefer getting rid of min/max heap sizes and permanent generation sizes altogether. I see no reason why I have to figure out the appropriate size of a JVM instead of having it work like every other application, pulling memory as needed from the OS and giving it back when it releases objects. I understand the technical problems but it’s high time they went away for desktop/server uses.

  2. Heap segmentation is already a feature of real-time JVMs. Look into JSR 1 to find out what sort of capabilities are available. I will warn you that managing this sort of thing is not as easy as it sounds.

  3. @Dave You are entirely correct about this being a feature of real-time JVMs. But my problem does not require a full real-time implementation. I am not interested in determinism. For example, I don’t require isolation from GC pauses. While I could use a real-time JVM, I’d rather some of the memory features be ported to the standard JVM.

  4. OOME was a bit of a shock to me when I first ran into it in ’97. Prior to that, I’d been working with Smalltalk virtual machines, and in particular the GemStone virtual machine. That VM was connected to a database, which meant that it could spill memory onto disk. Java, however, can’t, and if you want it to play nicely with other applications, you need to define some limits. Otherwise a troubled app/thread could cause the VM to consume the entire machine. That said, allocating a portion of heap to a thread is possible. In fact, it’s already being done. As long as you keep all references localized to a thread (this relies on escape analysis), objects should be allocated on heap that is localized to the thread. I believe (but would need to check and confirm) that the size of this space is both capped and configurable.

  5. @Kirk You are right that the objects allocated on the heap localized to the thread that eventually causes the OOM will be dumped first. But the problem is that there is no guarantee that the resource-hogging thread will be the one that triggers the OOM error. It could be a “lighter” thread that causes the OOM and has its objects released.

  6. Rather than trying to sort out which objects belong to which thread (not an easy problem!), have a watchdog thread that monitors memory usage (see Runtime.freeMemory(), Runtime.totalMemory(), Runtime.maxMemory()), and kills dispensable threads if memory exhaustion approaches.

    Please don’t literally kill the threads; instead, program in a safe abort (or suspend to external storage) method. Thread.stop() is deprecated precisely because it is not safe.
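    The watchdog idea above can be sketched in a few lines. This is a minimal, illustrative version, not a production design: the 10% threshold, the class name, and the cooperative abort flag are all assumptions, and a real watchdog would run `checkMemory()` on a timer instead of once.

    ```java
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch of a memory watchdog: poll the Runtime memory statistics and,
    // when free heap headroom runs low, request a cooperative abort of
    // dispensable work rather than killing threads.
    public class MemoryWatchdog {
        // Illustrative threshold: abort below 10% heap headroom.
        static final double MIN_FREE_FRACTION = 0.10;

        // Cooperative abort flag checked by the dispensable worker.
        static final AtomicBoolean abortRequested = new AtomicBoolean(false);

        // Fraction of the maximum heap still available for allocation.
        static double availableFraction() {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            return 1.0 - (double) used / rt.maxMemory();
        }

        // Called periodically by the watchdog thread.
        static void checkMemory() {
            if (availableFraction() < MIN_FREE_FRACTION) {
                abortRequested.set(true); // safe abort, not Thread.stop()
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                // A dispensable batch task that polls the abort flag.
                while (!abortRequested.get()) {
                    Thread.onSpinWait();
                }
            }, "batch-worker");
            worker.start();

            checkMemory();            // normally invoked on a timer
            abortRequested.set(true); // force the abort for this demo
            worker.join(1000);
            System.out.println("worker alive: " + worker.isAlive());
        }
    }
    ```

    The catch, as the next comment notes, is that this only works when the dispensable code actually checks the abort flag.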

  7. @R Hayes Very good suggestion for the problem, IF I can control the safe abort of the resource-hogging threads. In my Fisheye example above, however, I am using a third-party application and am reliant on its developers to put the hooks into their runtime processes.

  8. Your proposed solution sounds a lot like just having separate processes (JVMs). 🙂
