Bug #4642


memory usage & slowdowns

Added by Oliver Soong almost 14 years ago. Updated over 8 years ago.



I just hit a big slowdown caused by out-of-memory (OOM) problems. This bug is mostly a place to record some of what I found out. I used jmap to produce class histograms while Kepler was crawling and again immediately after a fresh restart. When Kepler was slow, a single workflow was open with 4 actors and the Check System Settings window. The fresh Kepler retained the wrm and cache content, but discarded the 4 actors and all the accumulated memory-leak cruft.
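For reference, a histogram comparison like the one described above can be produced with the standard JDK tools; a rough sketch, where the PID and file names are illustrative:

```shell
# List running JVMs to find Kepler's process id (jps ships with the JDK).
jps -l

# Capture a class histogram while Kepler is slow (replace 12345 with the PID).
jmap -histo 12345 > histo-stale.txt

# After a fresh restart, capture a second histogram from the new process.
jmap -histo 12345 > histo-fresh.txt

# Compare per-class instance counts between the two snapshots.
diff histo-stale.txt histo-fresh.txt | less
```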

A few things jump out at me, though I'd say I'm pretty uninformed. I've formatted the entries as Object: stale count, fresh count.

org.kepler.util.WorkflowRun: 39206, 29
javax.swing.JMenuItem: 3411, 96
java.util.HashMap: 689643, 22885
org.kepler.objectmanager.lsid.KeplerLSID: 120115, 1339
java.util.LinkedList: 95565, 4468
ptolemy.kernel.util.Location: 1837, 45

Interestingly enough, I have 28 wrm entries. I think something's up with the wrm, but a lot of GUI objects seem to be hanging around too, so there may be other things going on as well.

And on a side note, jps -> jmap -> jhat produces some pretty cool results.
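The jps -> jmap -> jhat pipeline mentioned here is roughly the following (PID and file names are illustrative):

```shell
# 1. jps: list running JVMs with their main classes to find Kepler's PID.
jps -lv

# 2. jmap: dump the heap of the target JVM to a binary file.
jmap -dump:format=b,file=kepler-heap.hprof 12345

# 3. jhat: serve a browsable analysis of the dump
#    (by default at http://localhost:7000).
jhat kepler-heap.hprof
```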

Files (89.4 KB) Oliver Soong, 12/18/2009 07:45 PM

Related issues

Blocked by Kepler - Bug #5095: test kepler and wrp for memory leaks (In Progress, jianwu jianwu, 07/14/2010)

Actions #1

Updated by Derik Barseghian almost 14 years ago

I've done some refactoring of the WRM so that I'm no longer creating so many runs so frequently -- hopefully this will help here.

Something I've noticed is that some things (e.g. objects and listeners created in WorkflowRunManagerPanel) are sticking around and still being utilized even after the frame containing the WorkflowRunManagerPanel TabPane has been closed. This is definitely an area for improvement.

Actions #2

Updated by Christopher Brooks over 13 years ago

See also bug 5095.

This bug should be primarily about performance, though memory management
will play a part.

For information about performance, see

The easiest way to track down performance issues is to create a small
non-GUI Java program that runs the model, and to profile it with a
commercial tool like JProfiler. Oliver suggests jps -> jmap -> jhat.

To close this bug, a wiki page should be created describing how to track
down performance and memory problems in Kepler.

Also, the performance test cases should be checked in. Ideally, there
would be tests that exercise these cases and report an error if run
times change by more than a certain amount. This is hard to do because
times vary with the machine and its load.
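A minimal sketch of such a timing guard, written as a shell function; the budget and the command that runs the test case are placeholders, and as noted above any real budget would need per-machine calibration:

```shell
#!/bin/sh
# check_runtime MAX_SECONDS CMD...
# Run CMD; fail if it errors or takes longer than MAX_SECONDS (wall clock).
check_runtime() {
  max="$1"; shift
  start=$(date +%s)
  "$@" || return 1
  end=$(date +%s)
  elapsed=$((end - start))
  if [ "$elapsed" -gt "$max" ]; then
    echo "FAIL: took ${elapsed}s (budget ${max}s)"
    return 1
  fi
  echo "OK: ${elapsed}s within ${max}s"
}

# Hypothetical invocation of a headless Kepler run of a checked-in test case:
# check_runtime 300 ./kepler.sh -runwf -nogui performance-test.xml
```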

Actions #3

Updated by Ilkay Altintas over 13 years ago

Seems similar to bug #5095. Derik will analyze further.

Actions #4

Updated by David Welker almost 13 years ago

Postponing to 2.3.

Actions #5

Updated by Redmine Admin over 10 years ago

Original Bugzilla ID was 4642

Actions #6

Updated by Daniel Crawl over 8 years ago

  • Target version changed from 2.5.0 to 2.X.Y
