Bug #3115


Need to check Kepler for memory leaks

Added by Dan Higgins about 16 years ago. Updated almost 16 years ago.

Need to do some testing to see if Kepler memory leaks occur (i.e. does memory use increase even when workflows closed).

This was investigated six months (or more) ago, but has not been checked recently.

Actions #1

Updated by Christopher Brooks about 16 years ago

This thread concerning memory leaks might be of use:

Jackie wrote:

I narrowed down the memory leak a bit and found one main cause. It looks
like it's coming from executing MoML change requests. For example, issuing
multiple calls like NamedObj.requestChange(new MoMLChangeRequest(....))
would quickly exhaust the memory.


----- Original Message -----
From: "Christopher Brooks" <>
To: "Jackie Man-Kit Leung" <>
Cc: <>
Sent: Friday, November 30, 2007 2:51 PM
Subject: Re: Increasing memory usage after running multiple rounds of test

Hi Jackie,
Welcome to the world of memory leaks.
See $PTII/doc/coding/performance.htm

Calling MoMLParser.reset() and MoMLParser.purgeAllModelRecords()
might not free up all memory. It is easier to measure
which memory is leaking and fix that than it is
to try to figure it out without leakage data.

Definitely try to work within a non-gui environment at first.
Using the test environment is the way to go.

There are several products available.

Check out HP's HPjmeter, which is free.

See also JProfiler and JProbe.
We can buy copies of these products, though it might take awhile
to do so.

Can you look over $PTII/doc/coding/performance.htm
and update it as necessary?



I have been chasing a memory leak problem that causes the Tcl script to
crash during regression testing. The JVM basically throws an
OutOfMemoryError and stops functioning for subsequent tests. This
happens even if I call MoMLParser.purgeAllModelRecords() and
MoMLParser.reset() in between all the tests. Some memory is not being
released, but I am not sure what. I put in some print statements and
found that the amount of memory leaked is highly non-uniform (i.e., some
model tests have zero leak). I also found that the order in which the
models are run changes the amount of leak for a particular model test as
well.
I think there are two possibilities. One is that Ptolemy II is caching
the actors, which is unlikely, because calling
MoMLParser.purgeAllModelRecords() and MoMLParser.reset() should have
solved the problem if that were the case. The other possibility is that
some of the tokens are cached by the software so they can be reused in
future computations for efficiency. However, I cannot find
any code doing that. Any suggestions?
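The harness pattern Jackie describes, purging caches between tests and watching memory use, can be sketched without any Kepler or Ptolemy dependencies. In the minimal, self-contained Java sketch below, `runFakeModelTest`, `purge`, and the static `CACHE` are hypothetical stand-ins for a model test, the MoMLParser purge/reset calls, and whatever static state is retaining references; none of it is real Kepler code:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakHarness {
    // Hypothetical stand-in for whatever statically caches objects in the real system.
    static final List<byte[]> CACHE = new ArrayList<>();

    // Stand-in for running one model test; it "leaks" by parking data in the static cache.
    static void runFakeModelTest() {
        CACHE.add(new byte[1024 * 1024]); // 1 MB retained per run
    }

    // Stand-in for MoMLParser.purgeAllModelRecords() plus MoMLParser.reset().
    static void purge() {
        CACHE.clear();
    }

    // Coarse used-heap reading; System.gc() is only a hint, but good enough for a trend.
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedBytes();
        for (int i = 0; i < 20; i++) {
            runFakeModelTest();
            System.out.printf("after run %d: %d bytes retained in cache%n",
                    i + 1, CACHE.size() * 1024L * 1024L);
        }
        long after = usedBytes();
        System.out.println("heap grew by roughly " + (after - before) + " bytes");
        purge();
        System.out.println("entries after purge: " + CACHE.size());
    }
}
```

A real harness would replace `runFakeModelTest` with a model-test invocation and `purge` with the actual MoMLParser calls, then flag any test whose heap delta stays positive across repeated runs.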


Actions #2

Updated by Chad Berkley almost 16 years ago

We should run each of the demo workflows with a memory usage analyzer to make sure that no major leaks are found with the demo usage.
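For a quick pass over the demo workflows, a full profiler is not strictly required: the JVM's built-in java.lang.management API can report heap usage from inside the running process, which is enough to spot a heap trend that never comes back down across workflow runs. A minimal sketch, with the workflow-running code itself omitted; only the measurement is shown:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {

    // Current used heap, as reported by the JVM's own memory MXBean.
    static long usedHeapBytes() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) {
        // In a real check, sample this before and after each demo workflow run
        // and watch for deltas that never come back down after GC.
        System.gc();
        System.out.println("used heap: " + usedHeapBytes() + " bytes");
    }
}
```

Externally, the same trend can be watched with stock JDK tools such as `jstat -gcutil <pid>` or `jmap -histo:live <pid>`.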

Actions #4

Updated by Sean Riddle almost 16 years ago

I did not detect significant memory leaks. From a cost-benefit perspective, there is little advantage to hunting for them now rather than later. This should not block the release.

Several consecutive runs of a workflow showed no detectable leakage, and neither did a single long-running workflow.
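One deterministic way to confirm that a run leaves no lingering references is the standard WeakReference idiom: hold a weak reference to an object the run should release, drop the strong reference, and see whether the referent can be collected. This is a generic JVM sketch, not Kepler code; since System.gc() is only a hint, a true result is best-effort, but a false result reliably indicates the object is still strongly reachable somewhere:

```java
import java.lang.ref.WeakReference;

public class CollectableCheck {

    // Returns true if the referent becomes unreachable within the timeout.
    // "false" means the object stayed strongly reachable for the whole timeout,
    // which is exactly the leak signature we are looking for.
    static boolean becomesCollectable(WeakReference<?> ref, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (ref.get() == null) {
                return true;
            }
            System.gc();
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        Object model = new byte[1024];                 // stand-in for a parsed model
        WeakReference<Object> ref = new WeakReference<>(model);
        model = null;                                  // drop the strong reference
        System.out.println("collectable: " + becomesCollectable(ref, 5000));
    }
}
```

In a Kepler test, the weak reference would point at the top-level model object after the workflow is closed and the parser caches are purged.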

Actions #5

Updated by Redmine Admin almost 11 years ago

Original Bugzilla ID was 3115

