Bug #4506
closed
Exception is thrown during workflow execution
Description
Within a PN-like workflow it appears that one thread iterates over a HashMap while another thread modifies it. Maybe a Hashtable should be used here?
Or should external synchronization be used? (A minimal sketch of both options follows the stack trace below.)
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:810)
at java.util.HashMap$KeyIterator.next(HashMap.java:845)
at ptolemy.actor.process.ProcessDirector.stopFire(ProcessDirector.java:481)
at ptolemy.actor.CompositeActor.stopFire(CompositeActor.java:1375)
at ptolemy.actor.CompositeActor.requestChange(CompositeActor.java:1184)
at ptolemy.kernel.util.NamedObj.requestChange(NamedObj.java:1651)
at ptolemy.vergil.basic.RunnableGraphController.managerStateChanged(RunnableGraphController.java:181)
at ptolemy.actor.Manager._notifyListenersOfStateChange(Manager.java:1363)
at ptolemy.actor.Manager._setState(Manager.java:1378)
at ptolemy.actor.Manager.wrapup(Manager.java:1270)
at ptolemy.actor.Manager.execute(Manager.java:365)
at ptolemy.actor.Manager.run(Manager.java:1071)
at ptolemy.actor.Manager$3.run(Manager.java:1112)
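For reference, here is a minimal, hypothetical sketch of the two options mentioned above: external synchronization around the iteration, or switching to a concurrent map whose iterators do not fail fast. This is not Ptolemy code; the class and field names are invented for illustration only.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not taken from ProcessDirector: shows why iterating a
// plain HashMap while another thread modifies it throws
// ConcurrentModificationException, and two common ways to avoid that.
public class StopFireSketch {

    // A plain HashMap fails fast: iterating it while another thread adds or
    // removes entries throws ConcurrentModificationException.
    private final Map<String, Thread> activeThreads = new HashMap<>();

    // Option 1: external synchronization. Every reader and writer must hold
    // the same lock, including the loop that walks the values.
    public void stopFireSynchronized() {
        synchronized (activeThreads) {
            for (Thread thread : activeThreads.values()) {
                thread.interrupt();
            }
        }
    }

    // Option 2: a concurrent collection. ConcurrentHashMap iterators are
    // weakly consistent and never throw ConcurrentModificationException.
    private final Map<String, Thread> concurrentThreads = new ConcurrentHashMap<>();

    public void stopFireConcurrent() {
        for (Thread thread : concurrentThreads.values()) {
            thread.interrupt();
        }
    }
}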
Updated by Michal Owsiak about 15 years ago
As for a use case - yes and no.
It fails during execution of a workflow that combines PN and DDF directors.
Unfortunately, the workflow itself will not help in this case because it requires access to HPC/GRID resources, which means it is useless unless you have access to the machines it uses.
I have prepared a similar but very simple case, and it does not fail. I will try to create a use case that does not require access to those machines, so it can serve as a test case.
Updated by Chad Berkley almost 15 years ago
This seems like a workflow bug, not a Kepler bug.