IPCC_Base_Layers.xml workflow gets an out-of-memory error
When this workflow tries to download 10 large data files, it finishes only part of the download, maybe 6 of them, while the remaining 4 stay busy indefinitely. The log shows messages like:
Exception in thread "Data ecogrid://knb/IPCC.200735416582829.1" java.lang.OutOfMemoryError: Java heap space
Exception in thread "Data ecogrid://knb/IPCC.200735416192871.1" java.lang.OutOfMemoryError: Java heap space
Exception in thread "Data ecogrid://knb/IPCC.200735417143125.1" java.lang.OutOfMemoryError: Java heap space
Exception in thread "Data ecogrid://knb/IPCC.200735416271261.1" java.lang.OutOfMemoryError: Java heap space
When I increase Kepler's heap size to 1.2 GB (the current value is 5 MB), the download completes without any problem.
I went through the downloading code again; it appears we use input and output streams, so I am not sure why it uses so much memory.
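For reference, copying through streams with a fixed-size buffer should keep memory use bounded no matter how large the file is; an OutOfMemoryError usually means the payload is being collected into a single in-memory buffer somewhere instead. A minimal sketch of bounded-buffer copying (illustrative names, not the actual Kepler code):

```java
import java.io.*;

public class StreamCopy {
    // Copy data in fixed-size chunks so memory use stays bounded regardless
    // of file size. Reading the whole payload into one byte[] first is the
    // pattern that produces OutOfMemoryError on large downloads.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];   // bounded buffer, not the whole file
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1 << 20];  // 1 MiB of test data
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), out);
        System.out.println(copied);       // prints 1048576
    }
}
```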
If possible, can we remove this workflow from this release?
#4 Updated by Jing Tao about 12 years ago
At the end of January 2008, the src/org/ecoinformatics/ecogrid/queryservice/QueryServiceGetToStreamClient.java class was modified: the line "_call.setStreaming(true);" was commented out to fix some other issue. As a result, the get client no longer streamed data; it buffered it in memory instead.
Restoring this line doesn't seem to cause any problems, and it fixes this bug in Kepler.
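The one-line fix, shown in context as an illustrative fragment (the surrounding comments are mine, not the actual source):

```java
// src/org/ecoinformatics/ecogrid/queryservice/QueryServiceGetToStreamClient.java
// (illustrative fragment, not the verbatim file)

// With streaming disabled, the client collects the entire response in
// memory before handing it to the caller; for large IPCC data files that
// exhausts the heap. Re-enabling streaming writes the response through
// to the caller's stream as it arrives.
_call.setStreaming(true);
```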
Ben checked in a new version of org.ecoinformatics.ecogrid.AuthenticatedQueryService-stub.jar, and I checked in a new version of org.ecoinformatics.ecogrid.QueryService-stub.jar into Kepler.
With the new jar files, both Derik and I tested the workflow and the download worked. The bug appears to be fixed.