Kepler: Issues
https://projects.ecoinformatics.org/ecoinfo/
2010-11-30T22:50:17Z, Ecoinformatics Redmine

Bug #5249 (In Progress): test kepler for memory leaks
https://projects.ecoinformatics.org/ecoinfo/issues/5249
2010-11-30T22:50:17Z, jianwu jianwu <jianwu@sdsc.edu>
<p>A separate bug only for fixing memory leaks in the Kepler suite. Bug 5095 depends on it.</p>

Bug #5170 (New): uploading a workflow requires a refresh
https://projects.ecoinformatics.org/ecoinfo/issues/5170
2010-09-03T19:48:36Z, Daniel Crawl <danielcrawl@gmail.com>
<p>When a workflow is uploaded to a remote repository, the component tree must be searched again before the workflow appears. It'd be nice if the workflow appeared when the upload completed.</p>

Bug #5095 (In Progress): test kepler and wrp for memory leaks
https://projects.ecoinformatics.org/ecoinfo/issues/5095
2010-07-14T22:56:35Z, Matt Jones <jones@nceas.ucsb.edu>
<p>Oliver Soong reported having difficulties with memory leaks. There are two specific bugs about this, which I have set to block this testing bug. In addition, testing may reveal additional leaks, which should be fixed before 2.1 is released. Here's Oliver's synopsis of the issues:</p>
<p>I think this is limited to the wrp suite, but Kepler's performance degrades significantly over time. Provenance recording can become prohibitively slow, and there is no native in-Kepler fix. There is a large memory leak somewhere, and many components are quite memory-intensive regardless. Given the intention to record executions and the large number of analyses scientists perform, I suspect any dedicated user of Kepler will quickly encounter data management problems. In my case, I stopped using local repositories and began closing Kepler after running any large workflows.</p>

Bug #5069 (New): The ability to create user manuals from a wiki that is continuously updated.
https://projects.ecoinformatics.org/ecoinfo/issues/5069
2010-07-01T03:47:48Z, David Welker <welker4kepler@gmail.com>
<p>(5) The ability to create user manuals from a wiki that is continuously updated.</p>
<p>Currently, our user manual is produced from a Microsoft Word document. This makes it inconvenient to keep the manual up to date with the most recent version of Kepler. As a result, the manual tends to be updated all at once during a release and is probably not as good as it could otherwise be. Being able to read the manual on the web, and not just in PDF format, would also be very convenient for users.</p>

Bug #5067 (New): The introduction of basic services so that module names are not referenced.
https://projects.ecoinformatics.org/ecoinfo/issues/5067
2010-07-01T03:46:34Z, David Welker <welker4kepler@gmail.com>
<p>(3) The introduction of basic services so that module names are not referenced.</p>
<p>Modules would simply declare that they provide a service, and this declaration would have a meaning that is understandable to humans. Services should be used to provide properties and also resources, so that module names are not referenced in code. A workflow could also declare that it requires certain services, detect when the modules present do not provide those services, and in that case show an appropriate message to the user. The most difficult issue with services is whether they should be versioned and, if so, how.</p>

Bug #4829 (New): User log4j files
https://projects.ecoinformatics.org/ecoinfo/issues/4829
2010-02-24T02:22:14Z, Derik Barseghian <barseghian@nceas.ucsb.edu>
<p>We had a request that users be able to maintain their own log4j files; e.g., in an environment where many users share the same installation of Kepler, each could run with different logging settings.</p>

Bug #4458 (New): test for moving kepler directory
https://projects.ecoinformatics.org/ecoinfo/issues/4458
2009-10-15T02:41:52Z, Derik Barseghian <barseghian@nceas.ucsb.edu>
<p>I've been getting an error during the first launch after an ant clean-cache when using the wrp suite, to do with missing ontologies. It turns out my ontology_catalog.xml file was pointing to old paths that no longer exist on my disk. My guess is that I renamed (moved) my kepler directory after this file had been serialized by kepler using absolute paths. I reverted to the repository version of ontology_catalog.xml (which just has file names, not paths) and am now fine.</p>
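One way to make a catalog robust to a moved application directory is to serialize file names only and resolve them against the current kepler directory at load time. A minimal sketch of that idea (the class and method names here are hypothetical, not Kepler's actual catalog code):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Hypothetical sketch: resolve catalog entries against the current Kepler
 * directory rather than trusting serialized absolute paths, so the catalog
 * survives a rename or move of the application directory.
 */
public class CatalogPaths {

    /** Resolve an entry relative to keplerDir, stripping any stale absolute prefix. */
    public static Path resolve(String entry, Path keplerDir) {
        Path p = Paths.get(entry);
        if (p.isAbsolute()) {
            // Keep only the file name; the old absolute prefix may no longer exist.
            p = p.getFileName();
        }
        return keplerDir.resolve(p);
    }
}
```

With this scheme, a catalog serialized under `/old/kepler` still resolves correctly after the directory is moved, because only the file name is kept.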
<p>This raises a larger issue: it would probably be good to have a test in place that moves the kepler application directory after kepler has been in use for a while, and checks whether anything breaks.</p>

Bug #4310 (New): ValueListeners receive valueChanged events when values have not changed
https://projects.ecoinformatics.org/ecoinfo/issues/4310
2009-08-13T18:34:29Z, Daniel Crawl <danielcrawl@gmail.com>
<p>A ValueListener sometimes receives events for a Settable when the Settable's value has not changed. This can lead to a stack overflow since reading the value of the Settable may generate another valueChanged event.</p>
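The obvious guard is to compare the old and new values before notifying listeners, so an unchanged value never generates an event. A minimal sketch, using hypothetical stand-in classes rather than Ptolemy's actual ValueListener/Settable API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

/**
 * Hypothetical sketch (not Ptolemy's actual classes): a settable that only
 * notifies listeners when its value really changes, breaking the
 * notify -> read -> notify loop described above.
 */
public class GuardedSettable {

    public interface ValueListener {
        void valueChanged(GuardedSettable source);
    }

    private String value;
    private final List<ValueListener> listeners = new ArrayList<>();

    public void addListener(ValueListener l) { listeners.add(l); }

    public String getValue() { return value; }

    public void setValue(String newValue) {
        // Guard: skip notification entirely when the value is unchanged.
        if (Objects.equals(value, newValue)) {
            return;
        }
        value = newValue;
        for (ValueListener l : listeners) {
            l.valueChanged(this);
        }
    }
}
```

Even a listener that re-reads the settable inside valueChanged cannot recurse here, because the re-read does not change the value.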
<p>To fix this, valueChanged should not be called unless the value has actually changed.</p>

Bug #3915 (New): The error dialogue won't go away.
https://projects.ecoinformatics.org/ecoinfo/issues/3915
2009-03-24T00:11:16Z, jianwu jianwu <jianwu@sdsc.edu>
<p>Workflow: there are two composite actors, one called CompositeActor1 at the top level, and another called CompositeActor2 inside CompositeActor1. There are two String Parameters: one called p1 at the top level, and another called p2, with value '$p1/l', in CompositeActor1. p2 is used by actors in CompositeActor1, such as expression and file open.</p>
<p>Steps: <br />1) Open the whole workflow,<br />2) Open CompositeActor1,<br />3) Open CompositeActor2,<br />4) Close CompositeActor2,<br />5) Delete CompositeActor2,<br />6) Change the value of p1.</p>
<p>There will be an error saying that: "Error evaluating expression: $p1/l in .CompositeActor2.p2 Because The ID p1 is undefined."</p>
<p>There is no way to close the error except force-quitting Kepler, which loses all unsaved modifications.</p>
<p>I found the bug with Kepler version 16865 and ptolemy version 52661, but I think this bug is always there.</p>
<p>I attached the workflow and error dialogue.</p>

Bug #3899 (New): web service actor does not work through proxy
https://projects.ecoinformatics.org/ecoinfo/issues/3899
2009-03-17T19:19:25Z, Chris Weed <chrisweed@gmail.com>
<p>The run script does not set up the java proxy settings, so anything that needs to connect to the internet through http fails.</p>
<p>I had to edit kepler.sh to be the following:<br />java -Xmx512m -Xss5m -Dhttp.proxyHost=proxy -Dhttp.proxyPort=8080 -DKEPLER="$KEP" -DKEPLER_DOCS="$KEP" -Djava.endorsed.dirs=./lib/jar/base-jars/apache -Djava.library.path=./lib org.kepler.loader.Kepler $*</p>
<p>However, this needs a more elegant solution, such as an environment variable, to allow the user to set this.</p>

Bug #3576 (New): support for accessing cascading metadata from within CompositeCoactor
https://projects.ecoinformatics.org/ecoinfo/issues/3576
2008-10-27T22:04:24Z, Timothy McPhillips <mcphillips@ecoinformatics.org>
<p>The CompositeCoactor class extends TypedCompositeActor (and implements Coactor) to provide a mechanism for implementing coactors from SDF sub-workflows of conventional actors. Data is extracted from the read scope using input ports named according to the types of data to be extracted, e.g., a port named 'StringToken' will extract a single string token out of the current read scope and provide it as input to the subworkflow on each firing; a port named 'StringToken+' will provide an array token containing one or more string tokens extracted from the read scope on each firing, etc.</p>
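The port-name convention above ('StringToken' for a single token, 'StringToken+' for an array of all matching tokens) could be parsed with something like the following hypothetical sketch (PortSpec is an invented name, not a CompositeCoactor class):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Hypothetical sketch of parsing the CompositeCoactor port-name convention:
 * a token type name, optionally suffixed with '+' to request an array of
 * all matching tokens in the read scope.
 */
public class PortSpec {

    public final String tokenType;
    public final boolean array;

    private PortSpec(String tokenType, boolean array) {
        this.tokenType = tokenType;
        this.array = array;
    }

    private static final Pattern NAME = Pattern.compile("(\\w+)(\\+)?");

    public static PortSpec parse(String portName) {
        Matcher m = NAME.matcher(portName.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("unrecognized port name: " + portName);
        }
        return new PortSpec(m.group(1), m.group(2) != null);
    }
}
```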
<p>Currently, metadata or annotations applied to the top-level collection in a scope-match can also be extracted by specifying the key for the metadata element required (e.g., by naming a port 'StringToken [key=filename]'). One thing that can't be done is to access metadata applied to collections above the read scope and cascading down to it. This function would be very useful for reusing information across multiple invocations of a composite coactor.</p>

Bug #3574 (New): Support for importing directory contents using CollectionSource
https://projects.ecoinformatics.org/ecoinfo/issues/3574
2008-10-27T18:49:37Z, Timothy McPhillips <mcphillips@ecoinformatics.org>
<p>A common workflow pattern is to take as input all of the files (or those of a particular type) in a directory on a researcher's computer system. For example, there are COMAD workflows that process all the FASTA files in a directory, creating a collection for each FASTA file and storing the contained DNA or protein sequences in the corresponding input collections.</p>
<p>Once the CollectionSource actor is able to automatically import the contents of files (see bug 3573), it will be extremely useful to refer to directories in the XML input to CollectionReader or CollectionComposer and have the actor import all of the files it finds there. Another useful feature would be the option of having CollectionSource descend into sub-directories, creating a nested collection for each and importing contained files into the corresponding subcollections. Whole directories of scientific data files could then easily serve as input to COMAD workflows.</p>
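The directory-descent behavior described above amounts to a recursive walk that produces one nested collection per sub-directory. A minimal sketch under assumed names (DirCollections and its inner Collection class are invented for illustration, not CollectionSource's actual types):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch: walk a directory tree, producing one nested
 * "collection" per sub-directory with the contained file names as members,
 * roughly as a CollectionSource with directory support might.
 */
public class DirCollections {

    public static class Collection {
        public final String name;
        public final List<String> files = new ArrayList<>();
        public final List<Collection> subcollections = new ArrayList<>();
        public Collection(String name) { this.name = name; }
    }

    public static Collection importDir(File dir) {
        Collection c = new Collection(dir.getName());
        File[] entries = dir.listFiles();
        if (entries != null) {
            for (File e : entries) {
                if (e.isDirectory()) {
                    // Descend recursively, creating a nested collection.
                    c.subcollections.add(importDir(e));
                } else {
                    c.files.add(e.getName());
                }
            }
        }
        return c;
    }
}
```

A real implementation would additionally filter by file type and emit the opening/closing collection delimiters as tokens.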
<p>These features eventually could make it much easier to stage data for input to a workflow run without requiring modification of the workflow specification itself.</p>

Bug #3573 (New): Support for importing file contents automatically using CollectionSource
https://projects.ecoinformatics.org/ecoinfo/issues/3573
2008-10-27T18:32:05Z, Timothy McPhillips <mcphillips@ecoinformatics.org>
<p>The CollectionComposer and CollectionReader actors extend CollectionSource to read XML representations of the input to a COMAD workflow and translate them into data tokens, metadata tokens, collection delimiters, etc. Presently all data read in by CollectionComposer must be contained in the XML that is provided either as a parameter value to CollectionComposer or as an external file to CollectionReader. However, many workflows use data from other files and this data currently must be read and parsed by explicit actors elsewhere in the workflow. The input to a workflow would be clearer, and workflows simpler and more transparent, if files could be referred to in the XML processed by CollectionSource, and if CollectionSource were to automatically include the contents of these files in the workflow input.</p>
<p>A simple first step would be to enable CollectionComposer to read in text files either as a TextFile collection containing a single StringToken holding the contents of the file, or a TextFile collection containing one StringToken for each line of the text file. (Existing COMAD workflows demonstrate the usefulness of both approaches).</p>
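The two import modes in that first step are straightforward to sketch: either the whole file becomes a single string token, or each line becomes one token. A hypothetical sketch (TextFileImport is an invented name; real CollectionSource tokens would be StringToken objects inside a TextFile collection):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collections;
import java.util.List;

/**
 * Hypothetical sketch of the two text-file import modes described above:
 * the whole file as a single string token, or one token per line.
 */
public class TextFileImport {

    /** One token holding the entire file contents. */
    public static List<String> asSingleToken(Path file) throws IOException {
        return Collections.singletonList(new String(Files.readAllBytes(file)));
    }

    /** One token per line of the file. */
    public static List<String> asLineTokens(Path file) throws IOException {
        return Files.readAllLines(file);
    }
}
```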
<p>A second step would be to allow one to register format-specific parsers for CollectionSource to use when reading particular types of files. For example, a FASTA file parser could be plugged in that would create a FASTA collection filled with (e.g., DNA) Sequence tokens, and a Nexus file parser could create a Nexus collection containing CharacterMatrix, WeightVector, and phylogenetic Tree tokens.</p>

Bug #3568 (New): support for writing COMAD-style trace files from the Provenance Recorder
https://projects.ecoinformatics.org/ecoinfo/issues/3568
2008-10-24T23:48:40Z, Timothy McPhillips <mcphillips@ecoinformatics.org>
<p>I have heard rumors that there are plans to enable the general-purpose Provenance Recorder in Kepler to (optionally) write out its records of a run using the trace file format employed by the COMAD framework. This would be extremely helpful because one could then view the provenance captured during any workflow run via the provenance browser.</p>
<p>I expect that this also would highlight information that cannot be stored in a COMAD trace or represented in the provenance browser, and so lead to enhancements of these to make them more generally useful.</p>

Bug #1997 (New): Support Getting Metadata for Darwin Core search result item
https://projects.ecoinformatics.org/ecoinfo/issues/1997
2005-03-01T18:49:40Z, Jing Tao <tao@nceas.ucsb.edu>
<p>In the data search result panel, we added a new right-click menu item for getting metadata for an item. Currently we only support eml200 and eml201; we need to support DarwinCore search results too. First we need to create a style sheet to transform DarwinCore XML to HTML.</p>
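Applying such a style sheet can use the JDK's built-in XSLT support. A minimal sketch; the class name, stylesheet, and element names below are placeholders for illustration, not a real Darwin Core schema or Kepler code:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/**
 * Hypothetical sketch: apply an XSLT stylesheet to an XML record to produce
 * HTML, using the JDK's javax.xml.transform API.
 */
public class DarwinCoreHtml {

    public static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```

The real style sheet would map DarwinCore fields to the same HTML layout the panel already uses for eml200/eml201 results.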