Kepler: Issues
https://projects.ecoinformatics.org/ecoinfo/
Ecoinformatics Redmine (feed updated 2015-07-08)

Bug #6795 (New): parameters in composite actor not configured properly
https://projects.ecoinformatics.org/ecoinfo/issues/6795
2015-07-08, Vincenzo Forchi <vforchi@eso.org>
If I:
- add a StringParameter to a composite actor
- configure the composite actor
- change the value to something
- press Enter
then the value is surrounded with double quotes.

If I click on Commit, everything behaves as expected.
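For reference, a minimal sketch of the expected behavior, assuming the standard Ptolemy II API (a StringParameter treats its expression as a literal string, so no extra quoting should appear):

    import ptolemy.actor.TypedCompositeActor;
    import ptolemy.data.expr.StringParameter;
    import ptolemy.kernel.util.Workspace;

    public class StringParamDemo {
        public static void main(String[] args) throws Exception {
            TypedCompositeActor composite = new TypedCompositeActor(new Workspace());
            // A StringParameter's value is the raw string itself; no
            // surrounding double quotes are expected after setting it.
            StringParameter p = new StringParameter(composite, "myParam");
            p.setExpression("something");
            System.out.println(p.stringValue());  // expected output: something
        }
    }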

Bug #6481 (New): strange issue with MultipleTabDisplay actor and with Display-like actors in general
https://projects.ecoinformatics.org/ecoinfo/issues/6481
2014-03-25, Michal Owsiak <michal.owsiak@man.poznan.pl>

Hi there,
I will paste here the description that already went to Jianwu via e-mail, since I think it describes the issue in detail.

Some time ago we published our patches to the MultipleTabDisplay actor so that it is consistent with the new way of redirecting output.

The issue is that we have a problem running Kepler workflows in non-GUI mode.
The issue lies in a missing "output" port inside the MultipleTabDisplay actor.

In GUI mode everything works just fine (see the screenshot). In non-GUI mode we get an exception (see the attachment, exception.txt).

The command line used to start the workflow is:

    ./kepler.sh -runwf -nogui -nocache ~/Desktop/testMTD.xml

The problematic workflow has been reduced, and we now have the smallest case that triggers the exception. The workflow is in testMTD.xml.

The problem occurs when the MultipleTabDisplay actor is inside a composite actor. If the MTD is placed on the main workflow, everything is fine. If we start the testMTD workflow in GUI mode, everything is OK. If we start it the following way:

    ./kepler.sh -runwf -nogui -nocache testMTD.xml

we get an exception.
I have done some initial debugging, and it looks like the issue lies in reading the output port from the actor; the port is not reported in the code.

The Entity.java class has a method getPort. In this method I can see that the variable "_portList" doesn't contain the "output" port. This is what I see while running Kepler in Eclipse (non-GUI mode):
    [
      ptolemy.actor.TypedIOPort {.testMTD.CompositeActor.MultipleTabDisplay.input},
      ptolemy.actor.TypedIOPort {.testMTD.CompositeActor.MultipleTabDisplay.trigger}
    ]
This is the code that should return the "output" port:

    public Port getPort(String name) {
        try {
            _workspace.getReadAccess();
            return (Port) _portList.get(name);
        } finally {
            _workspace.doneReading();
        }
    }
<p>"_portList" doesn't contain output port, even though, this port is visible in GUI mode, and - in fact - works fine (take a look at screen shot).</p>
The question here is: where should we look for the bug? Do you have any suggestions?

Bug #6453 (New): Dock icon changes to coffee cup
https://projects.ecoinformatics.org/ecoinfo/issues/6453
2014-03-11, Rich Morin <rdm@cfcl.com>

As Kepler starts up, the normal (Mac OS X) Dock icon is replaced by a (Java) coffee cup. This is unexpected, unhelpful, and possibly confusing.
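One possible fix, sketched under the assumption that Kepler runs on Java 9 or later, where the cross-platform java.awt.Taskbar API is available (on older Apple JVMs the equivalent would be com.apple.eawt.Application.setDockIconImage, or passing -Xdock:icon to the JVM in kepler.sh; the icon resource path below is illustrative):

    import java.awt.Image;
    import java.awt.Taskbar;
    import javax.swing.ImageIcon;

    public final class DockIcon {
        // Hypothetical startup hook: replace the default coffee-cup Dock
        // icon with Kepler's own icon.
        public static void install() {
            if (!Taskbar.isTaskbarSupported()) {
                return;
            }
            Taskbar taskbar = Taskbar.getTaskbar();
            if (taskbar.isSupported(Taskbar.Feature.ICON_IMAGE)) {
                Image icon = new ImageIcon(DockIcon.class
                        .getResource("/images/kepler-icon.png")).getImage();  // illustrative path
                taskbar.setIconImage(icon);
            }
        }
    }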

Bug #6175 (New): GenericJobSubmission actor sometimes runs job without completing data transfer
https://projects.ecoinformatics.org/ecoinfo/issues/6175
2013-10-26, jianwu <jianwu@sdsc.edu>

A Kepler user at UCSD found that the GenericJobSubmission actor in her workflow started job submission when only part of the file had been copied. It looks like the actor mistakenly decides the file has been ssh-copied completely while the file transferred to the cluster is still about 1 MB smaller than the file on the local machine. The scheduler of the cluster is SGE.

When we use the 'SSH File Copier' actor, the file is copied completely.
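A possible safeguard, sketched under the assumption that the submitting host can reach the cluster over ssh: compare the local file size with the remote size via stat, and only submit once they match. The class name, host, and paths are illustrative.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;

    public final class TransferCheck {
        // Illustrative pre-submission check: do not submit the job until
        // the remote copy of the staged file matches the local size.
        public static boolean sameSize(File local, String host, String remotePath)
                throws Exception {
            // GNU coreutils stat; on BSD/macOS the flag would be "-f%z".
            Process p = new ProcessBuilder("ssh", host, "stat", "-c%s", remotePath).start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                long remoteSize = Long.parseLong(r.readLine().trim());
                return remoteSize == local.length();
            }
        }
    }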

Bug #6167 (New): Model Context Menu should have the enableBackwardTypeInference choice
https://projects.ecoinformatics.org/ecoinfo/issues/6167
2013-10-23, Christopher Brooks <cxh@eecs.berkeley.edu>

Ptolemy II now supports backward type inference. It is enabled via a parameter on the top-level container called "enableBackwardTypeInference" that is set to true or false.
In Ptolemy II's Vergil, this is visible by right-clicking on the background of the top-level model.

This functionality is not present in the devel tree of Kepler.
The workaround is to drag in a Parameter, name it "enableBackwardTypeInference", and set the value to true.
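The same workaround expressed programmatically; a minimal sketch assuming the standard ptolemy.data.expr.Parameter API:

    import ptolemy.actor.TypedCompositeActor;
    import ptolemy.data.expr.Parameter;
    import ptolemy.kernel.util.Workspace;

    public class EnableBTI {
        public static void main(String[] args) throws Exception {
            TypedCompositeActor toplevel = new TypedCompositeActor(new Workspace());
            // Same effect as dragging in a Parameter named
            // "enableBackwardTypeInference" and setting it to true.
            Parameter p = new Parameter(toplevel, "enableBackwardTypeInference");
            p.setExpression("true");
        }
    }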
<p>Running "ant javadoc" produces warnings about javadoc problems.<br />The Kepler nightly build at Berkeley now reports these warnings, see<br /><a class="external" href="http://sisyphus:8079/hudson/job/kepler/warnings31">http://sisyphus:8079/hudson/job/kepler/warnings31</a></p>
It would be good to fix these warnings.

Bug #4300 (New): "Animate at Runtime" checkbox stays checked when director is replaced
https://projects.ecoinformatics.org/ecoinfo/issues/4300
2009-08-07, Timothy McPhillips <mcphillips@ecoinformatics.org>
If you enable run-time animation of a workflow and then swap in a different director, the "Animate at Runtime" menu item remains checked. However, the next run of the workflow will not be animated; apparently the newly inserted director does not know about the animation?

Bug #4285 (New): workflow canvas does not repack to fill empty space when top window is resized
https://projects.ecoinformatics.org/ecoinfo/issues/4285
2009-08-07, Timothy McPhillips <mcphillips@ecoinformatics.org>
On Ubuntu 9.04 / JRE 1.5.0_18, the workflow canvas does not repack automatically when the main window is resized. For example, maximizing the window leaves grey space to the right of the canvas. Resizing by dragging the edges or corners of the main window has the same effect. Similarly, making the window smaller can hide the scrollbars. I observe this both in Kepler 1.0 and at the trunk.

Bug #4095 (New): Java Package Ontology (e.g. to facilitate Ptolemy component access)
https://projects.ecoinformatics.org/ecoinfo/issues/4095
2009-05-20, Bertram Ludaescher <ludaesch@ucdavis.edu>
Ptolemy has a number of neat components that are currently very hard to get to. For example, MonitorReceiverContents under ptolemy.vergil.actor.lib is nice for certain runtime monitoring and "debugging" purposes (thanks Edward! :)

If we created a *virtual* "Java Package Ontology" (JPO), i.e., one that simply maps package containment to concept subsumption, then this would allow us to browse and search (!) any and all otherwise unannotated components (Ptolemy or Kepler) easily.
It seems this would make the good (bad) old habit of InstantiateComponent / InstantiateAttribute superfluous.
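To make the containment-as-subsumption idea concrete, a hedged sketch (my illustration; not an existing Kepler or Ptolemy API):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative "virtual" Java Package Ontology: every package prefix
    // is a concept, and a longer prefix is subsumed by each shorter one.
    public final class JavaPackageOntology {
        // For "ptolemy.vergil.actor.lib.MonitorReceiverContents" this
        // returns [ptolemy, ptolemy.vergil, ptolemy.vergil.actor,
        // ptolemy.vergil.actor.lib, ...MonitorReceiverContents], ordered
        // from the most general concept to the most specific.
        public static List<String> conceptsFor(String qualifiedName) {
            List<String> concepts = new ArrayList<String>();
            int dot = qualifiedName.indexOf('.');
            while (dot >= 0) {
                concepts.add(qualifiedName.substring(0, dot));
                dot = qualifiedName.indexOf('.', dot + 1);
            }
            concepts.add(qualifiedName);  // the component itself
            return concepts;
        }
    }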
Bertram

Bug #4046 (New): ComadTest should report more details when detecting an error
https://projects.ecoinformatics.org/ecoinfo/issues/4046
2009-05-01, Timothy McPhillips <mcphillips@ecoinformatics.org>
The ComadTest actor is used to create automated tests of COMAD features. Because it is often useful to include several instances of ComadTest in the same test workflow, the ComadTest actor should report its name when it throws an exception. Ideally it would also indicate something about how the data stream it received during the current workflow run does not match what it received during training, perhaps the element name and line number of the first mismatch in the trace?
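A hedged sketch of the kind of report intended, relying on the standard Ptolemy convention that an IllegalActionException constructed with the throwing actor prefixes the message with that actor's full name (the mismatch fields are illustrative):

    import ptolemy.kernel.util.IllegalActionException;
    import ptolemy.kernel.util.Nameable;

    public final class ComadTestError {
        // Illustrative only: build an exception that names the failing
        // ComadTest instance and pinpoints the first mismatch in the trace.
        public static IllegalActionException mismatch(
                Nameable actor, String elementName, int traceLine) {
            // Passing "actor" makes Ptolemy include the actor's full name,
            // e.g. ".myWorkflow.ComadTest2", in the reported message.
            return new IllegalActionException(actor,
                    "Data stream does not match training data: first mismatch at element '"
                    + elementName + "', trace line " + traceLine);
        }
    }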

Bug #3671 (New): Configurable workspace directory for holding workflows, data, and run products
https://projects.ecoinformatics.org/ecoinfo/issues/3671
2008-11-13, Timothy McPhillips <mcphillips@ecoinformatics.org>

In bug 3558 I requested that a new directory be created on the user's system for each workflow run, and that outputs of the run, trace files, etc., be placed there. In bug 3585 I asked for an API that would make it easy for actors to write output files to this 'run' directory.
But where should these run directories themselves go? I believe we should allow users to specify a directory for holding their 'workspace' in a location of their choosing. The workspace could contain a directory for holding the workflows they develop and use for a particular project (we've done this before in the Kepler/ppod release, but the directory location was fixed), another directory for holding workflow runs, etc.

One alternative would be to hide all this somewhere inside .kepler in the user's home directory. However, I don't think this is the best approach, for two reasons. First, the point is to make it easy for users to find their workflows, data, and workflow run products, and to load the latter into other tools for visualization and further analysis. The .kepler directory is hidden and should be used for things that would distract the user if made more prominent. Second, in practice the .kepler directory is frequently deleted (sometimes when installing a new version of Kepler, for example). A user's work should not be deleted at such times, so .kepler should be used only for things that can be regenerated as needed (e.g., data caches).

Another alternative would be to store everything discussed here in a database. However, many workflows (a) generate large numbers of large data files that would be awkward to place in a database, and (b) users often want immediate file-system access to these output files anyway, because the other tools they use to review and further analyze their results expect the data to be stored in files. There shouldn't be an extra step of exporting workflow run products from a database to a directory of files after each workflow run in such cases.

I also think users should have the option of creating multiple workspaces, each with its own directories of workflows and runs. A workspace browser in Kepler could make it easy to view workflows or runs from a particular workspace, or all of them at once.
Note that all this has ramifications for distributed execution. Following execution on multiple nodes, the files expected to be found in a local run directory will need to be copied automatically from each compute node.
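A minimal sketch of the configurable location, assuming a hypothetical kepler.workspace system property; the property name, default folder, and layout are all illustrative, not existing Kepler settings:

    import java.io.File;

    public final class WorkspaceLocator {
        // Hypothetical resolution: honor a user-chosen location, fall back
        // to a visible folder in the home directory, and keep user work out
        // of the hidden (and frequently deleted) ~/.kepler directory.
        public static File workspaceDir() {
            String configured = System.getProperty("kepler.workspace");
            File dir = (configured != null)
                    ? new File(configured)
                    : new File(System.getProperty("user.home"), "KeplerWorkspace");
            dir.mkdirs();
            return dir;
        }

        // One sub-directory per workflow run, as requested in bug 3558.
        public static File newRunDir(String workflowName, long runId) {
            File run = new File(new File(workspaceDir(), "runs"),
                    workflowName + "-" + runId);
            run.mkdirs();
            return run;
        }
    }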

Bug #3585 (New): Provide API for creating output files during a workflow run
https://projects.ecoinformatics.org/ecoinfo/issues/3585
2008-10-30, Timothy McPhillips <mcphillips@ecoinformatics.org>

I suggested in bug 3558 that Kepler should create a directory on the user's system for each workflow run and store the trace(s) of the run there, along with other files produced during the run.
To make this facility easy for actor developers to use, we could provide an API for creating output files (e.g., graphics files containing plots of data) in this location (or copying them from temporary directories elsewhere) and ensuring that such files are named uniquely in that directory.
Note that while the default implementation could leave these run directories on the user's machine, an alternative implementation could transfer data files created during a run from these directories to some other data store, load the trace files into a DBMS to make them queryable, etc.
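One possible shape for such an API; purely a sketch of the proposal, with the interface and method names invented for illustration:

    import java.io.File;
    import java.io.IOException;

    // Hypothetical actor-facing API; nothing below exists in Kepler yet.
    public interface RunDirectory {
        // Create a new output file in the current run directory, adjusting
        // suggestedName if needed so the name is unique within the run.
        File createOutputFile(String suggestedName) throws IOException;

        // Copy a file produced in a temporary directory elsewhere into the
        // run directory, again guaranteeing a unique name.
        File importFile(File source) throws IOException;
    }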

Bug #3576 (New): support for accessing cascading metadata from within CompositeCoactor
https://projects.ecoinformatics.org/ecoinfo/issues/3576
2008-10-27, Timothy McPhillips <mcphillips@ecoinformatics.org>

The CompositeCoactor class extends TypedCompositeActor (and implements Coactor) to provide a mechanism for implementing coactors as SDF sub-workflows of conventional actors. Data is extracted from the read scope using input ports named according to the types of data to be extracted: e.g., a port named 'StringToken' will extract a single string token from the current read scope and provide it as input to the subworkflow on each firing; a port named 'StringToken+' will provide an array token containing one or more string tokens extracted from the read scope on each firing; etc.
Currently, metadata or annotations applied to the top-level collection in a scope match can also be extracted by specifying the key of the required metadata element (e.g., by naming a port 'StringToken [key=filename]'). One thing that can't be done is to access metadata applied to collections above the read scope that cascades down to it. This function would be very useful for reusing information across multiple invocations of a composite coactor.
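For clarity, the port-name convention described above can be captured in a single pattern; a hedged sketch in which the grammar is inferred from the examples in this report, not taken from Kepler source:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public final class PortNameConvention {
        // Inferred grammar: a token type, an optional '+' meaning one or
        // more, and an optional '[key=...]' selecting a metadata element.
        // Examples: "StringToken", "StringToken+", "StringToken [key=filename]".
        private static final Pattern GRAMMAR =
                Pattern.compile("(\\w+)(\\+)?(?:\\s*\\[key=([^\\]]+)\\])?");

        public static void describe(String portName) {
            Matcher m = GRAMMAR.matcher(portName);
            if (m.matches()) {
                System.out.println("type=" + m.group(1)
                        + " oneOrMore=" + (m.group(2) != null)
                        + " metadataKey=" + m.group(3));
            }
        }
    }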

Bug #3575 (New): A representation of COMAD collections on the file-system
https://projects.ecoinformatics.org/ecoinfo/issues/3575
2008-10-27, Daniel Zinn <dzinn@ucdavis.edu>

It would be useful if there were a representation of collections in the file system. In particular, I could imagine using directories to represent collections (say, named with a running number and the name of the collection). Then we could use files to represent data items and metadata (for collections and data). For each data token we could name the file after the type of the token (with a leading running id) and store a serialized version of the data token in it (which would be easy for strings, for example). We could use the same file name with a suffix of, say, .METADATA:owner to store the metadata with the key owner inside (possibly also with the type of the metadata in the filename).
This would not only make collections browsable via standard file-system tools; since distributed filesystems exist (such as the Hadoop filesystem), the size of these collections could easily scale up to TBs of data.
This is somewhat similar to request #3573 (Support for importing file contents automatically using CollectionSource), but aims more towards a general 'storage' backend for COMAD 2 collections.
I am not proposing that intermediary results should be represented as directories (though my request does not exclude this either). I am just requesting that, besides the XML representation of COMAD collections (which we currently have, right?), it would be good to have a representation that is file-system based. Like the XML representation, which is not materialized within a workflow run (i.e., while the actors execute), the directory representation need not be materialized inside the workflow. Instead it should be a user-friendly way of browsing (and even creating) COMAD collections with ordinary file-manipulation tools. You could then copy or move content from one collection (= directory) to another collection. This representation could then be used as input to workflows and could be an output format (giving a closed-loop system).

Besides regaining the power of simple file(system)-manipulation tools for COMAD collections, this representation could be stored on a distributed file system (e.g., the Hadoop fs), and a collection with its data could thus easily hold terabytes of data.
What I am proposing are two actors: one that reads a (special) directory into a COMAD collection, and one that can save any COMAD collection (stream) into a specially formatted directory.
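A hedged sketch of the naming scheme proposed above; the layout and helper class are my illustration of the report's convention, not an implemented format (note that ':' in filenames works only on filesystems that allow it):

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    // Illustration of the proposed layout, e.g.:
    //   01_ProjectData/                                <- collection dir
    //   01_ProjectData/01_StringToken                  <- serialized token
    //   01_ProjectData/01_StringToken.METADATA:owner   <- metadata, key "owner"
    public final class CollectionDirWriter {
        public static void writeStringToken(File collectionDir, int runningId,
                String value, String metadataKey, String metadataValue)
                throws IOException {
            String base = String.format("%02d_StringToken", runningId);
            try (FileWriter data = new FileWriter(new File(collectionDir, base))) {
                data.write(value);  // trivial serialization for strings
            }
            if (metadataKey != null) {
                File meta = new File(collectionDir, base + ".METADATA:" + metadataKey);
                try (FileWriter w = new FileWriter(meta)) {
                    w.write(metadataValue);
                }
            }
        }
    }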

Bug #3574 (New): Support for importing directory contents using CollectionSource
https://projects.ecoinformatics.org/ecoinfo/issues/3574
2008-10-27, Timothy McPhillips <mcphillips@ecoinformatics.org>

A common workflow pattern is to take as input all of the files (or those of a particular type) in a directory on a researcher's computer system. For example, there are COMAD workflows that process all the FASTA files in a directory, creating a collection for each FASTA file and storing the contained DNA or protein sequences in the corresponding input collections.
Once the CollectionSource actor is able to automatically import the contents of files (see bug 3573), it will be extremely useful to refer to directories in the XML input to CollectionReader or CollectionComposer and have the actor import all of the files it finds there. Another useful feature would be the option of having CollectionSource descend into sub-directories, creating a nested collection for each and importing contained files into the corresponding subcollections. Whole directories of scientific data files could then easily serve as input to COMAD workflows.
These features eventually could make it much easier to stage data for input to a workflow run without requiring modification of the workflow specification itself.
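A hedged sketch of the descent behavior; the SimpleCollection type is an invented stand-in for whatever COMAD's collection API actually provides:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    // Invented stand-in for a COMAD collection, used only to illustrate
    // the proposed directory descent; not an existing Kepler class.
    class SimpleCollection {
        final String name;
        final List<File> files = new ArrayList<File>();
        final List<SimpleCollection> subcollections = new ArrayList<SimpleCollection>();
        SimpleCollection(String name) { this.name = name; }
    }

    public final class DirectoryImporter {
        // Create a nested collection for each sub-directory and import
        // contained files into the corresponding subcollection.
        public static SimpleCollection importDir(File dir) {
            SimpleCollection c = new SimpleCollection(dir.getName());
            File[] entries = dir.listFiles();
            if (entries != null) {
                for (File entry : entries) {
                    if (entry.isDirectory()) {
                        c.subcollections.add(importDir(entry));
                    } else {
                        c.files.add(entry);
                    }
                }
            }
            return c;
        }
    }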