Kepler: Issues (Ecoinformatics Redmine)
https://projects.ecoinformatics.org/ecoinfo/
Updated 2005-10-19T22:37:24Z
Bug #2234 (Resolved): creating KAR file loses port semantic annotations
https://projects.ecoinformatics.org/ecoinfo/issues/2234
2005-10-19T22:37:24Z, Matt Jones (jones@nceas.ucsb.edu)
<p>I created an R actor on the canvas, annotated it and its ports with terms from the ontologies, and then saved the actor as a KAR file. When I drag the new actor from the tree to the canvas, the clone that is put on the canvas lacks its port annotations. Looking in .kepler/actorLibrary, the library file is missing the annotations for the port as well. Both the actorLibrary and the cloned copies should have all of the original annotations.</p>
<p>This was tested against the Kepler CVS HEAD on 19 October 2005, ~2pm.</p>

Bug #2044 (Resolved): R actor for means and error by group
https://projects.ecoinformatics.org/ecoinfo/issues/2044
2005-03-11T23:37:40Z, Matt Jones (jones@nceas.ucsb.edu)
<p>This bug is to create an actor that wraps an R script (using the generic R actor described in bug <a class="issue tracker-1 status-3 priority-5 priority-highest closed" title="Bug: need R actor (Resolved)" href="https://projects.ecoinformatics.org/ecoinfo/issues/1342">#1342</a>) for calculating the mean, standard deviation, and standard error of a numeric variable that is passed into the script. Optionally the stats should be calculated using a grouping variable (by group), with one mean/std/se for each group.</p>
<p>There are two inputs:<br /> 1) dataValue (type double)<br /> 2) group (type string)</p>
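Given these two inputs, the per-group summary this actor is meant to produce can be sketched outside the workflow. The actor itself wraps an R script, so the following Python sketch is only illustrative (function and variable names are hypothetical):

```python
from collections import defaultdict
from math import sqrt
from statistics import mean, stdev

def summarize_by_group(tuples):
    """Accumulate (dataValue, group) tuples and compute the mean, sample
    standard deviation, and standard error for each group."""
    groups = defaultdict(list)
    for value, group in tuples:
        groups[group].append(value)
    rows = []
    for group, values in sorted(groups.items()):
        m = mean(values)
        sd = stdev(values) if len(values) > 1 else 0.0  # stdev needs >= 2 values
        se = sd / sqrt(len(values))
        rows.append((group, m, sd, se))
    return rows

# Example: three values in group "a", two in group "b"
data = [(1.0, "a"), (2.0, "a"), (3.0, "a"), (10.0, "b"), (14.0, "b")]
for row in summarize_by_group(data):
    print(row)
```

In the R script the actor wraps, the same by-group calculation could be done with `aggregate()` over the grouping variable, emitting one row of {group, mean, stdev, stderr} per group.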
<p>The actor will accumulate all data passed in as a series of {dataValue, group} tuples and pass this along to R to calculate the mean, stdev, and stderr by group. If group is not present then the tuples will be {dataValue}. When the R script is finished, a matrix of {group, mean, stdev, stderr} will be passed back to the R actor and then emitted on the output port. So for N tokens that come in on the input port, the actor will output 1 token on the output port after all input data has been received (this is a grouping operation).</p>

Bug #2043 (Resolved): create suite of common statistical actors using R
https://projects.ecoinformatics.org/ecoinfo/issues/2043
2005-03-11T23:18:49Z, Matt Jones (jones@nceas.ucsb.edu)
<p>We need to create a suite of commonly useful analytical actors using the R actor that is described in bug <a class="issue tracker-1 status-3 priority-5 priority-highest closed" title="Bug: need R actor (Resolved)" href="https://projects.ecoinformatics.org/ecoinfo/issues/1342">#1342</a>. These actors should require zero 'plumbing' by the user -- all they should need to do is hook up their data to the ports and hit run. These will include some common operations such as means and error by group, linear regression, anova, t-test, etc. Each of these will be described in its own bug and linked to this bug as a blocker.</p>

Bug #1891 (Resolved): write design for staged implementation of grid-enabled kepler
https://projects.ecoinformatics.org/ecoinfo/issues/1891
2005-01-20T19:01:19Z, Matt Jones (jones@nceas.ucsb.edu)
<p>During the January 2004 meeting we discussed grid-enabling kepler. Matt and Ilkay agreed to write this up as a design document.</p>

Bug #1748 (Resolved): "Get metadata" menu item only works for EML 2.0.0 documents
https://projects.ecoinformatics.org/ecoinfo/issues/1748
2004-10-27T09:56:43Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Currently the "Get metadata" function only works for EML 2.0.0. The version is hardcoded in the code, but instead it should be determined from the namespace of the EML document and then used to look up the appropriate XSLT stylesheet to be used. This is a fairly straightforward change, but EML200DataSource needs to be extended to provide access to the namespace information.</p>

Bug #1747 (Resolved): provide display of full metadata for EML200DataSource actors
https://projects.ecoinformatics.org/ecoinfo/issues/1747
2004-10-27T09:46:47Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Current EML data source actors only display their title. This is not enough information to evaluate and use a data set. Need to display the full EML record on request.</p>

Bug #1717 (Resolved): EML200 data source throws error on 'Look Inside'
https://projects.ecoinformatics.org/ecoinfo/issues/1717
2004-10-13T21:37:06Z, Matt Jones (jones@nceas.ucsb.edu)
<p>After dragging an EML200 data source from the result list window to the canvas, if one clicks on 'Look Inside' to configure it, it throws an exception. This does not happen with the EML200 Simple Plot Example -- but happens consistently with some other data packages. The exception is pasted below.</p>
<p>Steps to reproduce:</p>
<p>1) Open kepler and launch EML2 Simple plot workflow<br />2) Click on data tab, search for 'mollusc'<br />3) Choose the first of the GCE mollusc data package and drag to canvas<br />4) Right-click and choose 'Look Inside'</p>
<p>The error generated is:</p>
<pre><code>ptolemy.kernel.util.IllegalActionException: Cannot graphically edit a model that is not a CompositeEntity. Model is a org.ecoinformatics.seek.datasource.eml.eml2.Eml200DataSource {.eml-simple-plot-new.Mollusc population abundance monitoring: Fall 2001 mid-marsh and creekbank infaunal and epifaunal mollusc abundance based on collections from GCE marsh, monitoring sites 1-}
  in .configuration.directory.effigy.Mollusc population abundance monitoring: Fall 2001 mid-marsh and creekbank infaunal and epifaunal mollusc abundance based on collections from GCE marsh, monitoring sites 1-.graphTableau
  at ptolemy.vergil.actor.ActorGraphTableau.<init>(ActorGraphTableau.java:98)
  at ptolemy.vergil.actor.ActorGraphTableau$Factory.createTableau(ActorGraphTableau.java:162)
  at ptolemy.actor.gui.PtolemyTableauFactory.createTableau(PtolemyTableauFactory.java:98)
  at ptolemy.actor.gui.TableauFactory.createTableau(TableauFactory.java:122)
  at ptolemy.actor.gui.Configuration.createPrimaryTableau(Configuration.java:193)
  at ptolemy.actor.gui.Configuration._openModel(Configuration.java:672)
  at ptolemy.actor.gui.Configuration.openModel(Configuration.java:458)
  at ptolemy.actor.gui.Configuration.openModel(Configuration.java:416)
  at ptolemy.vergil.actor.ActorController$LookInsideAction.actionPerformed(ActorController.java:632)
  at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1786)
  at javax.swing.AbstractButton$ForwardActionEvents.actionPerformed(AbstractButton.java:1839)
  at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:420)
  at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:258)
  at javax.swing.AbstractButton.doClick(AbstractButton.java:289)
  at javax.swing.plaf.basic.BasicMenuItemUI.doClick(BasicMenuItemUI.java:1113)
  at javax.swing.plaf.basic.BasicMenuItemUI$MouseInputHandler.mouseReleased(BasicMenuItemUI.java:943)
  at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:231)
  at java.awt.Component.processMouseEvent(Component.java:5100)
  at java.awt.Component.processEvent(Component.java:4897)
  at java.awt.Container.processEvent(Container.java:1569)
  at java.awt.Component.dispatchEventImpl(Component.java:3615)
  at java.awt.Container.dispatchEventImpl(Container.java:1627)
  at java.awt.Component.dispatchEvent(Component.java:3477)
  at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:3483)
  at java.awt.LightweightDispatcher.processMouseEvent(Container.java:3198)
  at java.awt.LightweightDispatcher.dispatchEvent(Container.java:3128)
  at java.awt.Container.dispatchEventImpl(Container.java:1613)
  at java.awt.Window.dispatchEventImpl(Window.java:1606)
  at java.awt.Component.dispatchEvent(Component.java:3477)
  at java.awt.EventQueue.dispatchEvent(EventQueue.java:456)
  at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:201)
  at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
  at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:145)
  at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:137)
  at java.awt.EventDispatchThread.run(EventDispatchThread.java:100)</code></pre>

Bug #1655 (In Progress): DIALOGS: Implement New UI for Sources Dialog (was "integrate ecogrid reg...
https://projects.ecoinformatics.org/ecoinfo/issues/1655
2004-08-16T19:25:37Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Currently EcoGrid access in Kepler works, but the EcoGrid node that is queried is a single configurable parameter. This needs to be changed to dynamically access the EcoGrid registry and get the list of nodes to be queried from the registry. Then the kepler ecogrid query would be launched against these nodes, possibly with the ability for the user to configure which nodes to search.</p>
<p>When searching multiple nodes, the results from each node need to be integrated. The code for integrating the resultsets needs to accommodate different namespaces (e.g., EML and Darwin Core) in the result set. The EcoGrid web interface will also need this capability, so we can hopefully use the same resultset integration code in both applications.</p>
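The resultset integration described above can be sketched simply, assuming each node returns records tagged with the namespace of their metadata schema. This is only a sketch; the node names, namespace labels, and field names below are all hypothetical:

```python
def merge_resultsets(resultsets):
    """Merge per-node result lists into a single list, keeping each
    record's source node and metadata namespace so that mixed results
    (e.g. EML and Darwin Core) can be displayed and filtered uniformly."""
    merged = []
    for node, records in resultsets.items():
        for rec in records:
            merged.append({"node": node,
                           "namespace": rec["namespace"],
                           "title": rec["title"]})
    # Sort by title for a stable, node-independent display order
    merged.sort(key=lambda r: (r["title"], r["node"]))
    return merged

results = {
    "node-a": [{"namespace": "eml", "title": "Mollusc abundance"}],
    "node-b": [{"namespace": "darwin-core", "title": "Bird specimens"}],
}
for rec in merge_resultsets(results):
    print(rec)
```

Keeping the namespace on every merged record is the key point: a display layer (Kepler or the EcoGrid web interface) can then choose a namespace-appropriate renderer per record instead of assuming one schema for the whole resultset.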
<p>The current registry provides minimal metadata about each node, but eventually we expect to have rich metadata (such as supported query namespaces, load, coverage, etc). Once this metadata is available, the kepler client can be smarter about which nodes are searched, possibly suggesting an appropriate subset of nodes from the list.</p>

Bug #1579 (Resolved): need sub-sampling actor
https://projects.ecoinformatics.org/ecoinfo/issues/1579
2004-05-25T16:36:07Z, Matt Jones (jones@nceas.ucsb.edu)
<p>We need an actor that can perform various sub-sampling functions such as jackknife and bootstrapping operations. Most ecologists use R and similar systems for this type of work. We specifically need to do some spatially-explicit sub-sampling for the GARP pipeline. In addition, we could probably create a sub-sampling actor as a composite actor that uses the ptolemy signal processing actors.</p>

Bug #1548 (In Progress): consolidating data access user interfaces
https://projects.ecoinformatics.org/ecoinfo/issues/1548
2004-04-30T18:52:02Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Currently Kepler contains several distinct methods for binding data sources to a workflow. These include the EML200DataSource actor, the JDBC data source actor(s), the incipient EcoGrid access interfaces, the GridFTP actor, and probably others. Each of these exposes the data in a different way, so the same data ends up being represented in multiple, confusing ways. We need to consolidate these approaches to find a single UI that can encapsulate all of the data access approaches.</p>
<p>This proposal is to use and adapt the user interface described in kepler/docs/dev/screenshots and related design documents to data access in EcoGrid, GridFTP, JDBC, and other sources. This would allow a user to view data uniformly in the workflow, regardless of which data access protocol is used to get the data. This would also allow the user to specify subsetting constraints (WHERE clause) uniformly, and to choose which attributes from the joined relations are exposed to the workflow. Finally, it would allow us to use richer metadata descriptions of underspecified data sources (like those found at the other end of JDBC connections) so that the user (and ultimately the SEEK SMS system) can reason about these data sources effectively.</p>

Bug #1546 (Resolved): dynamic data and actor views using ontologies
https://projects.ecoinformatics.org/ecoinfo/issues/1546
2004-04-30T18:06:14Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Current lists of actors (and planned data sets) are static in that the tree is statically written into a MoML model and displayed. This severely limits the user's ability to find appropriate actors (and data sets) as the number of actors grows. The current tree is a combination of functional and project-oriented folders, with no consistent classification.</p>
<p>This proposal is to generate dynamic views of the actors and data sets by organizing the actors into trees using simple ontologies and controlled vocabularies. Each actor (in its MoML code) and each data set (in its metadata description) would contain term references that are drawn from one or more ontologies. For example, an actor might be classified as belonging to the Class "SimulationModel" while another actor might belong to the Class "AnalyticalModel". If both AnalyticalModel and SimulationModel are subclasses of "Model", then we could display a dynamically generated tree like this:</p>
<pre><code>Model
|__ SimulationModel
|__ AnalyticalModel
|__ NumericalModel
|__ IndividualBasedModel</code></pre>
<p>with each of the Actors displayed at the appropriate node in the tree. Of course, if SimulationModel has subclasses itself, those could either be collapsed to show all models under SimulationModel, or additional levels of the tree can be added.</p>
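A dynamically generated tree of this kind can be built directly from a subclass map. The following Python sketch (class names taken from the example above; function names are hypothetical) renders the hierarchy with each actor attached at the class it is annotated with:

```python
def build_tree(subclass_of, actors, root):
    """Render a class hierarchy (a child -> parent map) as an indented
    tree, listing each actor under the class it is annotated with."""
    children = {}
    for child, parent in subclass_of.items():
        children.setdefault(parent, []).append(child)
    lines = []

    def walk(node, depth):
        prefix = "" if depth == 0 else "  " * (depth - 1) + "|__ "
        lines.append(prefix + node)
        for actor in sorted(actors.get(node, [])):  # actors classified at this node
            lines.append("  " * depth + "* " + actor)
        for child in sorted(children.get(node, [])):
            walk(child, depth + 1)

    walk(root, 0)
    return "\n".join(lines)

hierarchy = {"SimulationModel": "Model", "AnalyticalModel": "Model"}
actors = {"AnalyticalModel": ["LinearRegression"]}
print(build_tree(hierarchy, actors, "Model"))
```

Collapsing a class's subtree, as suggested above, would then simply mean skipping the recursive call and listing all descendant actors at the parent node instead.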
<p>The same scenario applies to data sets, allowing people to browse data according to a particular classification ontology. For example, data could be classified as applying to certain types of measurements:</p>
<pre><code>PhysicalMeasurement
ChemicalMeasurement
BiologicalMeasurement
|__ MolecularMeasurement
|__ CellularMeasurement
|__ TissueMeasurement
|__ OrganismMeasurement
|__ PopulationMeasurement
|__ CommunityMeasurement
|__ EcosystemMeasurement</code></pre>
<p>Although this example is somewhat contrived, it illustrates the type of ontology one might use. We need to talk to some domain scientists to determine an appropriate set of classifications for data.</p>
<p>Switching classification schemes would be done dynamically, on-the-fly. The set of ontologies that are available for display would need to somehow be limited to a meaningful set (all of the classes in even a small, simple ontology would overwhelm the user). This could probably be set through a configuration. In addition, the ontologies would need to be stored in a Kepler-accessible location, possibly included with the release.</p>

Bug #1545 (Resolved): need SAS actor
https://projects.ecoinformatics.org/ecoinfo/issues/1545
2004-04-30T17:50:07Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Need an actor that can execute SAS jobs. Initially start by implementing this locally, possibly utilizing the commandline actor. In addition, we might want to build a web service (authenticated, for licensing reasons) that allows these jobs to be executed on remote nodes.</p>

Bug #1544 (Resolved): complete web service actor
https://projects.ecoinformatics.org/ecoinfo/issues/1544
2004-04-30T17:45:55Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Fix web service actor code --> Ilkay, Chad</p>
<p>This involves completing the code for a wider variety of data types, and making sure that we support arrays.</p>
<p>1) New architecture for the class<br />2) Multiple outputs<br />3) Use Java APIs<br />4) Add additional type support --> to support arrays of {int, short, long, string, double, float, boolean}; Date</p>

Bug #1359 (Resolved): Test bug for the SDM Users group, please ignore
https://projects.ecoinformatics.org/ecoinfo/issues/1359
2004-03-04T16:43:23Z, Matt Jones (jones@nceas.ucsb.edu)
<p>This is a bugzilla test, please ignore.</p>

Bug #1143 (In Progress): need SAS actor
https://projects.ecoinformatics.org/ecoinfo/issues/1143
2003-09-15T18:03:34Z, Matt Jones (jones@nceas.ucsb.edu)
<p>Need to be able to run SAS jobs from within Ptolemy. Initially this could be a port of the Monarch engine that handles this.</p>
<p>Later on this could be run by having SAS on a server exposed as a Grid Service, and therefore accessible through a GridService agent in Ptolemy. I'll post a separate bug for creating GridService agents.</p>
<p>To close this bug, simply create an initial mechanism for running SAS jobs from within Ptolemy.</p>