Metacat: Issues
https://projects.ecoinformatics.org/ecoinfo/
2007-02-09T04:27:21Z
Ecoinformatics Redmine
Bug #2764 (Resolved): LDAP client should handle referral failure correctly
https://projects.ecoinformatics.org/ecoinfo/issues/2764
2007-02-09T04:27:21Z
Jing Tao (tao@nceas.ucsb.edu)
<p>Today, a KNB website user reported that she could not register an LDAP account through the KNB web site. It turned out that a new referral, which is not up, had been added to the LDAP server. The referral failure caused ldapweb.cgi to fail. It seems that Morpho could not retrieve the LDAP tree correctly either.</p>
<p>The reason for the failure is that the LDAP client cannot handle the situation where a referral LDAP server is down.</p>

Bug #2748 (Resolved): MetaCatServlet.handleUploadAction() can cause data file deletion in the dat...
https://projects.ecoinformatics.org/ecoinfo/issues/2748
2007-01-25T19:42:43Z
Chris Jones (cjones@nceas.ucsb.edu)
<p>During the upload of data documents to Metacat 1.6.x, data documents that were previously uploaded can be deleted from Metacat's file storage area when the same file is uploaded in a second attempt. In MetaCatServlet.handleUploadAction(), DocumentImpl.registerDocument() is called after the data file has been created in the filesystem. If for some reason registerDocument() throws an exception (for instance, if the docid and revision are already taken), the data file is deleted, regardless of whether it was created in a previous transaction.</p>
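<p>One way to avoid this destructive cleanup is to record whether the file existed before the upload began, and only delete it on failure when this upload actually created it. The following is a hypothetical sketch, not the actual MetaCatServlet code:</p>

```java
import java.io.File;

/**
 * Hypothetical sketch only -- not the actual Metacat code. Remember whether
 * the data file already existed before this upload, and delete it after a
 * failed registration only if this upload created it.
 */
public final class UploadCleanup {

    /** Delete on failure only when the file did not exist beforehand. */
    public static boolean shouldDeleteOnFailure(boolean existedBefore) {
        return !existedBefore;
    }

    public static void handleUpload(File dataFile, Runnable registerDocument) {
        boolean existedBefore = dataFile.exists();
        // ... write the uploaded bytes to dataFile here ...
        try {
            registerDocument.run(); // stands in for DocumentImpl.registerDocument()
        } catch (RuntimeException e) {
            if (shouldDeleteOnFailure(existedBefore)) {
                dataFile.delete(); // safe: this upload created the file
            }
            // never delete a file that belongs to a previous transaction
            throw e;
        }
    }
}
```

<p>With this guard, a failed re-upload of an existing docid leaves the previously stored file untouched.</p>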
<p>The bug can be critical, since an entire Metacat data store could be deleted by calling action=upload on the existing data docids residing in the catalog. The data files remain registered in the catalog tables, but the files are physically gone from the data store.</p>

Bug #2747 (Resolved): AuthLdap.getGroups() doesn't follow referrals correctly when building group...
https://projects.ecoinformatics.org/ecoinfo/issues/2747
2007-01-25T19:09:46Z
Chris Jones (cjones@nceas.ucsb.edu)
<p>EML allows for both user and group-based access control, but as of Metacat 1.6.x, access control for groups is only partially functional. The problem arises in searching for groups that are defined in LDAP databases that are referrals in the main ecoinformatics LDAP tree.</p>
<p>Given the following two EML access directives:</p>
<pre><code><access order="allowFirst" scope="document"
        authSystem="ldap://ldap.ecoinformatics.org:389/dc=ecoinformatics,dc=org">
  <allow>
    <principal>cn=marine,dc=ecoinformatics,dc=org</principal>
    <permission>read</permission>
  </allow>
</access></code></pre>
<p>and</p>
<pre><code><access order="allowFirst" scope="document"
        authSystem="ldap://ldap.ecoinformatics.org:389/dc=ecoinformatics,dc=org">
  <allow>
    <principal>cn=data-managers,o=PISCOGROUPS,dc=ecoinformatics,dc=org</principal>
    <permission>read</permission>
  </allow>
</access></code></pre>
<p>a search for groups will succeed for the group cn=marine but will fail for the cn=data-managers group and all subsequent groups. This occurs after a NamingException is thrown when searching for group names in LDAP databases that are part of the ecoinformatics LDAP tree as referrals.</p>

Bug #2738 (Resolved): Announcement for when server will be unavailable
https://projects.ecoinformatics.org/ecoinfo/issues/2738
2007-01-19T22:28:50Z
Callie Bowdish (bowdish@nceas.ucsb.edu)
<p>I think we need some kind of policy regarding what to do when the KNB server is going to go offline for a fix. At least a 10-minute warning would be good, so that people can save their work or finish what they are doing. Currently we have a problem where the KNB Data Catalog Map can only be fixed by restarting Apache. We need some kind of announcement system for when the server is not going to be available.</p>
<p>Yesterday I saw a government site with an announcement, in red at the top of its database page, that their server was going to be down for repairs during a certain period. That is good for letting people know ahead of time. But what can we do for people who are online using the server at the time? Is it possible to announce to them too? A pop-up box might be blocked or unavailable.</p>

Bug #2716 (Resolved): KNB Data Catalog Map does not display points
https://projects.ecoinformatics.org/ecoinfo/issues/2716
2007-01-08T19:35:34Z
Callie Bowdish (bowdish@nceas.ucsb.edu)
<p>Metacat data bounds are not working for the KNB Data Catalog Map. <a class="external" href="http://knb.ecoinformatics.org/index_map.jsp">http://knb.ecoinformatics.org/index_map.jsp</a> displays the map with no points representing the datasets. In the option box, the "Dataset Bounds" text is crossed out.</p>
<p>Here are the errors Jing saw in the server log:</p>
<pre><code>org.vfny.geoserver.wms.WmsException: java.util.NoSuchElementException: Could not locate FeatureTypeConfig 'metacat:data_bounds'
    at org.vfny.geoserver.wms.requests.GetMapKvpReader.findLayer(GetMapKvpReader.java:1215)
    at org.vfny.geoserver.wms.requests.GetMapKvpReader.parseLayersParam(GetMapKvpReader.java:1190)
    at org.vfny.geoserver.wms.requests.GetMapKvpReader.parseLayersAndStyles(GetMapKvpReader.java:688)
    at org.vfny.geoserver.wms.requests.GetMapKvpReader.parseMandatoryParameters(GetMapKvpReader.java:389)
    at org.vfny.geoserver.wms.requests.GetMapKvpReader.getRequest(GetMapKvpReader.java:226)
    at org.vfny.geoserver.servlets.AbstractService.doGet(AbstractService.java:318)</code></pre>
<p>Jing also saw an error about a file that could not be found:</p>
<pre><code>/var/www/org.ecoinformatics.knb1/knb/data/metacat_shps/data_points.shp</code></pre>

Bug #2691 (Resolved): Update knbweb module to use map
https://projects.ecoinformatics.org/ecoinfo/issues/2691
2006-12-07T00:31:18Z
Matthew Perry (perry@nceas.ucsb.edu)
<p>The knbweb module needs to be integrated with our mapbuilder interface.</p>

Bug #2689 (Resolved): Upgrade to Geoserver 1.4
https://projects.ecoinformatics.org/ecoinfo/issues/2689
2006-12-07T00:27:42Z
Matthew Perry (perry@nceas.ucsb.edu)
<p>The current (12/06/2006) cvs head uses an early beta of geoserver 1.4. Though it works flawlessly for our purposes, it would be best to upgrade to the geoserver 1.4 final release to resolve any outstanding bugs and to make future upgrades and support easier.</p>

Bug #2675 (Resolved): column "infinity" does not exist
https://projects.ecoinformatics.org/ecoinfo/issues/2675
2006-11-22T17:29:45Z
Chad Berkley (berkley@nceas.ucsb.edu)
<p>When uploading certain xml files to metacat via the ecogrid, I get a message that says:<br /><error><br />ERROR: column "infinity" does not exist<br /></error></p>
<p>I'm not sure why it's looking for this column. You can reproduce it from kepler by trying to upload the Current Time actor to the library. Here is a full error from kepler:</p>
<pre><code>[java] got lsid client
[java] checking if lsid urn:lsid:kepler-project.org:actor:2:1 is already registered
[java] EcogridUtils: The time to create instance is =========== 0
[java] is registered? false
[java] Creating transport KAR file at /Users/berkley/.kepler/cache/tmp/tmp.kar
[java] done writing KAR file to /Users/berkley/.kepler/cache/tmp/tmp.kar
[java] uploading kar file with id urn:lsid:kepler-project.org:kar:7:1
[java] session id: 4EB5CA645287A4E729BCD30072EBCABA
[java] EcogridUtils: The time to create instance is =========== 0
[java] uploaded kar file with id urn:lsid:kepler-project.org:kar:7:1
[java] uploading actor metadata with id urn:lsid:kepler-project.org:actor:2:1
[java] session id: 4EB5CA645287A4E729BCD30072EBCABA
[java] EcogridUtils: The time to create instance is =========== 0
[java] repository: name=keplerRepository, repository=localhost:8080, username=uid=kepler,o=unaffiliated,dc=ecoinformatics,dc=org
[java] org.kepler.objectmanager.repository.RepositoryException: java.rmi.RemoteException: <?xml version="1.0"?>
[java] <error>
[java] ERROR: column "infinity" does not exist
[java] </error></code></pre>
<pre><code>[java] at org.kepler.objectmanager.repository.EcogridRepository.put(EcogridRepository.java:176)
[java] at org.kepler.gui.UploadToRepository.upload(UploadToRepository.java:273)
[java] at org.kepler.gui.UploadToRepository.access$000(UploadToRepository.java:75)
[java] at org.kepler.gui.UploadToRepository$UploadSwingWorker.construct(UploadToRepository.java:449)
[java] at util.SwingWorker$2.run(SwingWorker.java:122)
[java] at java.lang.Thread.run(Thread.java:613)</code></pre>

Bug #2670 (Resolved): Test Metacat version with updates does not link to the "create a new accoun...
https://projects.ecoinformatics.org/ecoinfo/issues/2670
2006-11-14T21:48:26Z
Callie Bowdish (bowdish@nceas.ucsb.edu)
<p>This is not a bug in the production version of Metacat (1.6.0). The "head" version of Metacat does not link to the form page where a user can create a new account. The production server goes to this location:</p>
<p><a class="external" href="http://knb.ecoinformatics.org/cgi-bin/ldapweb.cgi?cfg=knb">http://knb.ecoinformatics.org/cgi-bin/ldapweb.cgi?cfg=knb</a></p>
<p>The test server does not link to the form. I only get the top of the page, without the form: it has the "Register for the Knowledge Network for Biocomplexity (KNB)!" title and information, but no form.</p>
<p>On the test Metacat server the link from the KNB homepage that says create a new account goes to <a class="external" href="http://ldap.ecoinformatics.org/cgi-bin/ldapweb.cgi">http://ldap.ecoinformatics.org/cgi-bin/ldapweb.cgi</a>. This does not work.</p>
<p>On the production server the link goes to <a class="external" href="http://knb.ecoinformatics.org/cgi-bin/ldapweb.cgi?cfg=knb">http://knb.ecoinformatics.org/cgi-bin/ldapweb.cgi?cfg=knb</a>. This works.</p>

Bug #2669 (Resolved): Mapbuilder incompatible w/ Safari, Opera
https://projects.ecoinformatics.org/ecoinfo/issues/2669
2006-11-14T19:48:00Z
Matthew Perry (perry@nceas.ucsb.edu)
<p>Mapbuilder, our current web mapping client, uses the browser to do XSLT transforms on the client side. A number of browsers do not allow XSLT to be accessed from javascript, most notably Safari, Opera and Konqueror.</p>
<p>See <a class="external" href="http://developer.apple.com/internet/safari/faq.html#anchor21">http://developer.apple.com/internet/safari/faq.html#anchor21</a></p>
<p>This is a fundamental problem and the mapbuilder developers have stated that they have no plans to support browsers without javascript XSLT.</p>
<p>We have two options to remedy this:</p>
<p>- Switch to a different web mapping client<br />- Detect the browser and alert the user when their browser is not compatible (and don't display the broken map, of course).</p>

Bug #2648 (Resolved): Update broken LTER link in web templates
https://projects.ecoinformatics.org/ecoinfo/issues/2648
2006-11-07T23:17:29Z
Duane Costa (dcosta@lternet.edu)
<p>All instances of the following LTER link in metacat's web templates:</p>
<pre><code><a class="external" href="http://sql.lternet.edu/scripts/intranet/sendmemypassword.pl">http://sql.lternet.edu/scripts/intranet/sendmemypassword.pl</a></code></pre>
<p>should be replaced by the following link:</p>
<pre><code><a class="external" href="http://savanna.lternet.edu/sendpassword.php">http://savanna.lternet.edu/sendpassword.php</a></code></pre>

Bug #2554 (Resolved): Store the spatial data cache outside servlet context
https://projects.ecoinformatics.org/ecoinfo/issues/2554
2006-09-13T19:33:41Z
Matthew Perry (perry@nceas.ucsb.edu)
<p>Currently the spatial data cache is stored within the servlet context. Geoserver's configuration hardcodes the shapefile path so storing it within the context allows you to specify relative paths in the geoserver config files. However, if you reinstall metacat, the cache will be replaced and will need to be regenerated on every "ant install". This is obviously a problem for developers who might rebuild frequently.</p>
<p>The proposed solution involves configuring the shp paths in build.properties and updating the geoserver config files at build time. Doing this with ant tokens would be straightforward, but ideally we'd want another solution (XML-DOM based, since all geoserver configuration files are xml).</p>
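<p>A minimal sketch of the ant-token approach described above. The property name, token name, and directory layout here are illustrative assumptions, not Metacat's actual build files:</p>

```xml
<!-- build.properties (hypothetical key):
     spatial.cache.dir=/var/metacat/spatial-cache -->

<!-- build.xml: copy the geoserver config at build time, replacing a
     @SPATIAL_CACHE_DIR@ token with the configured absolute path -->
<copy todir="${deploy.dir}/geoserver/conf">
  <fileset dir="lib/geoserver/conf"/>
  <filterset>
    <filter token="SPATIAL_CACHE_DIR" value="${spatial.cache.dir}"/>
  </filterset>
</copy>
```

<p>Because the token substitution happens at deploy time, the cache directory itself lives outside the servlet context and survives an "ant install".</p>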
<p>The paths would also need to be in metacat.properties so they could be accessed by the spatial harvester class.</p>

Bug #2552 (Resolved): Spatial query class to use geotools against the spatial cache
https://projects.ecoinformatics.org/ecoinfo/issues/2552
2006-09-11T22:22:19Z
Matthew Perry (perry@nceas.ucsb.edu)
<p>Currently the spatial query is run with a standard metacat squery. Besides being inefficient, it is also inaccurate since it doesn't take into account some fundamental quirks in spatial relationships (the international dateline, multiple polygons representing the same feature, odd shaped polygons or holes).</p>
<p>The idea would be to write a class that, given a spatial query (bbox or point), would use geotools to query the actual spatial cache and return a list of matching docids.</p>

Bug #2551 (Resolved): Generalized spatial xpaths for multiple schemas
https://projects.ecoinformatics.org/ecoinfo/issues/2551
2006-09-11T22:16:43Z
Matthew Perry (perry@nceas.ucsb.edu)
<p>The spatial harvester is currently generic enough to handle any xml document with west, east, north, and south xpaths. These are configured in metacat.properties.</p>
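<p>For illustration, the current single-value style of configuration might look like this. The key names are hypothetical, not the actual metacat.properties entries; the xpath values point at EML's bounding-coordinate elements:</p>

```properties
# hypothetical metacat.properties entries: one xpath per bound,
# so only one schema's geographic coverage can be harvested
spatial.xpath.west=//boundingCoordinates/westBoundingCoordinate
spatial.xpath.east=//boundingCoordinates/eastBoundingCoordinate
spatial.xpath.north=//boundingCoordinates/northBoundingCoordinate
spatial.xpath.south=//boundingCoordinates/southBoundingCoordinate
```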
<p>However, they are single values and thus a metacat instance can only support spatial options for one schema at a time.</p>
<p>We need to make this generic enough that multiple supported schemas can coexist in the same database, with their geographic coverages all represented in the spatial cache.</p>

Bug #2550 (Resolved): Dateline and polar handling for points
https://projects.ecoinformatics.org/ecoinfo/issues/2550
2006-09-11T22:08:15Z
Matthew Perry (perry@nceas.ucsb.edu)
<p>Unfortunately for cartographers, the world is not flat. When a feature crosses the dateline or the polar regions, the cartesian coordinate system and all the assumptions that go with it are invalid. For instance, the west bounding coordinate would be greater than the east bounding coordinate for a bounding box that crossed the international date line.</p>
<p>This is taken care of in the polygon code by splitting such polygons into multi-polygons.</p>
<p>We need to update the point centroid generation code to reflect this reality as well.</p>
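<p>The dateline half of the fix can be sketched as follows. This is a hypothetical helper, not the existing Metacat code, and it handles only the dateline case; polar features need separate treatment. When west is greater than east, the box crosses the dateline, so east is unwrapped by 360 degrees before averaging, and the result is wrapped back into [-180, 180]:</p>

```java
public final class DatelineCentroid {

    /**
     * Returns the {lon, lat} centroid of a bounding box, handling boxes
     * that cross the international date line (signalled by west > east).
     * Hypothetical sketch only; polar regions are not handled here.
     */
    public static double[] centroid(double west, double east,
                                    double south, double north) {
        double lat = (south + north) / 2.0;
        double e = east;
        if (west > e) {      // box crosses the dateline
            e += 360.0;      // unwrap east so the span is contiguous
        }
        double lon = (west + e) / 2.0;
        if (lon > 180.0) {   // wrap back into [-180, 180]
            lon -= 360.0;
        }
        return new double[] { lon, lat };
    }
}
```

<p>For a box from 160E to 170W, this yields a centroid at longitude 175, on the correct side of the dateline, where a naive average of the bounds would give the bogus longitude -5.</p>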