use 2.3.0 for this next release of metacat.
Fixed a bug where the configuration page showed as complete even though the DataONE configuration had not been done yet.
Add a util class for DataONE configuration.
Add code to display the SchemaModification exception.
Fixed a catch clause whose syntax is only compatible with Java 1.7.
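The Java 1.7-only construct was most likely a multi-catch clause; a minimal sketch of the backward-compatible rewrite, with illustrative method and exception types (not the actual Metacat code):

```java
import java.io.IOException;
import java.io.InputStream;

import org.xml.sax.SAXException;

public class CatchClauseExample {

    // Java 1.7-only multi-catch (does not compile under Java 1.6):
    //   catch (IOException | SAXException e) { ... }
    //
    // Java 1.6-compatible form: one catch clause per exception type.
    public void parse(InputStream in) {
        try {
            parseDocument(in);
        } catch (IOException e) {
            handle(e);
        } catch (SAXException e) {
            handle(e);
        }
    }

    private void parseDocument(InputStream in) throws IOException, SAXException {
        // hypothetical parsing call
    }

    private void handle(Exception e) {
        System.err.println("Parsing failed: " + e.getMessage());
    }
}
```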
Add code to overwrite the schema.xml in solr-home/conf.
Fixed a bug where the group information couldn't be retrieved from the session.
Change the guid.ezid.uritemplate.metadata property value to the hostname only, with no context
correctly configure metacat-index to use metacat context/deployment location. https://projects.ecoinformatics.org/ecoinfo/issues/6138
Reviewed all uses of FileInputStream, checking whether each method should be closing the stream and, if so, closing it in that method's finally clause to ensure we don't leak file descriptors.
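A minimal sketch of the close-in-finally pattern described here, using a hypothetical read method; the stream is closed in the finally clause so the file descriptor is released even when an exception is thrown:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class ReadExample {

    public byte[] readFromFileSystem(File file) throws IOException {
        FileInputStream fis = null;
        try {
            fis = new FileInputStream(file);
            byte[] buffer = new byte[(int) file.length()];
            int offset = 0;
            while (offset < buffer.length) {
                int read = fis.read(buffer, offset, buffer.length - offset);
                if (read < 0) {
                    break;
                }
                offset += read;
            }
            return buffer;
        } finally {
            // always release the file descriptor, even on error
            if (fis != null) {
                fis.close();
            }
        }
    }
}
```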
Separate the reindex and reindexall actions.
Closing some more streams that were left open. Bug #6136 seems to be pervasive and is going to require an extensive audit to find all of the places where streams are not closed properly.
Refactor to use IOUtils.closeQuietly(), which handles nulls and streams that are already closed.
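A sketch of the refactored form, assuming Apache Commons IO is on the classpath (the copy method is illustrative); closeQuietly() tolerates null references and already-closed streams, so the finally block stays simple:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.commons.io.IOUtils;

public class CopyExample {

    public void copy(File source, File target) throws IOException {
        FileInputStream in = null;
        FileOutputStream out = null;
        try {
            in = new FileInputStream(source);
            out = new FileOutputStream(target);
            IOUtils.copy(in, out);
        } finally {
            // safe even if the streams are null or already closed
            IOUtils.closeQuietly(in);
            IOUtils.closeQuietly(out);
        }
    }
}
```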
Added close() to finally block for readFromFileSystem() call.
Close FileOutputStream handles so that OS limits on file handles are not exceeded.
Add code to delete existing records for an id when we try to add that id to the index_event table.
do not modify existing SystemMetadata on MN.update() if something goes wrong during content insertion. https://projects.ecoinformatics.org/ecoinfo/issues/6101
Refer to metacat.war deployments since those are now the default. https://projects.ecoinformatics.org/ecoinfo/issues/6082
use UTF-8 if request encoding not given. https://projects.ecoinformatics.org/ecoinfo/issues/6100
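A minimal sketch of the fallback, using the standard servlet API (the wrapper method name is illustrative):

```java
import java.io.UnsupportedEncodingException;

import javax.servlet.http.HttpServletRequest;

public class EncodingExample {

    // Default to UTF-8 when the client did not declare a request encoding.
    public void ensureEncoding(HttpServletRequest request) throws UnsupportedEncodingException {
        if (request.getCharacterEncoding() == null) {
            request.setCharacterEncoding("UTF-8");
        }
    }
}
```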
require authenticated session when minting a DOI/other identifier. https://projects.ecoinformatics.org/ecoinfo/issues/6086
include the dataone.subject (node admin) in the list of administrators. This allows full administrative access to objects when using certificates+d1 api. https://projects.ecoinformatics.org/ecoinfo/issues/6086
use optional template for registering DOIs at a given target. https://projects.ecoinformatics.org/ecoinfo/issues/6092
only attempt to generate OREs for objects that we know do not already have them. https://projects.ecoinformatics.org/ecoinfo/issues/6061
Generate an ORE when sci metadata is added via the Metacat API. https://projects.ecoinformatics.org/ecoinfo/issues/6061
better checking for ORE maps when publishing DOIs (need to update the packages that contain sci meta). https://projects.ecoinformatics.org/ecoinfo/issues/6061
implement ORE check method to actually query the MN for OREs that reference the given pid. https://projects.ecoinformatics.org/ecoinfo/issues/6061
change hazelcast group name to match the current context. https://projects.ecoinformatics.org/ecoinfo/issues/5624
check both previous and current data revisions when updating packages. https://projects.ecoinformatics.org/ecoinfo/issues/5647
avoid SQL errors when processing very old objects of type: "-//ecoinformatics.org//eml-access-2.0.0beta6//EN"
If the pathquery engine is disabled, the xml path index queue will be disabled as well.
If the xpath query is disabled, the query, squery and spatial_query actions will be disabled as well.
Add code to throw an exception if pathquery is not enabled. We also need to disable index building if pathquery is disabled.
use consistent file names and zip content names. Opted for "-" separator so that the zip writer does not remove the unique part of the filename. https://projects.ecoinformatics.org/ecoinfo/issues/6054
copy the original systemMetadata when publishing a revision in order to avoid overwriting the original values - the shared map is listening! https://projects.ecoinformatics.org/ecoinfo/issues/6014
include the user's fullName when validating a session. Also allow the cookie session to be used if it is not passed in directly as a parameter
Add code to modify the web.xml in the metacat-index context if the Metacat context name is not knb.
include filename in the package download, though we can't really use the PID because of all the potential characters in it that are not valid for filesystems.
type the doctype="metadata" objects as "FGDC-STD-001-1998" formatId for rendering XSLT and for DataONE SystemMetadata.
use a custom FileInputStream subclass to delete the temporary bagit zip when the InputStream is closed (after someone has downloaded the zip).
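A sketch of the pattern (class name hypothetical): subclass FileInputStream and remove the backing temp file once the caller closes the stream:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

/**
 * Deletes the backing file (e.g. a temporary bagit zip) when the stream is closed.
 */
public class DeleteOnCloseFileInputStream extends FileInputStream {

    private final File file;

    public DeleteOnCloseFileInputStream(File file) throws FileNotFoundException {
        super(file);
        this.file = file;
    }

    @Override
    public void close() throws IOException {
        try {
            super.close();
        } finally {
            // remove the temporary file once the download is finished
            file.delete();
        }
    }
}
```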
use ObjectFormatCache instead of ObjectFormatService because we are not calling it as a CN.
allow running the Harvester client without a source code checkout. (D. Blankman comments)
include the localid when rendering the view (used in stylesheets)
use ObjectFormatCache.getInstance().getFormat() instead of the CN service (the MN does not typically act as a CN!)
add slash for harvesterRegistration redirect.
include mn.publish() REST endpoint handling. https://projects.ecoinformatics.org/ecoinfo/issues/6024
comment out the index queue call when archive() is called - I think it is causing the duplicate events for the listener. https://projects.ecoinformatics.org/ecoinfo/issues/6030
implement the view service (uses existing skin-based dbtransform) - and include the REST endpoint. https://projects.ecoinformatics.org/ecoinfo/issues/6028
use StreamSource instead of StringReader for method signature -- can be used with different sources this way. https://projects.ecoinformatics.org/ecoinfo/issues/6019
clean up DBTransform in preparation for "view" service. https://projects.ecoinformatics.org/ecoinfo/issues/6019
include GET /package/{pid} endpoint in MN service. https://projects.ecoinformatics.org/ecoinfo/issues/6027
MN.getPackage() - test with ORE that includes 2 data files and a "metadata" file (still should be using EML for that test). https://projects.ecoinformatics.org/ecoinfo/issues/6026
First pass at MN.getPackage() implementation using Bagit library from LOC. https://projects.ecoinformatics.org/ecoinfo/issues/6026
add method for publishing existing object (usually assumed to be scimeta) with a DOI. https://projects.ecoinformatics.org/ecoinfo/issues/6014
add Metacat servlet action to force the reindexing of one or more or all pids in the system. https://projects.ecoinformatics.org/ecoinfo/issues/5945
only use MapStore/MapLoader for saving/loading IndexEvent objects. No need to use a listener since there is only the single node -- all entries are persisted to DB using the hazelcast.xml config we have for the map. https://projects.ecoinformatics.org/ecoinfo/issues/5944
add MapStore/Loader test for the IndexEvents -- adding and removing events in the DB table through hazelcast. https://projects.ecoinformatics.org/ecoinfo/issues/5944
support a "force replication delete all action" during replication. This is used when we want Metacat to remove the content from the other target replicas because the DataONE delete() action was called (more powerful than just "archive").
add simple test for the IndexEventDAO class -- adding, removing, listing events in the DB table. https://projects.ecoinformatics.org/ecoinfo/issues/5944
upgrade to Metacat 2.1.0 on the trunk. This includes a new index_event table for storing indexing events that need to be reprocessed. https://projects.ecoinformatics.org/ecoinfo/issues/5944
stub for storing IndexEvent objects in Metacat (from metacat-index processing). https://projects.ecoinformatics.org/ecoinfo/issues/5944
do not force a get() during refresh (causing EML-defined data access rules to be lost when inserting EML docs about data files). note that this reverses a change that was meant to trigger indexing, but now we are using a new queue to share index events with metacat-index and so should not be necessary.
do not use tmp file to return an inputstream on read() operations - just read from the file we already have. https://projects.ecoinformatics.org/ecoinfo/issues/6009
use standard File.createTempFile() method for uploaded data files and delete them when we are done with them. https://projects.ecoinformatics.org/ecoinfo/issues/6008
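A minimal sketch of that pattern with hypothetical prefix/suffix values; the temp file is removed in a finally block once processing is done:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.io.IOUtils;

public class UploadExample {

    public void handleUpload(InputStream uploadStream) throws IOException {
        // create the temp file in the default temporary-file directory
        File temp = File.createTempFile("metacat-upload-", ".tmp");
        FileOutputStream out = null;
        try {
            out = new FileOutputStream(temp);
            IOUtils.copy(uploadStream, out);
            // ... process the uploaded data here ...
        } finally {
            IOUtils.closeQuietly(out);
            // delete the temp file once we are done with it
            temp.delete();
        }
    }
}
```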
correct regex for whitespace in D1 identifier.
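The check is essentially a whitespace test on the identifier; a hedged sketch (the actual pattern used in Metacat may differ):

```java
import java.util.regex.Pattern;

public class IdentifierCheck {

    // matches any whitespace character anywhere in the identifier
    private static final Pattern WHITESPACE = Pattern.compile("\\s");

    public static boolean containsWhitespace(String pid) {
        return pid != null && WHITESPACE.matcher(pid).find();
    }
}
```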
use an independent ISet<SystemMetadata> structure to communicate objects that should be indexed by metacat-index. https://projects.ecoinformatics.org/ecoinfo/issues/5943
do not create solr-home if there is no template to copy into that directory (need to be able to create it later if/when someone decides to use and deploy metacat-index). https://projects.ecoinformatics.org/ecoinfo/issues/6006
do not attempt to copy solr-home template from metacat-index webapp if it does not exist. This would be in cases where metacat-index is not deployed. https://projects.ecoinformatics.org/ecoinfo/issues/6006
Solr will be enabled if it is in the db.enabledEngines.
do not require PortalCertificateManager be configured. Fix NPE because session was not created when using old sessionid-based authentication. https://projects.ecoinformatics.org/ecoinfo/issues/5942
handle client certificates, portal certificates and jsessionid as three ways to prove you are an authenticated user. https://projects.ecoinformatics.org/ecoinfo/issues/5942
Use some constants from EnabledQueryEngines.
Updated documentation, and added modification date to the sitemap index file entries.
Remove unused import.
Modified the Sitemap class to also generate the sitemap index file that is needed when more than one sitemap file is provided.
use ContentTypeInputStream interface (and ByteArray implementation) to specify the desired content-type of the InputStream returned by MN.query().
load the evicted SM back into the map on a "Refresh" so that listeners hear the update. (metacat-index, for example)
switch back to log4j statements now that I am sure certificate delegation is working.
use System.out.println until the oa4mp logging issue is resolved.
add logging for portal certificate look up process.
use relative path for oa4mp_client.xml (within servlet context). https://projects.ecoinformatics.org/ecoinfo/issues/5936
first pass at integrating CILogon/MyProxy certificates in Metacat. Configuration is specific to mn-demo-4.test.dataone.org for the time being (this will cause localhost deployments to fail webapp deployment). https://projects.ecoinformatics.org/ecoinfo/issues/5936
Updated Sitemap generation to use latest version of the sitemap protocol schemas.
Use the SolrQueryServiceController to get the spec version and index schema information.
Change the package of SolrQueryReponseWriterFactory and SolrQueryResponseTransformer.
Use the new query(SolrParams param) method of the SolrQueryServiceController.
Use the SolrQueryServiceController class to handle the query.
Move the code that transforms the query response into an InputStream to the metacat-common module. Remove some obsolete imports.
Move the code that generates the QueryResponseWriter to the metacat-common module so it can be shared with the metacat-index module.
organize imports
remove extra lines from returned <docid/> block. https://projects.ecoinformatics.org/ecoinfo/issues/5932
Allow use of PID instead of docid in the Perl registry. At least for reading/editing and deleting existing content. Does not create content using a pid. https://projects.ecoinformatics.org/ecoinfo/issues/5932
initialize the SOLR home directory if it does not already exist.
Only after the core is reloaded can query results reflect changes made by the metacat-index module.
Fixed a bug so that "OR" is placed correctly in the query, and removed the user "authorized_user" from the rightsHolder clause in the query.
Use the set of subjects to replace the user and groups for the solr query.
escape special XML characters when constructing a pathquery from user input (&). https://projects.ecoinformatics.org/ecoinfo/issues/3017
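A sketch of the idea, assuming Apache Commons Lang is available (a hand-rolled replacement would work equally well); the pathquery queryterm snippet is illustrative:

```java
import org.apache.commons.lang.StringEscapeUtils;

public class PathQueryExample {

    // Escape XML special characters such as & and < before embedding user input
    // in the pathquery document.
    public String buildQueryTerm(String userInput) {
        String escaped = StringEscapeUtils.escapeXml(userInput);
        return "<queryterm searchmode=\"contains\" casesensitive=\"false\">"
                + "<value>" + escaped + "</value>"
                + "</queryterm>";
    }
}
```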
adjust action=zip behavior to use full docids and entity names (data files) for the zip entry. Also uses the given qformat to render the metadata. https://projects.ecoinformatics.org/ecoinfo/issues/3816
Add the rightsHolder in the access filter.
adjust action=zip behavior to use full docids when checking for permissions/existence. https://projects.ecoinformatics.org/ecoinfo/issues/3816
Add code to handle queries against the HTTP Solr server.