implement the view service (uses existing skin-based dbtransform) - and include the REST endpoint. https://projects.ecoinformatics.org/ecoinfo/issues/6028
use StreamSource instead of StringReader for method signature -- can be used with different sources this way. https://projects.ecoinformatics.org/ecoinfo/issues/6019
clean up DBTransform in preparation for "view" service. https://projects.ecoinformatics.org/ecoinfo/issues/6019
In determining the time range, the equality check was removed.
Add code to handle failed ids.
Remove the EventLog write.
include GET /package/{pid} endpoint in MN service. https://projects.ecoinformatics.org/ecoinfo/issues/6027
Add the EventLog code.
MN.getPackage() - test with ORE that includes 2 data files and a "metadata" file (still should be using EML for that test). https://projects.ecoinformatics.org/ecoinfo/issues/6026
It will throw an exception if the subprocessor can't handle the document.
Check if all the components of a resource map have been processed before processing the resource map.
First pass at MN.getPackage() implementation using Bagit library from LOC. https://projects.ecoinformatics.org/ecoinfo/issues/6026
add method for publishing existing object (usually assumed to be scimeta) with a DOI. https://projects.ecoinformatics.org/ecoinfo/issues/6014
Fixed a bug where the event log couldn't save the real latest process date.
Change the date format. Remove the replication part of log4j.
Use a new date format.
Add a log4j properties file.
Add a file to specify log4j as the logger.
add Metacat servlet action to force the reindexing of one or more or all pids in the system. https://projects.ecoinformatics.org/ecoinfo/issues/5945
only use MapStore/MapLoader for saving/loading IndexEvent objects. No need to use a listener since there is only the single node -- all entries are persisted to DB using the hazelcast.xml config we have for the map. https://projects.ecoinformatics.org/ecoinfo/issues/5944
add MapStore/Loader test for the IndexEvents -- adding and removing events in the DB table through hazelcast. https://projects.ecoinformatics.org/ecoinfo/issues/5944
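A rough sketch of the write-through wiring described in the two entries above, assuming the Hazelcast 2.x MapStore/MapLoader interface with String keys; IndexEvent and IndexEventDAO are the Metacat classes mentioned in this log, but the DAO method names used here (getInstance, add, remove, get, getAllIdentifiers) are assumptions for illustration only:

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import com.hazelcast.core.MapStore;

    // IndexEvent and IndexEventDAO imports omitted; they live in the Metacat codebase
    // and the DAO method names below are assumed for this sketch.
    public class IndexEventMapStore implements MapStore<String, IndexEvent> {

        private final IndexEventDAO dao = IndexEventDAO.getInstance(); // assumed accessor

        // MapStore: write-through persistence to the index_event table
        public void store(String key, IndexEvent event) {
            dao.add(event);
        }

        public void storeAll(Map<String, IndexEvent> entries) {
            for (Map.Entry<String, IndexEvent> e : entries.entrySet()) {
                store(e.getKey(), e.getValue());
            }
        }

        public void delete(String key) {
            dao.remove(key);
        }

        public void deleteAll(Collection<String> keys) {
            for (String key : keys) {
                delete(key);
            }
        }

        // MapLoader: read entries back from the table
        public IndexEvent load(String key) {
            return dao.get(key);
        }

        public Map<String, IndexEvent> loadAll(Collection<String> keys) {
            Map<String, IndexEvent> result = new HashMap<String, IndexEvent>();
            for (String key : keys) {
                result.put(key, load(key));
            }
            return result;
        }

        public Set<String> loadAllKeys() {
            return new HashSet<String>(dao.getAllIdentifiers()); // assumed to return the stored ids
        }
    }

Hazelcast calls the store on every put/remove, so no separate listener is needed; the class is referenced from the map-store section of hazelcast.xml, as the entry above notes.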
support a "force replication delete all action" during replication. This is used when we want Metacat to remove the content from the other target replicas because the DataONE delete() action was called (more powerful than just "archive").
add simple test for the IndexEventDAO class -- adding, removing, listing events in the DB table. https://projects.ecoinformatics.org/ecoinfo/issues/5944
Add code so that only the ids with the correct system metadata modification time are added to the index queue.
Use the hazelcast event log.
Add code to get and set the last process date.
merging upgrade scripts from 2.0 branch to trunk. https://redmine.dataone.org/issues/3847
scripts for 2.1.0 upgrade
upgrade to Metacat 2.1.0 on the trunk. This includes a new index_event table for storing indexing events that need to be reprocessed. https://projects.ecoinformatics.org/ecoinfo/issues/5944
stub for storing IndexEvent objects in Metacat (from metacat-index processing). https://projects.ecoinformatics.org/ecoinfo/issues/5944
move IndexEvent into metacat-common. Preparation for Metacat responding to events and writing them to a persistent store. https://projects.ecoinformatics.org/ecoinfo/issues/5944
do not force a get() during refresh (causing EML-defined data access rules to be lost when inserting EML docs about data files). note that this reverses a change that was meant to trigger indexing, but now we are using a new queue to share index events with metacat-index and so should not be necessary.
do not use tmp file to return an inputstream on read() operations - just read from the file we already have. https://projects.ecoinformatics.org/ecoinfo/issues/6009
use standard File.createTempFile() method for uploaded data files and delete them when we are done with them. https://projects.ecoinformatics.org/ecoinfo/issues/6008
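For reference, a minimal sketch of the standard-library pattern the entry above refers to; the prefix/suffix names and the fallback to deleteOnExit() are illustrative, not taken from the Metacat code:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class TempUploadSketch {
        public static void main(String[] args) throws IOException {
            // create the temp file in the JVM's default temp directory
            File temp = File.createTempFile("metacat-upload-", ".dat");
            OutputStream out = new FileOutputStream(temp);
            try {
                // ... copy the uploaded request body into the temp file ...
                out.write("example upload content".getBytes("UTF-8"));
            } finally {
                out.close();
            }
            // ... hand the file to whatever stores it permanently ...
            // then delete it when we are done with it
            if (!temp.delete()) {
                temp.deleteOnExit(); // fall back if the immediate delete fails
            }
        }
    }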
correctly delete data file when we are done with it. https://projects.ecoinformatics.org/ecoinfo/issues/6007
include the filename in the file part. https://projects.ecoinformatics.org/ecoinfo/issues/6007
send the original filename in the upload() method since that is supported by the Metacat API. https://projects.ecoinformatics.org/ecoinfo/issues/6007
remove the unique string when generating data file metadata. https://projects.ecoinformatics.org/ecoinfo/issues/6007
debugging. https://projects.ecoinformatics.org/ecoinfo/issues/6007
use File::Temp to write data files in registry. https://projects.ecoinformatics.org/ecoinfo/issues/6007
correct regex for whitespace in D1 identifier.
refactor IndexEventLog a bit to simplify type/action information. prep for serializing IndexEvent objects to Metacat. https://projects.ecoinformatics.org/ecoinfo/issues/5944
remove serial number from indexeventlog - it is not used elsewhere in the api. https://projects.ecoinformatics.org/ecoinfo/issues/5944
correct spelling for index.eventlog.classname property
use an independent ISet<SystemMetadata> structure to communicate objects that should be indexed by metacat-index. https://projects.ecoinformatics.org/ecoinfo/issues/5943
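A minimal sketch of that shared structure, assuming Hazelcast's ISet API and the DataONE SystemMetadata type; the set name "hzIndexQueue" and the producer/consumer split shown here are illustrative only:

    import java.util.ArrayList;

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ISet;

    import org.dataone.service.types.v1.Identifier;
    import org.dataone.service.types.v1.SystemMetadata;

    public class IndexQueueSketch {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(null);

            // the set name is illustrative; Metacat reads the real name from its properties
            ISet<SystemMetadata> indexQueue = hz.getSet("hzIndexQueue");

            // producer side (Metacat): announce that an object needs (re)indexing
            SystemMetadata sysmeta = new SystemMetadata();
            Identifier pid = new Identifier();
            pid.setValue("doi:10.5072/example");
            sysmeta.setIdentifier(pid);
            indexQueue.add(sysmeta);

            // consumer side (metacat-index): take a snapshot, index, then remove
            for (SystemMetadata item : new ArrayList<SystemMetadata>(indexQueue)) {
                // ... build or update the solr document for item.getIdentifier() ...
                indexQueue.remove(item);
            }
        }
    }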
consolidate SystemMetadata map retrieval in preparation for using a different structure for objects to index.
adding ability to remove event from the [error] queue.
do not create solr-home if there is no template to copy into that directory (need to be able to create it later if/when someone decides to use and deploy metacat-index). https://projects.ecoinformatics.org/ecoinfo/issues/6006
do not attempt to copy solr-home template from metacat-index webapp if it does not exist. This would be in cases where metacat-index is not deployed. https://projects.ecoinformatics.org/ecoinfo/issues/6006
Add code to implement setting and getting the last processed date.
Index only those objects which were modified after the marked time.
Add set and get of the lastprocessedDate in the IndexEventLog. Remove the code that writes the successful event.
Add the dataone repository.
The "war" target will build the metacat-index.war as well.
Log the timed index jobs.
Add the code to log the failed events.
Add a temporary file log for debugging.
Use commons-io 2.4
Add a new property for the index event log class name.
Add a serial number for the event. Add a method to set events to be archived.
Add a new class variable, isArchived, to the IndexEvent class.
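For orientation, a rough sketch of what the IndexEvent bean might look like after the two entries above; only the serial number and the isArchived flag come from the log, the remaining fields are assumptions:

    import java.io.Serializable;
    import java.util.Date;

    import org.dataone.service.types.v1.Identifier;

    // Sketch of the IndexEvent bean; only serialNumber and isArchived come from the log,
    // the remaining fields are assumed for illustration.
    public class IndexEvent implements Serializable {

        private long serialNumber;      // new: database serial number for the event
        private boolean isArchived;     // new: marks events that have already been archived
        private Identifier identifier;  // assumed: the pid the event refers to
        private String action;          // assumed: e.g. create / delete
        private Date date;              // assumed: when the event happened
        private String description;     // assumed: error message or note

        public long getSerialNumber() { return serialNumber; }
        public void setSerialNumber(long serialNumber) { this.serialNumber = serialNumber; }

        public boolean isArchived() { return isArchived; }
        public void setArchived(boolean isArchived) { this.isArchived = isArchived; }

        public Identifier getIdentifier() { return identifier; }
        public void setIdentifier(Identifier identifier) { this.identifier = identifier; }

        // ... remaining getters and setters omitted ...
    }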
Update the documentation about those classes.
Add an event and event log for the index.
Use the identifier set to get the list of ids in the member node.
The returned ISet should contain Identifier objects.
Add code to test getIdentifierSet method.
Add method to get identifier set.
Add a new property to specify the interval of the Timer that runs the solr index generation thread.
Set up a Timer to run the solr index regeneration task periodically.
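A minimal sketch of the periodic regeneration described in the two entries above, using java.util.Timer; the property name and interval value are placeholders, not the real Metacat configuration:

    import java.util.Timer;
    import java.util.TimerTask;

    public class IndexRegenerationScheduler {

        public static void start(long intervalMillis) {
            TimerTask regenerate = new TimerTask() {
                public void run() {
                    // find objects whose system metadata changed since the last
                    // processed date and rebuild their solr documents
                }
            };
            // daemon timer so it does not keep the webapp from shutting down
            Timer timer = new Timer("solr-index-regeneration", true);
            timer.schedule(regenerate, 0L, intervalMillis);
        }

        public static void main(String[] args) {
            // the interval would come from the new property, e.g. (property name assumed):
            // long interval = Long.parseLong(props.getProperty("index.regenerate.interval"));
            start(60000L); // illustrative: once a minute
        }
    }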
Use ";" as the separator, replacing ",", in the resource namespaces.
Add code to handle deleting data package information when deleting a pid in the solr index.
Add two static methods to get the SystemMetadata and data object InputStream for the specified id.
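A sketch of the shape of those two static helpers, assuming DataONE's SystemMetadata and Identifier types; the class name, method names, and lookup paths here are hypothetical:

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.InputStream;

    import org.dataone.service.types.v1.Identifier;
    import org.dataone.service.types.v1.SystemMetadata;

    // Hypothetical class/method names; only the two capabilities come from the log entry.
    public class ObjectLookupSketch {

        // return the system metadata for the given pid
        public static SystemMetadata getSystemMetadata(String pid) {
            Identifier identifier = new Identifier();
            identifier.setValue(pid);
            // in Metacat this would consult the shared system metadata store
            return lookupSystemMetadata(identifier);
        }

        // return an InputStream over the object bytes for the given pid
        public static InputStream getDataObject(String pid) throws FileNotFoundException {
            // in Metacat this would resolve the pid to the document/data file on disk
            String path = resolvePath(pid);
            return new FileInputStream(path);
        }

        private static SystemMetadata lookupSystemMetadata(Identifier identifier) {
            return null; // placeholder for the real lookup
        }

        private static String resolvePath(String pid) {
            return "/var/metacat/documents/" + pid; // illustrative path only
        }
    }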
Change the code since the ApplicationController's constructor was changed.
Add code to check if the metacat.properties is available.
If solr is not enabled, it will not be running.
Solr will be enabled if it is listed in db.enabledEngines.
Use ";" to separate db.enabledEngines.
Use ";" as the separator for properties.
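A small sketch of reading the semicolon-separated property; only the db.enabledEngines key and the ";" separator come from the entries above, the example value "pathquery;solr" is illustrative:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Properties;

    public class EnabledEnginesSketch {
        public static void main(String[] args) {
            // metacat.properties would contain something like (value illustrative):
            //   db.enabledEngines=pathquery;solr
            Properties props = new Properties();
            props.setProperty("db.enabledEngines", "pathquery;solr");

            // split the value on the ";" separator
            String value = props.getProperty("db.enabledEngines", "");
            List<String> engines = Arrays.asList(value.split(";"));

            boolean solrEnabled = engines.contains("solr");
            System.out.println("solr enabled: " + solrEnabled);
        }
    }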
do not require PortalCertificateManager be configured. Fix NPE because session was not created when using old sessionid-based authentication. https://projects.ecoinformatics.org/ecoinfo/issues/5942
change the hazelcast waiting time to 10 seconds and the number of attempts to 600.
Use another thread in the Servlet init method to wait for hazelcast.
Make the target init depend on build-metacat-common.
Put the waiting mechanism for hazelcast first.
handle client certificates, portal certificates and jsessionid as three ways to prove you are an authenticated user. https://projects.ecoinformatics.org/ecoinfo/issues/5942
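A heavily simplified sketch of checking the three credential types in turn; the check order, the session attribute name, and the omitted PortalCertificateManager call are all assumptions:

    import java.security.cert.X509Certificate;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    // Sketch only: the ordering, the session attribute name, and the portal lookup are assumed.
    public class AuthenticationSketch {

        public static boolean isAuthenticated(HttpServletRequest request) {
            // (1) client certificate presented directly over SSL
            X509Certificate[] certs = (X509Certificate[])
                    request.getAttribute("javax.servlet.request.X509Certificate");
            if (certs != null && certs.length > 0) {
                return true;
            }

            // (2) portal certificate: would be looked up via PortalCertificateManager
            //     (call omitted here; its API is not shown in this log)

            // (3) legacy jsessionid: an existing session that already holds a logged-in user
            HttpSession session = request.getSession(false); // do not create a new session
            if (session != null && session.getAttribute("username") != null) { // attribute name assumed
                return true;
            }

            return false;
        }
    }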
Use some constants from the EnabledQueryEngines.
Temporarily remove the code to disable solr engine if it isn't listed in the property file.
exclude /lib/maven from the war file
If the solr engine is disabled, the metacat index will do nothing.
Use the new name of a method.
Rename a method to isEnabled.
Updated documentation, and added modification date to the sitemap index file entries.
Remove unused import.
Modified Sitemap class to also generate the sitemap index file that is needed when more than one sitemap file is provided.
Remove the junit test for an obsolete class.
Remove the obsolete class.
Add a junit test class for EnabledQueryEngines.
Add a test base class.
Add a class to represent the enabled engine list.
Change the junit version to 4.8
use ContentTypeInputStream interface (and ByteArray implementation) to specify the desired content-type of the InputStream returned by MN.query().
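A rough sketch of the interface/implementation pair the entry describes; the method names and the setter are assumptions based on the description, not the actual DataONE API:

    import java.io.ByteArrayInputStream;

    // Sketch: an InputStream that also carries the content-type it should be served with.
    interface ContentTypeInputStream {
        String getContentType();
        void setContentType(String contentType);
    }

    // Byte-array-backed implementation, e.g. for MN.query() results already held in memory.
    class ContentTypeByteArrayInputStream extends ByteArrayInputStream implements ContentTypeInputStream {

        private String contentType;

        public ContentTypeByteArrayInputStream(byte[] bytes, String contentType) {
            super(bytes);
            this.contentType = contentType;
        }

        public String getContentType() {
            return contentType;
        }

        public void setContentType(String contentType) {
            this.contentType = contentType;
        }
    }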