redirect to the registry immediately on successful login. https://projects.ecoinformatics.org/ecoinfo/issues/5951
redirect successful login to the registry page.
Add a class to compare the ids in the solr index with those in metacat.
include the result-template markup -- the perl registry throws js errors without it.
use correct path to the header template. https://projects.ecoinformatics.org/ecoinfo/issues/5951
include metacatui "skin" for use in Perl registry rendering and eventually EML/metadata rendering. https://projects.ecoinformatics.org/ecoinfo/issues/5951
use new metacatUI footer. https://projects.ecoinformatics.org/ecoinfo/issues/5951
use new metacatUI header. https://projects.ecoinformatics.org/ecoinfo/issues/5951
In addition to the getArchived() method, the getObsoletedBy() method was added to determine whether the object is archived.
Add code to handle deleted ids.
use fluid layout from metacatui/bootstrap css. https://projects.ecoinformatics.org/ecoinfo/issues/5951
remove custom classes from login form, simplify template, move js functions to separate file. prep for UI-refresh. https://projects.ecoinformatics.org/ecoinfo/issues/5951
Stubs for 'metacatui' rendering. Initial target is to support Perl registry styling, but also general structured html views on metadata. https://projects.ecoinformatics.org/ecoinfo/issues/5951
Include the 'metacatui' portion of the path to .js file. https://projects.ecoinformatics.org/ecoinfo/issues/5951
Initial copy of original Perl templates - before making any changes. https://projects.ecoinformatics.org/ecoinfo/issues/5951
Use the schedule method to start the indexing task.
use tagged/released version of d1-portal project. https://projects.ecoinformatics.org/ecoinfo/issues/5936
include mn.publish() REST endpoint handling. https://projects.ecoinformatics.org/ecoinfo/issues/6024
Add code to write the error message to the log in the itemRemoved method.
comment out the index queue call when archive() is called - I think it is causing the duplicate events for the listener. https://projects.ecoinformatics.org/ecoinfo/issues/6030
implement the view service (uses existing skin-based dbtransform) - and include the REST endpoint. https://projects.ecoinformatics.org/ecoinfo/issues/6028
use StreamSource instead of StringReader for method signature -- can be used with different sources this way. https://projects.ecoinformatics.org/ecoinfo/issues/6019
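A minimal sketch of the kind of signature change this describes (the method and parameter names below are illustrative, not the actual DBTransform API):

    import java.io.File;
    import java.io.StringReader;
    import java.io.Writer;
    import javax.xml.transform.stream.StreamSource;

    public class TransformSignatureExample {
        // Accepting a StreamSource lets callers hand in a String, a file,
        // or an InputStream without any change to the transform method.
        static void transform(StreamSource doc, String qformat, Writer out) {
            // ... apply the registered stylesheet for qformat to doc ...
        }

        static void callers(Writer out) {
            transform(new StreamSource(new StringReader("<eml/>")), "default", out);
            transform(new StreamSource(new File("/var/metacat/documents/doc.1.1")), "default", out);
        }
    }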
clean up DBTransform in preparation for "view" service. https://projects.ecoinformatics.org/ecoinfo/issues/6019
Remove the equality check when determining the time range.
Add code to handle failed ids.
Remove the EventLog write.
include GET /package/{pid} endpoint in MN service. https://projects.ecoinformatics.org/ecoinfo/issues/6027
Add the EventLog code.
MN.getPackage() - test with ORE that includes 2 data files and a "metadata" file (still should be using EML for that test). https://projects.ecoinformatics.org/ecoinfo/issues/6026
It will throw an exception if the subprocessor can't handle the document.
Check if the all components of a resource map have been processed before processing the resource map.
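A rough sketch of the check being described (names are assumptions, not the actual subprocessor code):

    import java.util.List;
    import java.util.Set;

    public class ResourceMapReadyCheck {
        // componentIds: identifiers aggregated by the resource map;
        // indexedIds: identifiers already present in the solr index.
        static boolean readyToProcess(List<String> componentIds, Set<String> indexedIds) {
            for (String id : componentIds) {
                if (!indexedIds.contains(id)) {
                    return false;  // defer the resource map until this member is indexed
                }
            }
            return true;
        }
    }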
First pass at MN.getPackage() implementation using Bagit library from LOC. https://projects.ecoinformatics.org/ecoinfo/issues/6026
add method for publishing existing object (usually assumed to be scimeta) with a DOI. https://projects.ecoinformatics.org/ecoinfo/issues/6014
Fixed a bug where the event log couldn't save the actual latest processed date.
Change the date format. Remove the replication section from the log4j configuration.
Use a new date format.
Add a log4j properties file.
Add a file to specify the log4j as the logger.
add Metacat servlet action to force the reindexing of one, several, or all pids in the system. https://projects.ecoinformatics.org/ecoinfo/issues/5945
only use MapStore/MapLoader for saving/loading IndexEvent objects. No need to use a listener since there is only the single node -- all entries are persisted to DB using the hazelcast.xml config we have for the map. https://projects.ecoinformatics.org/ecoinfo/issues/5944
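A compact sketch of the MapStore/MapLoader arrangement (the IndexEventDAO method names here are assumptions about the persistence layer backing the index_event table):

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;
    import com.hazelcast.core.MapStore;
    import org.dataone.service.types.v1.Identifier;

    // Entries put into (or removed from) the Hazelcast map configured in
    // hazelcast.xml are written through to the database; no listener is
    // needed on a single node.
    public class IndexEventMapStore implements MapStore<Identifier, IndexEvent> {

        public void store(Identifier key, IndexEvent event) {
            IndexEventDAO.getInstance().add(event);
        }

        public void storeAll(Map<Identifier, IndexEvent> entries) {
            for (IndexEvent event : entries.values()) {
                IndexEventDAO.getInstance().add(event);
            }
        }

        public void delete(Identifier key) {
            IndexEventDAO.getInstance().remove(key);
        }

        public void deleteAll(Collection<Identifier> keys) {
            for (Identifier key : keys) {
                IndexEventDAO.getInstance().remove(key);
            }
        }

        public IndexEvent load(Identifier key) {
            return IndexEventDAO.getInstance().get(key);
        }

        public Map<Identifier, IndexEvent> loadAll(Collection<Identifier> keys) {
            Map<Identifier, IndexEvent> events = new HashMap<Identifier, IndexEvent>();
            for (Identifier key : keys) {
                events.put(key, IndexEventDAO.getInstance().get(key));
            }
            return events;
        }

        public Set<Identifier> loadAllKeys() {
            return null;  // returning null skips eager key pre-loading
        }
    }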
add MapStore/Loader test for the IndexEvents -- adding and removing events in the DB table through hazelcast. https://projects.ecoinformatics.org/ecoinfo/issues/5944
support a "force replication delete all action" during replication. This is used when we want Metacat to remove the content from the other target replicas because the DataONE delete() action was called (more powerful than just "archive").
add simple test for the IndexEventDAO class -- adding, removing, listing events in the DB table. https://projects.ecoinformatics.org/ecoinfo/issues/5944
Add code so that only the ids with the correct system metadata modification time are added to the index queue.
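Roughly the guard being added, with assumed names:

    import java.util.Date;
    import org.dataone.service.types.v1.SystemMetadata;

    public class IndexQueueGuard {
        // Only queue the id when the system metadata we hold carries the
        // modification time recorded for the event; a stale copy is skipped.
        static boolean shouldQueue(Date eventModificationTime, SystemMetadata sysMeta) {
            Date actual = sysMeta.getDateSysMetadataModified();
            return actual != null && actual.equals(eventModificationTime);
        }
    }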
Use the hazelcast event log.
Add code to get and set the last process date.
merging upgrade scripts from 2.0 branch to trunk. https://redmine.dataone.org/issues/3847
scripts for 2.1.0 upgrade
upgrade to Metacat 2.1.0 on the trunk. This includes a new index_event table for storing indexing events that need to be reprocessed. https://projects.ecoinformatics.org/ecoinfo/issues/5944
stub for storing IndexEvent objects in Metacat (from metacat-index processing). https://projects.ecoinformatics.org/ecoinfo/issues/5944
move IndexEvent into metacat-common. Preparation for Metacat responding to events and writing them to a persistent store. https://projects.ecoinformatics.org/ecoinfo/issues/5944
do not force a get() during refresh (causing EML-defined data access rules to be lost when inserting EML docs about data files). note that this reverses a change that was meant to trigger indexing, but now we are using a new queue to share index events with metacat-index and so should not be necessary.
do not use tmp file to return an inputstream on read() operations - just read from the file we already have. https://projects.ecoinformatics.org/ecoinfo/issues/6009
use standard File.createTempFile() method for uploaded data files and delete them when we are done with them. https://projects.ecoinformatics.org/ecoinfo/issues/6008
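The general pattern this adopts, as a sketch (names are illustrative):

    import java.io.File;
    import java.io.IOException;

    public class TempUploadExample {
        // Write the upload to a uniquely named temp file, then make sure the
        // file is removed once Metacat has finished storing the object.
        static void handleUpload(byte[] content) throws IOException {
            File dataFile = File.createTempFile("upload-", ".dat");
            try {
                // ... stream content into dataFile and register the object ...
            } finally {
                if (!dataFile.delete()) {
                    dataFile.deleteOnExit();  // fall back if the handle is still open
                }
            }
        }
    }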
correctly delete data file when we are done with it. https://projects.ecoinformatics.org/ecoinfo/issues/6007
include filename in the filepart part. https://projects.ecoinformatics.org/ecoinfo/issues/6007
send the original filename in the upload() method since that is supported by the Metacat API. https://projects.ecoinformatics.org/ecoinfo/issues/6007
remove the unique string when generating data file metadata. https://projects.ecoinformatics.org/ecoinfo/issues/6007
debugging. https://projects.ecoinformatics.org/ecoinfo/issues/6007
use File::Temp to write data files in registry. https://projects.ecoinformatics.org/ecoinfo/issues/6007
correct regex for whitespace in D1 identifier.
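The intent, sketched in Java (the actual check may live in the Perl registry code):

    import java.util.regex.Pattern;

    public class IdentifierCheck {
        // DataONE identifiers must not contain whitespace; \s catches spaces,
        // tabs, and newlines, which a bare " " test would miss.
        private static final Pattern WHITESPACE = Pattern.compile("\\s");

        static boolean containsWhitespace(String pid) {
            return pid != null && WHITESPACE.matcher(pid).find();
        }
    }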
refactor IndexEventLog a bit to simplify type/action information. prep for serializing IndexEvent objects to Metacat. https://projects.ecoinformatics.org/ecoinfo/issues/5944
remove serial number from indexeventlog - it is not used elsewhere in the api. https://projects.ecoinformatics.org/ecoinfo/issues/5944
correct spelling for index.eventlog.classname property
use an independent ISet<SystemMetadata> structure to communicate objects that should be indexed by metacat-index. https://projects.ecoinformatics.org/ecoinfo/issues/5943
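Roughly how the shared structure might be obtained on the Metacat side (the set name is an assumption):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.ISet;
    import org.dataone.service.types.v1.SystemMetadata;

    public class IndexQueueExample {
        // Metacat adds modified SystemMetadata here; metacat-index drains the
        // set and updates solr, leaving the main system metadata map untouched.
        static void queueForIndexing(SystemMetadata sysMeta) {
            ISet<SystemMetadata> indexQueue = Hazelcast.getSet("hzIndexQueue");  // name is illustrative
            indexQueue.add(sysMeta);
        }
    }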
consolidate SystemMetadata map retrieval in preparation for using a different structure for objects to index.
adding ability to remove event from the [error] queue.
do not create solr-home if there is no template to copy into that directory (need to be able to create it later if/when someone decides to use and deploy metacat-index). https://projects.ecoinformatics.org/ecoinfo/issues/6006
do not attempt to copy solr-home template from metacat-index webapp if it does not exist. This would be in cases where metacat-index is not deployed. https://projects.ecoinformatics.org/ecoinfo/issues/6006
Add code to implement setting and getting the last processed date.
Index only those objects that were modified after the marked time.
Add set and get of the last processed date in the IndexEventLog. Remove the code that writes the successful event.
Add the dataone repository.
The "war" target will build the metacat-index.war as well.
Log the timed index jobs.
Add the code to log the failed events.
Add a temporary file log for debugging.
Use commons-io 2.4
Add a new property for the index event log class name.
Add a serial number for the event. Add a method to mark events as archived.
Add a new class variable, isArchived, to the IndexEvent class.
Update the documentation about those classes.
Add an event and an event log for the index.
Use the identifier set to get the list of ids in the member node.
The returned ISet should contain Identifier objects.
Add code to test getIdentifierSet method.
Add method to get identifier set.
Add a new property to specify the interval of a Timer to run the thread generating solr index.
Set up a Timer to periodically run the task that regenerates the solr index.
Use ";" instead of "," as the separator in the resource namespaces.
Add code to delete the data package information when a pid is deleted from the solr index.
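A sketch of the scheduling described (property name and task wiring are assumptions):

    import java.util.Timer;
    import java.util.TimerTask;

    public class RegenerateIndexScheduler {
        // Re-run the index regeneration task at the interval given by the new
        // property (assumed to be milliseconds), on a daemon timer thread.
        static void start(long intervalMillis, final Runnable regenerateTask) {
            Timer timer = new Timer("SolrIndexRegenerate", true);
            timer.schedule(new TimerTask() {
                public void run() {
                    regenerateTask.run();
                }
            }, intervalMillis, intervalMillis);
        }
    }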
Add two static methods to get the SystemMetadata and data object InputStream for the specified id.
Change the code since the ApplicationController's constructor was changed.
Add code to check if the metacat.properties is available.