Remove the bean named eml.fileID which used the ResolveSolrField class.
calculate geohash_3 to three places (fixes a typo)
use NSEW for the bounding box geohash calculation from EML - all versions
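For reference: the idea is to reduce the EML bounding box (N, S, E, W) to its center point and hash that. A minimal sketch, assuming the ch.hsr.geohash library; everything outside that library's GeoHash class is illustrative:

    import ch.hsr.geohash.GeoHash;

    public class GeohashSketch {
        // reduce the EML bounding box (decimal degrees) to its center point
        static String geohash(double north, double south, double east, double west,
                              int places) {
            double lat = (north + south) / 2.0;
            double lon = (east + west) / 2.0;
            // e.g. places = 3 yields the geohash_3 value
            return GeoHash.withCharacterPrecision(lat, lon, places).toBase32();
        }
    }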
Using 1.3.0-SNAPSHOT from d1_cn_index_processor
Add beans to support geohashes
handle null Boolean in SM.archived field
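SystemMetadata.getArchived() returns a Boolean that is null when the flag was never set, so the null case has to be read as "not archived". A minimal sketch of the guard:

    import org.dataone.service.types.v1.SystemMetadata;

    public class ArchivedGuard {
        // treat a missing archived flag as false instead of unboxing null
        static boolean isArchived(SystemMetadata sysmeta) {
            Boolean archived = sysmeta.getArchived();
            return archived != null && archived.booleanValue();
        }
    }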
use Matthew Jones as the test creator since he has an ORCID in the ORCID staging environment.
augment annotation indexing test/sample to include orcid annotation. https://projects.ecoinformatics.org/ecoinfo/issues/6267 https://projects.ecoinformatics.org/ecoinfo/issues/6423
include characteristic_sm field with SPARQL query
switch to indexing the standard, since it is more likely we will be able to determine this from our existing EML attribute information. https://projects.ecoinformatics.org/ecoinfo/issues/6253
Do a more thorough check that the characteristic annotation was successfully indexed as expected (semtools)
switch to the OpenAnnotation (OA) model for annotating datapackages with measurements/characteristics (semtools)
bump the poms to 2.4.2
test that obsoleted objects remain indexed, but are marked as obsoleted. https://projects.ecoinformatics.org/ecoinfo/issues/6424
use rangeOfDates | singleDateTime to populate the beginDate and endDate index fields. https://projects.ecoinformatics.org/ecoinfo/issues/6285
include the ID field as a minimum when indexing additional fields.
correctly include the stack trace for error debugging.
return null if there is no existing SolrDoc for the given pid.
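A null return gives callers a clean "not indexed" signal instead of an exception. A minimal sketch against the SolrJ API of that era; the id field name is an assumption:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrDocumentList;

    public class ExistingDocLookup {
        // null signals that no document exists for the pid
        static SolrDocument getExistingDoc(SolrServer solr, String pid) throws Exception {
            SolrQuery query = new SolrQuery("id:\"" + pid + "\"");
            SolrDocumentList results = solr.query(query).getResults();
            return results.isEmpty() ? null : results.get(0);
        }
    }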
index singleDateTime value into both begin and end date fields in solr. https://projects.ecoinformatics.org/ecoinfo/issues/6285
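The single value simply bounds the coverage on both ends. A sketch of the logic with illustrative names:

    import java.util.Date;

    public class CoverageDates {
        // a singleDateTime populates both beginDate and endDate;
        // otherwise the rangeOfDates endpoints are used
        static Date[] coverageDates(Date singleDateTime, Date rangeBegin, Date rangeEnd) {
            if (singleDateTime != null) {
                return new Date[] { singleDateTime, singleDateTime };
            }
            return new Date[] { rangeBegin, rangeEnd };
        }
    }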
uncomment the original tests now that the "field" test is working.
check for existing index document before trying to use existing fields.
allow indexing of RDF documents - provide a sparql query that will return values for the field name. Using measurement_sm initially (a dynamic multivalued solr field). https://projects.ecoinformatics.org/ecoinfo/issues/6253
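The convention is that the variable selected by the query doubles as the solr field name, so adding a field means adding a query. A minimal sketch using the Apache Jena API; the query, predicate, and file name are illustrative rather than the shipped configuration:

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.ResultSet;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class SparqlFieldSketch {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            model.read("annotation.rdf");  // the RDF document being indexed

            // the selected variable name matches the dynamic solr field
            String sparql = "SELECT ?measurement_sm WHERE { "
                    + "?s <http://example.org/measurement> ?measurement_sm }";

            QueryExecution qe = QueryExecutionFactory.create(sparql, model);
            try {
                ResultSet results = qe.execSelect();
                while (results.hasNext()) {
                    // each binding becomes one entry in the multivalued field
                    System.out.println(results.next().get("measurement_sm"));
                }
            } finally {
                qe.close();
            }
        }
    }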
check for existing documents - don't assume it exists.
Unify solr indexing with an IndexTask that is added to the queue -- allows us to send more than just the systemMetadata to the indexer. Initially this is for READ event counts for each document. https://projects.ecoinformatics.org/ecoinfo/issues/6346
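The queue item bundles the system metadata with whatever extra fields the indexer should merge in. A hypothetical sketch of its shape; the field-map layout is an assumption:

    import java.io.Serializable;
    import java.util.List;
    import java.util.Map;
    import org.dataone.service.types.v1.SystemMetadata;

    // sketch of a queue item carrying more than system metadata,
    // e.g. the READ event count for a document
    public class IndexTask implements Serializable {
        private final SystemMetadata systemMetadata;
        private final Map<String, List<Object>> fields;

        public IndexTask(SystemMetadata systemMetadata, Map<String, List<Object>> fields) {
            this.systemMetadata = systemMetadata;
            this.fields = fields;
        }

        public SystemMetadata getSystemMetadata() { return systemMetadata; }
        public Map<String, List<Object>> getFields() { return fields; }
    }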
move metacat trunk to 2.4.0-SNAPSHOT
prep for 2.3.1 release
use 2.3.0 without SNAPSHOT pre-release.
Renamed the test class.
Rename the IndexGenerator to IndexGeneratorTimerTask.
Fixed a bug where the solr index for a metadata object still kept the "documents" element after one of its data files was archived.
made the delete method synchronized.
If an object is archived, its solr index will be removed.
use 2.3.0 for this next release of metacat.
make sure all versions are using 2.2.2 of some sort -- thinking of making this release a 2.3.0 release because we will be branching/tagging from the trunk, not the 2.2.x branch.
Use the setting from the metacat-common component.
Use the d1_cn_index_processor 1.2.0 version.
Remove those files; they will come from the d1_cn_index_processor 1.2.0 jar.
combine the index code for failed ids and other ids.
Clean up the code.
The IndexGenerator will index the obsoleted data objects as well.
Remove the obsoletes chain from the update method in the SolrIndex class.
When an object is archived, the solr index will not be removed.
merge from 2.2 branch: remove the index queue item when it is being processed. https://projects.ecoinformatics.org/ecoinfo/issues/6117
Add a patch for the d1_cn_index_processor 1.1.2 version so it can index taxon information. Those files will overwrite the ones in d1_cn_index_processor-1.1.2.jar.
Refer to metacat.war deployments since those are now the default. https://projects.ecoinformatics.org/ecoinfo/issues/6082
remove any index event errors if the pid has successfully been reindexed. https://projects.ecoinformatics.org/ecoinfo/issues/6089
Change the parameter order of the constructor; we may reuse some code from d1_cn_processor.
Add a plugin to copy the solr-home from the metacat-common to the target/classes.
Move the solr-home to metacat-common.
Change the version to 2.2.0-SNAPSHOT and the dependency version of metacat-common to 2.2.0-SNAPSHOT.
[merge from branch to keep trunk up to date with upgrade history] prep for Metacat 2.1.1 release
Remove the file; it will be obtained from D1.
Modified the documentation.
Add a junit test to test resourcemap subprocessor.
Use the ResourceMapException when a component of a resource map isn't found in the solr index.
Add a ResourceMapException.
Add the property of dataone.hazelcast.location.clientconfig.
Make the getSolrindex method public.
change the configuration path.
Change the configuration path according to the class change.
Use Spring's classpath configuration in place of the file-based configuration so we can reuse the application context files in the d1_cn_index_processor jar.
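Resolving the context from the classpath, rather than from a filesystem path, is what makes the XML packaged inside the d1_cn_index_processor jar reusable. A minimal sketch with an illustrative file name:

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class ContextSketch {
        public static void main(String[] args) {
            // classpath lookup also finds resources packaged inside jars,
            // unlike FileSystemXmlApplicationContext
            ApplicationContext context =
                    new ClassPathXmlApplicationContext("index-processor-context.xml");
        }
    }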
Remove the application context files (except the resource map one); the ones in d1_cn_index_processor will be used instead.
Add a new property for the log class name.
Add a constructor.
Remove the constructor.
use v2.1.0 for all metacat release components for consistency
remove all -SNAPSHOT artifacts in favor of released versions in preparation for Metacat v2.1.0 release
fixed a bug where the setup method deleted a result file.
Add a method to count how many documents are in a specified solr server.
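Counting requires fetching no documents at all: ask for zero rows and read numFound. A minimal SolrJ sketch:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;

    public class DocumentCounter {
        // match everything, return no rows; numFound carries the total
        static long countDocuments(SolrServer solr) throws Exception {
            SolrQuery query = new SolrQuery("*:*");
            query.setRows(0);
            return solr.query(query).getResults().getNumFound();
        }
    }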
Remove a logFile method.
use the v1.1.2 d1-cn-index-processor
use the v1.1.x branch ResourceMap class for metacat-index
Exceptions will now be caught during the loop that deletes the solr index.
Remove the code to write some debug information into a temporary file.
Use the ResourceMapFactory rather than the ResourceMap constructor to build a resource map.
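The factory parses the ORE document and returns the documents/documentedBy relationships directly. A sketch, assuming the parseResourceMap signature from d1_libclient_java:

    import java.io.InputStream;
    import java.util.List;
    import java.util.Map;
    import org.dataone.ore.ResourceMapFactory;
    import org.dataone.service.types.v1.Identifier;

    public class ResourceMapParsing {
        // resourceMapId -> (metadataId -> documented data ids); signature assumed
        static Map<Identifier, Map<Identifier, List<Identifier>>> parse(InputStream ore)
                throws Exception {
            return ResourceMapFactory.getInstance().parseResourceMap(ore);
        }
    }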
rollback: we do want to use 1.2.0-SNAPSHOT from d1 indexing.
Write the ids from metacat into a temporary file.
use tagged version of cn-index-processor library
Move a file to the temp dir.
Add a method to write ids which will be indexed into a file.
Fixed a bug where a missed id was not written to the file.
Add a class to compare the ids in solr and metacat.
Besides the getArchived() method, the getObsoletedBy() method is now also used to determine whether an object is archived.
Add code to handle deleted ids.
Use the schedule method to start the index.
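IndexGeneratorTimerTask extends java.util.TimerTask, so a plain Timer can drive the periodic runs. A sketch; the no-argument construction and the delay/period values are assumptions:

    import java.util.Timer;

    public class IndexScheduler {
        public static void main(String[] args) {
            // daemon timer so it will not block JVM shutdown
            Timer timer = new Timer("index-generator", true);
            // no-arg construction assumed for brevity; values are illustrative
            timer.schedule(new IndexGeneratorTimerTask(), 0L, 24L * 3600L * 1000L);
        }
    }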
Add code to write the error message to the log in the itemRemoved method.
In determining the time range, the equality check was removed.
Add code to handle failed ids.
Remove the EventLog write.
Add the EventLog code.
It will throw an exception if the subprocessor can't handle the document.
Check that all components of a resource map have been processed before processing the resource map.
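If any component pid is missing from the index, the subprocessor bails out so the resource map can be retried later. A hypothetical sketch of the guard; the exception type and both parameters are illustrative:

    import java.util.List;
    import java.util.Set;

    public class ComponentCheck {
        // hypothetical guard: every component pid must already be indexed,
        // otherwise defer processing of the resource map
        static void checkComponents(List<String> componentIds, Set<String> indexedIds)
                throws Exception {
            for (String pid : componentIds) {
                if (!indexedIds.contains(pid)) {
                    throw new Exception("Component not indexed yet: " + pid);
                }
            }
        }
    }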
Fixed a bug where the event log couldn't save the real latest process date.
Change the date format. Remove the replication part of log4j.
Use a new date format.
Add a log4j properties file.