Use the new query(SolrParams param) method of the SolrQueryServiceController.
Use the SolrQueryServiceController class to handle the query.
Move the code that transformed the query response into an InputStream to the metacat-common module. Remove some obsolete imports.
Move the code that generates the QueryResponseWriter to the metacat-common module so it can be shared with the metacat-index module.
organize imports
use correct test for PID param.
remove extra lines from returned <docid/> block. https://projects.ecoinformatics.org/ecoinfo/issues/5932
Allow use of PID instead of docid in the Perl registry. At least for reading/editing and deleting existing content. Does not create content using a pid. https://projects.ecoinformatics.org/ecoinfo/issues/5932
initialize the SOLR home directory if it does not already exist.
Query results reflect changes made in the metacat-index module only after the core is reloaded.
Fixed a bug so that "OR" is placed correctly in the query, and removed the user "authorized_user" from the rightsholder clause in the query.
Use the set of subjects in place of the user and groups for the solr query.
escape special XML characters when constructing a pathquery from user input (&). https://projects.ecoinformatics.org/ecoinfo/issues/3017
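As a rough sketch of the escaping involved (the helper name is hypothetical; Metacat may rely on an existing utility instead):

    // Hypothetical helper: escape characters that are not legal as literal text
    // in XML before embedding user-supplied search terms in a pathquery document.
    public static String escapeXml(String input) {
        if (input == null) {
            return null;
        }
        return input
            .replace("&", "&amp;")   // ampersand must be replaced first
            .replace("<", "&lt;")
            .replace(">", "&gt;")
            .replace("\"", "&quot;")
            .replace("'", "&apos;");
    }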
adjust action=zip behavior to use full docids and entity names (data files) for the zip entry. Also uses the given qformat to render the metadata. https://projects.ecoinformatics.org/ecoinfo/issues/3816
Add the rightsHolder in the access filter.
adjust action=zip behavior to use full docids when checking for permissions/existence. https://projects.ecoinformatics.org/ecoinfo/issues/3816
Add code to handle queries for the HTTP solr server.
Use a new class to handle the solr query engine description request.
Add double quotes to surround the user or group names in the access fq. This fixes the issue where names contain whitespace.
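A minimal sketch of the idea, assuming a set of subject strings; the field name and method are illustrative, not Metacat's actual code:

    import java.util.Set;

    // Build a Solr filter query matching documents readable by any of the given
    // subjects, quoting each name so values containing spaces parse as a single
    // term, e.g. readPermission:"CN=Some User,DC=dataone,DC=org".
    public static String buildAccessFilter(Set<String> subjects) {
        StringBuilder fq = new StringBuilder();
        for (String subject : subjects) {
            if (fq.length() > 0) {
                fq.append(" OR ");
            }
            fq.append("readPermission:\"").append(subject).append("\"");
        }
        return fq.toString();
    }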
Add the access query filter.
Allow use of server-side XSLT for SOLR queries that include "wt=<qformat>". https://projects.ecoinformatics.org/ecoinfo/issues/5812
Use 2.0.7 version number in configuration/upgrade/docs (trunk, even though we will not be releasing 2.0.7 from trunk, we want to have the upgrade scripts included here)
Allow null SM.submitter (per schema). There were null values in cn-dev (and probably elsewhere, since it is technically allowed in the schema). But with a null value, we need to have a null Subject for the SM.submitter field, not a Subject with a null getValue() return. Encountered this when testing for: https://projects.ecoinformatics.org/ecoinfo/issues/5929.
add space to prevent syntax error when additional clause is appended. https://projects.ecoinformatics.org/ecoinfo/issues/5929.
Change the replication 'update' query to use a LEFT JOIN so that the performance of the replication update action is improved; it had been causing HTTP timeouts for large Metacat installations. See https://projects.ecoinformatics.org/ecoinfo/issues/5929.
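The general pattern, with invented table names purely for illustration: a NOT IN subquery that is re-evaluated per row is rewritten as a LEFT JOIN filtered on NULL, which the database can drive from join-column indexes.

    // Illustrative only -- the real replication 'update' query and its tables differ.
    public static String replicationUpdateSql() {
        // before: "SELECT docid FROM xml_documents
        //          WHERE docid NOT IN (SELECT docid FROM already_replicated)"
        return "SELECT d.docid FROM xml_documents d "
             + "LEFT JOIN already_replicated r ON d.docid = r.docid "
             + "WHERE r.docid IS NULL";
    }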
Add the code to read the index field information from the schema.xml.
Add code to handle the solr index information. We still need to figure out how to get the information.
Add the solr engine to the engine list.
use maven to manage most jar dependencies in Metacat. Exceptions include: LSID, Data Manager (EML),
Add code to handle solr query.
Add a class to handle solr query.
Remove those obsolete index classes.
Merging the METACAT_2_0_6_BRANCH changes for [M|C]NodeService into the trunk.
allow verification date to be updated for replicas (patch from Skye). https://redmine.dataone.org/issues/3699
select only distinct guids (synch may have failed more than once for any given guid) https://redmine.dataone.org/issues/3539
include xml_revisions. Do not allow removal of server_location = 1 documents (these are not replicas). https://redmine.dataone.org/issues/3539
include size and format DataCite elements (optional) and use more general resourceType without formatId in them (Dataset/metadata and Dataset/data). http://schema.datacite.org/meta/kernel-2.2/doc/DataCite-MetadataKernel_v2.2.pdf
look up the title for EML files when registering DOIs. Look up the creator from the DataONE CN (if available). Add an EML-based test. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5513
Set the session to null so that the call uses the CN certificate when calling MN.systemMetadataChanged();
To keep all nodes up to date with regard to system metadata changes, add the broadcastSystemMetadataChange() method that finds replica MNs in the node list and calls systemMetadataChanged(). Modify setReplicationStatus() and updateReplicationMetadata() to fire this off when a replica status changes to completed. We may decide to inform MNs at other times too, but this is a conservative amount of calls going to the MNs for now.
refactor DOI registration into separate class. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5513
refactor using ezid-client changes that split field names and values into separate enums. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5513
Correctly mint and register DOIs in the MN API implementation. Add tests to exercise minting and creating. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5513
register DOIs with minimal DataCite metadata. Still need to determine which details to include and when, but the plumbing is in place as we refine those rules. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5513
class for removing failed/invalid replicas from target nodes that previously held replicated content (KNB/LTER/PISCO/etc). https://redmine.dataone.org/issues/3539
disable EZID/DOI minting by default since we do not yet have a means of tracking minted DOIs and augmenting metadata for them when we actually receive the object in a subsequent create() or update() call. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5753
tweak to pathquery/generic xpath handling
ready Metacat for 2.0.6 release (docs, db version, build files etc).
group user_owner clause as "AND (... OR .... OR ....)" to handle multiple pathquery <owner> elements. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5880
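A sketch of the grouping, with an illustrative column name and without the bind-parameter handling real code would use:

    import java.util.List;

    // Combine multiple <owner> values into one parenthesized OR group so the
    // surrounding AND applies to the whole set, not just the first owner,
    // e.g. " AND (user_owner = 'a' OR user_owner = 'b')".
    public static String buildOwnerClause(List<String> owners) {
        if (owners == null || owners.isEmpty()) {
            return "";
        }
        StringBuilder clause = new StringBuilder(" AND (");
        for (int i = 0; i < owners.size(); i++) {
            if (i > 0) {
                clause.append(" OR ");
            }
            clause.append("user_owner = '").append(owners.get(i)).append("'");
        }
        return clause.append(")").toString();
    }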
accidentally added
typo
Search and indexing with Lucene/SOLR. Requires a manually configured SOLR installation. Not currently used by the rest of metacat.
generate ID from UUID. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5840
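A minimal sketch of UUID-based identifier generation (the optional prefix handling is illustrative):

    import java.util.UUID;

    // Generate an opaque identifier from a random UUID, optionally
    // prefixed with a caller-supplied scheme or fragment.
    public static String generateUuidIdentifier(String prefix) {
        String uuid = UUID.randomUUID().toString();
        return (prefix == null || prefix.isEmpty()) ? uuid : prefix + uuid;
    }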
make sure serial version is included or set on MN.update(). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5793
Quick fix for bad handling of non-default data/backup directories.
remove indexing task from the queue when we are updating the document
move DocInfo parsing into utilities project so that it can be used by Morpho as well as Metacat. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5737
use default count = 1000 for CN.listObjects rather than -1 (because now -1 will cause an SQL error)
default replicaStatus to true for the CN.listObjects call
make sure to call lock() on the SM when updating rightsholder (like every other method that gets a lock object from HZ).
CN.search() is not implemented by metacat -- making that explicit and also testing for it.
default replicaStatus (aka "show replicas in results") to true rather than false
add debug statements for listObject slice debugging
additional db indexes for pathquery performance http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
Do not set headers until response is ready to send (5756)
use dual query for query slicing - one for count, another for the actual records when requested. https://redmine.dataone.org/issues/3065
get total (or subtotal when non-slicing params are present) count as a separate query from the field selection query.
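A sketch of that dual-query approach; the table, column, and helper here are illustrative rather than Metacat's actual code:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical sketch: a cheap COUNT(*) supplies the slice 'total', and the
    // more expensive record query runs only when records were actually requested.
    public static int countThenFetch(Connection conn, int start, int count) throws SQLException {
        int total;
        try (PreparedStatement st = conn.prepareStatement("SELECT count(*) FROM systemmetadata");
             ResultSet rs = st.executeQuery()) {
            rs.next();
            total = rs.getInt(1);
        }
        if (count > 0) {
            try (PreparedStatement st = conn.prepareStatement(
                    "SELECT guid FROM systemmetadata ORDER BY guid LIMIT ? OFFSET ?")) {
                st.setInt(1, count);
                st.setInt(2, start);
                try (ResultSet rs = st.executeQuery()) {
                    while (rs.next()) {
                        // build the result entries from rs.getString("guid") ...
                    }
                }
            }
        }
        return total;
    }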
include Skye's suggestions about correctly limiting by D1 Event types
first pass at DOI minting using the EZID service in mn.generateIdentifier() http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5755
Fix a minor bug in listObjects() where total was set incorrectly when count=0. The definition of total in the D1 architecture docs says 'The total number of entries in the source list from which the slice was extracted.' With count=0, we assume the total is the total count from the entire object store. Needs testing.
remove empty package
rollback the delete() when there is an error performing part of it -- don't want to end up with partial delete.
use Identifier object not String when retrieving SM from the HZ map to set archived during delete()
for MN.update() we needed to pass the original pid, not the new pid
do not reject any schemes -- all handled the same at the moment.
simple autogen-based implementation of MN.generateIdentifier(). does not support DOIs, ARKs, etc. It does support including a fragment, returning an identifier like "<fragment>.2012113010215298206"
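A sketch of that autogen scheme; the exact timestamp format is illustrative:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Hypothetical sketch: append a timestamp-based suffix to an optional
    // caller-supplied fragment, yielding ids like "<fragment>.20121130102152982".
    public static String generateAutogenIdentifier(String fragment) {
        String suffix = new SimpleDateFormat("yyyyMMddHHmmssSSS").format(new Date());
        return (fragment == null || fragment.isEmpty()) ? suffix : fragment + "." + suffix;
    }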
add link for reference on how to do record limits in oracle
limit /log and /object calls to configurable maximum count for paging. defaults to existing Metacat value of 7000
use RDBMS-specific features to limit the resultset for paging the object list -- postgres and oracle have implementations. we don't really support mssql so I skipped that one.
use RDBMS-specific features to limit the resultset for paging -- postgres and oracle have implementations. we don't really support mssql so I skipped that one.
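The general shape of the two variants (the base query is illustrative, not the exact Metacat statement):

    // Hypothetical helper choosing an RDBMS-specific paging clause for the object list.
    public static String pagedObjectQuery(String dbType, int start, int count) {
        String base = "SELECT guid, date_modified FROM systemmetadata ORDER BY guid";
        if ("oracle".equals(dbType)) {
            // Oracle has no LIMIT; wrap the query and filter on ROWNUM instead.
            return "SELECT * FROM (SELECT q.*, ROWNUM rnum FROM (" + base
                 + ") q WHERE ROWNUM <= " + (start + count) + ") WHERE rnum > " + start;
        }
        // Postgres supports LIMIT/OFFSET directly.
        return base + " LIMIT " + count + " OFFSET " + start;
    }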
include debug msg about removing docid from index queue. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5750
remove document from the indexing queue when delete is called. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5750
clean up index queue code before tackling index/delete race condition. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5750
no need to mark SM as archived now that DocumentImpl.delete() does it. https://redmine.dataone.org/issues/3406
mark documents as archived=true when they are deleted using the Metacat API. https://redmine.dataone.org/issues/3406
look up the archived value when retrieving SystemMetadata record. https://redmine.dataone.org/issues/3405
surround returned query in CDATA to prevent parsing of xml within xml
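Roughly, the wrapping looks like this; note that any literal "]]>" in the query text has to be split so it cannot terminate the section early (helper name is hypothetical):

    // Hypothetical helper: wrap the echoed query text in a CDATA section so the
    // embedded XML is returned verbatim instead of being parsed with the response.
    public static String wrapInCdata(String query) {
        String safe = query.replace("]]>", "]]]]><![CDATA[>");
        return "<![CDATA[" + safe + "]]>";
    }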
In migrating to Hazelcast 2.4.x, replace deprecated methods. Use Hazelcast.newHazelcastInstance() rather than Hazelcast.init(). For other deprecated static methods, use the HazelcastInstance equivalent calls.
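Roughly, the replacement looks like this (configuration loading and the map name are simplified placeholders):

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    // Sketch of the Hazelcast 2.x style: create an instance explicitly and call
    // members on it instead of the deprecated static Hazelcast.* methods.
    public class HazelcastMigrationSketch {
        public static void main(String[] args) {
            Config config = new Config();   // real code loads the hazelcast.xml configuration
            HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
            // old style: Hazelcast.getMap("hzSystemMetadata")
            IMap<String, String> map = hz.getMap("hzSystemMetadata");
            map.put("key", "value");
            hz.getLifecycleService().shutdown();
        }
    }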
In CNodeService.updateReplicationMetadata(), we are setting the replicaVerifiedDate() when we update or wholesale add a new replica. However, in setReplicationStatus(), we only do so when there's a new entry. Change setReplicationStatus() to also update the replicaVerifiedDate on updates of existing entries to be more consistent with other changes. This affects node prioritization based on this date timestamp. Thanks to Skye for pointing this out.
To attempt to address performance and stability WRT Hazelcast communication, we're upgrading to the 2.x series of Hazelcast. remove the 1.9.x jar files, and add the 2.4.1-SNAPSHOT jars. Modify HazelcastService to handle the minor change in the ItemListener interface (now passes ItemEvent<Identifier> as an argument)....
implement query description for pathquery -- only tells callers about the pre-indexed paths we have in Metacat since there are an infinite number of "fields" when storing arbitrary XML, but we really don't want people using non-indexed paths for performance reasons anyway. I've typed all the fields as String, even though some are not just strings and can be used for numeric or date comparisons.
Implement MNQuery for "pathquery" engine. Optionally include guid in the pathquery results (https://redmine.dataone.org/issues/3083)
update pub_date when the length of that field is != 4 (use date_created in this scenario). There were 2 entries that had "193" as the pub_date.
replace new lines in creator with spaces. set blank " " titles and creators to "unknown". use "Baltimore Ecosystem Study LTER" for publisher on all BES objects.
include John Kunze's latest suggestions for improved metadata -- a lot of clean-up, especially on characters in the file. Note UTF-8 encoding of the script.
use ObjectFormatInfo libclient utility to look up mimeType and filename extension during get() calls. Configurable mapping file is deployed by default to /var/metacat/dataone where it can then be augmented as needed. This location is controlled in the metacat.properties file (which is injected into the DataONE Settings values during webapp initialization)....
add count for the total processed pids (from ISet iterator)
handle /object?count=0 queries using simpler (quicker) sql https://redmine.dataone.org/issues/3065
allow getlog action to use docid parameters that do not include revision. In these cases, the latest revision will be used.
handle case where we do not have a pathexpr to check http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696