CN.search() is not implemented by Metacat -- making that explicit and also testing for it.
default replicaStatus (aka "show replicas in results") to true rather than false
add debug statements for listObjects() slice debugging
Do not set headers until response is ready to send (5756)
use dual query for query slicing - one for count, another for the actual records when requested. https://redmine.dataone.org/issues/3065
get total (or subtotal when non-slicing params are present) count as a separate query from the field selection query.
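A minimal sketch of the dual-query slicing described above, assuming illustrative table and column names (not Metacat's actual schema):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    class SliceQuerySketch {
        // total is returned via totalOut[0]; all names here are assumptions
        List<String> listGuids(Connection conn, int start, int count, int[] totalOut)
                throws SQLException {
            // query 1: total (or subtotal when other filters apply), no slicing
            PreparedStatement countStmt =
                    conn.prepareStatement("SELECT count(*) FROM systemmetadata");
            ResultSet countRs = countStmt.executeQuery();
            countRs.next();
            totalOut[0] = countRs.getInt(1);

            // query 2: the field-selection query, run only for a non-empty slice
            List<String> guids = new ArrayList<String>();
            if (count > 0) {
                PreparedStatement fieldStmt = conn.prepareStatement(
                        "SELECT guid FROM systemmetadata ORDER BY guid"
                        + " LIMIT ? OFFSET ?");
                fieldStmt.setInt(1, count);
                fieldStmt.setInt(2, start);
                ResultSet rs = fieldStmt.executeQuery();
                while (rs.next()) {
                    guids.add(rs.getString(1));
                }
            }
            return guids;
        }
    }

The count query stays cheap because it selects no fields and applies no slice, which also covers the count=0 case without touching the record query.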
include Skye's suggestions about correctly limiting by D1 Event types
first pass at DOI minting using the EZID service in mn.generateIdentifier(). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5755
Fix a minor bug in listObjects() where total was set incorrectly when count=0. The definition of total in the D1 architecture docs says 'The total number of entries in the source list from which the slice was extracted.' With count=0, we assume the total is the total count from the entire object store. Needs testing.
remove empty package
rollback the delete() when there is an error performing part of it -- we don't want to end up with a partial delete.
use Identifier object, not String, when retrieving SM from the HZ map to set archived during delete()
for MN.update() we needed to pass the original pid, not the new pid
do not reject any schemes -- all handled the same at the moment.
simple autogen-based implementation of MN.generateIdentifier(). does not support DOIs, ARKs, etc. It does support including a fragment, returning an identifier like "<fragment>.2012113010215298206"
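A minimal sketch of the autogen scheme, assuming a timestamp format and collision salt that match the shape of the example identifier above:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    class AutogenIdSketch {
        String generateIdentifier(String fragment) {
            // timestamp-based suffix; the exact format and the two extra
            // digits are assumptions inferred from the example identifier
            String stamp = new SimpleDateFormat("yyyyMMddHHmmssSSS").format(new Date());
            String salt = String.format("%02d", (int) (Math.random() * 100));
            String autogen = stamp + salt;
            return (fragment == null) ? autogen : fragment + "." + autogen;
        }
    }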
add link for reference on how to do record limits in oracle
limit /log and /object calls to configurable maximum count for paging. defaults to existing Metacat value of 7000
use RDBMS-specific features to limit the resultset for paging the object list -- postgres and oracle have implementations. we don't really support mssql so I skipped that one.
use RDBMS-specific features to limit the resultset for paging -- postgres and oracle have implementations. we don't really support mssql so I skipped that one.
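A minimal sketch of the adapter-specific SQL, with assumed table/column names; Postgres takes LIMIT/OFFSET directly, while Oracle needs the nested-ROWNUM idiom (see the reference link above):

    class PagingSqlSketch {
        String pagedGuidSql(String dbAdapter, int start, int count) {
            if ("oracle".equals(dbAdapter)) {
                // ROWNUM is assigned before ORDER BY, so the ordered query
                // must be nested and the row window selected outside it
                return "SELECT guid FROM ("
                     + " SELECT a.guid, ROWNUM rnum FROM ("
                     + "  SELECT guid FROM systemmetadata ORDER BY guid"
                     + " ) a WHERE ROWNUM <= " + (start + count)
                     + ") WHERE rnum > " + start;
            }
            // postgres supports LIMIT/OFFSET directly; the ORDER BY keeps
            // pages stable across requests
            return "SELECT guid FROM systemmetadata ORDER BY guid"
                 + " LIMIT " + count + " OFFSET " + start;
        }
    }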
include debug msg about removing docid from index queue. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5750
remove document from the indexing queue when delete is called. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5750
clean up index queue code before tackling index/delete race condition. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5750
no need to mark SM as archived now that DocumentImpl.delete() does it. https://redmine.dataone.org/issues/3406
mark documents as archived=true when they are deleted using the Metacat API. https://redmine.dataone.org/issues/3406
look up the archived value when retrieving SystemMetadata record. https://redmine.dataone.org/issues/3405
surround returned query in CDATA to prevent parsing of XML within XML
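A minimal sketch of the CDATA wrapping (the helper name is hypothetical):

    class CdataSketch {
        // wrap embedded XML so the outer parser treats it as character data
        String wrapInCdata(String innerXml) {
            // caveat: content containing "]]>" would need to be split across
            // two CDATA sections; omitted here for brevity
            return "<![CDATA[" + innerXml + "]]>";
        }
    }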
In migrating to Hazelcast 2.4.x, replace deprecated methods. Use Hazelcast.newHazelcastInstance() rather than Hazelcast.init(). For other deprecated static methods, use the HazelcastInstance equivalent calls.
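A minimal before/after sketch of the deprecated-method replacement, assuming an illustrative map name:

    import com.hazelcast.config.Config;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    class HazelcastMigrationSketch {
        HazelcastInstance startup(Config config) {
            // 1.9.x (deprecated): Hazelcast.init(config);
            //                     Hazelcast.getMap("hzSystemMetadata");
            // 2.4.x: keep the instance and call members on it
            HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
            IMap<Object, Object> systemMetadata = instance.getMap("hzSystemMetadata");
            return instance;
        }
    }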
In CNodeService.updateReplicationMetadata(), we are setting the replicaVerifiedDate() when we update or wholesale add a new replica. However, in setReplicationStatus(), we only do so when there's a new entry. Change setReplicationStatus() to also update the replicaVerifiedDate on updates of existing entries to be more consistent with other changes. This affects node prioritization based on this date timestamp. Thanks to Skye for pointing this out.
To attempt to address performance and stability WRT Hazelcast communication, we're upgrading to the 2.x series of Hazelcast. remove the 1.9.x jar files, and add the 2.4.1-SNAPSHOT jars. Modify HazelcastService to handle the minor change in the ItemListener interface (now passes ItemEvent<Identifier> as an argument)....
implement query description for pathquery -- only tells callers about the pre-indexed paths we have in Metacat since there are an infinite number of "fields" when storing arbitrary XML, but we really don't want people using non-indexed paths for performance reasons anyway. I've typed all the fields as String, even though some are not just strings and can be used for numeric or date comparisons.
Implement MNQuery for "pathquery" engine. Optionally include guid in the pathquery results (https://redmine.dataone.org/issues/3083)
use ObjectFormatInfo libclient utility to look up mimeType and filename extension during get() calls. Configurable mapping file is deployed by default to /var/metacat/dataone where it can then be augmented as needed. This location is controlled in the metacat.properties file (which is injected into the DataONE Settings values during webapp initialization)....
add count for the total processed pids (from ISet iterator)
handle /object?count=0 queries using simpler (quicker) SQL. https://redmine.dataone.org/issues/3065
allow getlog action to use docid parameters that do not include revision. In these cases, the latest revision will be used.
handle case where we do not have a pathexpr to check. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
simplify the xml_access query, and instead use guid to check for permission. Now the docid/rev join (to get most recent version for search results) happens "higher up" in the query. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
pass parameters to the getLog action for rendering in xslt
remove morpho.jar -- moved needed classes into shared utilities project. (currently building from utilities trunk -- be sure to 'ant fullclean' to get the latest utilities.jar built)
Update d1_common_java and d1_libclient_java to the newest jar files. Add methods to CNodeService to throw NotImplemented exceptions for query(), listQueryEngines(), and getQueryEngineDescription() since these API calls are handled outside of metacat.
do not allow updates to orphan another branch of revision history. https://redmine.dataone.org/issues/3338
Change the set and get methods for the replication verified date to use java.sql.Timestamp rather than java.util.Date via setTimestamp(), not setDate(). The hh:mm:ss.sss was previously getting truncated.
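A minimal sketch of the Timestamp-based accessors described above (method and column names are assumptions):

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.util.Date;

    class ReplicaDateSketch {
        // java.sql.Date carries only the date portion, so hh:mm:ss.SSS was
        // being truncated; Timestamp preserves the full precision
        void setVerifiedDate(PreparedStatement stmt, int index, Date verified)
                throws SQLException {
            stmt.setTimestamp(index, new Timestamp(verified.getTime()));
        }

        Date getVerifiedDate(ResultSet rs, String column) throws SQLException {
            Timestamp ts = rs.getTimestamp(column);
            return (ts == null) ? null : new Date(ts.getTime());
        }
    }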
include the subjects we are testing for authentication. https://redmine.dataone.org/issues/2778
remove the max(rev) clause in favor of a more straight-forward join to xml_documents (that will have the max rev). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
include inverted sendParameters() method that uses the keys as values, and the values as keys so that multiple docid parameters can be specified for the zip download. This was a regression when moving to standard httpclient rather than the roll-your-own version we had been using. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5718
shorten the systemmetadata* table names for Oracle's 30 character limit. move version to 2.0.5. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5717
use correct docid format when checking for existing mappings.
use CDATA for docname field in docInfo so that XML parser ignores the content that can contain characters like "&"
use SchemaLocationResolver to fetch remote entries for the xml_catalog -- we want to be able to fetch included xsd files as well as use any error handling it provides for checking the schemas.
when performing query, make sure we are using the access rules of the latest revision of a given docid, otherwise we may include documents that used to be public but have been made private in subsequent revisions. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
correct the number of prepared statement parameters when inserting into the xml_revisions table. Errors like the following were showing in the replication log file: knb 20120831-19:42:38: [ERROR]: DocumentImpl.writeReplication - Failed to create access rule for package: john.15950.1 because The column index is out of range: 12, number of columns: 11. [ReplicationLogging]
include WHERE in the SQL where clause - encountered by SAEON's node admin, Alex Niehaus.
create docid-guid mapping during replication if it does not exist. we were [incorrectly] assuming that there would be SM coming with the document info that would fill this information in, but for traditional non-MN Metacat deployments there is no SM to provide a mapping. In this case we use the docid as the guid.
stream the replication "update" response rather than building up a complete list in a stringbuffer. prompted by findings on the CN: https://redmine.dataone.org/issues/3141
make sure data objects correctly use force replicate with action "insert" https://redmine.dataone.org/issues/3138
when updating a document on a remote server, we still need to use the previous docid to check that the user has permissions to do so (rather than the new id that is obsoleting the old id). This was discovered by M Servilla at LTER.
remove unused "dataonelogger"
allow SM resynch to be executed any time, not just during start up. https://redmine.dataone.org/issues/3116
change to debug log level when processing shared/local pids
only lock the missing pid event if we know we have it locally to contribute. https://redmine.dataone.org/issues/3117
Add locking to the itemAdded() method so ideally only one CN will respond to the request for a 'wanted' pid from the cluster. The lock is on a string, not the pid, and so won't conflict with system metadata locking. The string is based on the pid, with "missing-" as a prefix.
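A minimal sketch of the responder-side locking, assuming Hazelcast 2.x ILock semantics; only the "missing-" prefix is from the commit above, the other names are illustrative:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ILock;

    class MissingPidLockSketch {
        void respondToWantedPid(HazelcastInstance hz, String pidValue) {
            // lock a string derived from the pid so it cannot collide with
            // system metadata locking, which uses the pid itself
            ILock lock = hz.getLock("missing-" + pidValue);
            if (lock.tryLock()) { // ideally only one CN wins and responds
                try {
                    // look up local SystemMetadata and put it on the shared map
                } finally {
                    lock.unlock();
                }
            }
        }
    }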
only publish to the missing pid "wanted list" when resynching system metadata. we were seeing redundant entry added/updated events when looking up the shared systemmetadata first.
print the missing pid count, not the total shared pid count so we know how many will be processed.
change the system metadata resynch approach: nodes will publish PIDs that they are missing after inspecting the shared identifier set. other nodes will be listening for the "wanted" pids and will put their local copy of SystemMetadata on the shared SM map. This should dramatically decrease the hazelcast chatter during a resynch and targets only the pids that are missing from any of the various nodes.
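A minimal sketch of the publishing side of this approach, assuming the wanted pids travel through a shared ISet whose name is illustrative:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ISet;

    class ResynchPublishSketch {
        void publishMissing(HazelcastInstance hz, Iterable<String> missingPids) {
            // other nodes register an ItemListener on this set and respond
            // to itemAdded() with their local copy of the SystemMetadata
            ISet<String> wanted = hz.getSet("hzMissingIdentifiers");
            for (String pid : missingPids) {
                wanted.add(pid);
            }
        }
    }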
logging for processing identifier set on restart.
remove possibility for infinite loop in case data replication is not configured for the server and a data file is encountered (yikes!)
added logging debug statements to see where the replication timeout might be occurring.
check for null archived flag in ORE SM. https://redmine.dataone.org/issues/3046
check if the caller is the Node admin (the member node calling itself) as well as the existing check for the CN calling the service. Both of those callers should be given full admin rights.
use local Set processing to determine which pids (if any) should be contributed to the shared set by this node during the resync. Should save time rather than checking each and every pid against the shared set.
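A minimal sketch of the local set arithmetic, with assumed element types:

    import java.util.HashSet;
    import java.util.Set;

    class LocalDiffSketch {
        // one bulk difference in memory instead of a contains() round trip
        // against the distributed set for every local pid
        Set<String> pidsToContribute(Set<String> localPids, Set<String> sharedPids) {
            Set<String> missingFromShared = new HashSet<String>(localPids);
            missingFromShared.removeAll(sharedPids);
            return missingFromShared;
        }
    }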
move the hzIdentifiers initialization into the resync thread so that it does not affect start up time. cleaned up unused methods and superfluous code.
only load local pids into hzIdentifiers if they do not already exist in the shared set. increase logging severity and detail of messages emitted during this process to get a better sense of what is taking so long.
utility methods to update/reserialize existing ORE maps that were generated with older foresite (and included bad dateTime strings). https://redmine.dataone.org/issues/3046
On the coordinating Nodes, we often get McdbDocNotFoundExceptions for data (doctype == 'BIN') documents because they are not synchronized to the CNs. Change the logging to only print the stack trace during load() and loadAll() when log debug is enabled.
check for invalid (!) pids. thanks, M. Reyes for catching this. https://redmine.dataone.org/issues/3047
only look up the client timeout property once, not every time we make a call. https://redmine.dataone.org/issues/3078
improve content type handling during the get() calls. https://redmine.dataone.org/issues/3070
check for whitespace in identifiers during create() and update(). https://redmine.dataone.org/issues/3047
configurable replication client timeout. https://redmine.dataone.org/issues/3078
order the listObjects() results by identifier to mitigate random paged results. https://redmine.dataone.org/issues/3065
correct the parameter/value setting in the prepared statements for retrieving log information.
use docid, not the guid when returning the accesscontrol block
include dataone.ore.downloaddata as a configurable property in case MNs (like LTER) want to have the process download externally-stored data files described in an EML data package.
set date SM modified when we are setting obsoletes/obsoletedBy/archived values. This way the CN can actually pick up the changes in revision history.
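A minimal sketch using the d1_common_java SystemMetadata type (the helper method is hypothetical):

    import java.util.Calendar;
    import org.dataone.service.types.v1.Identifier;
    import org.dataone.service.types.v1.SystemMetadata;

    class RevisionHistorySketch {
        // bump dateSysMetadataModified alongside the revision-history fields
        // so the CN's change detection picks up the update
        void recordObsoletedBy(SystemMetadata sysMeta, Identifier newPid) {
            sysMeta.setObsoletedBy(newPid);
            sysMeta.setArchived(true);
            sysMeta.setDateSysMetadataModified(Calendar.getInstance().getTime());
        }
    }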
log error when looking up non-existent local SM rather than completely bombing out of the resynch thread.
look up docid using mapped guid when checking permission on described data file. Addresses: http://support.nceas.ucsb.edu/rt/Ticket/Display.html?id=7490
use docid (not guid) when instantiating the PermissionController. Was getting an error with DOI-ified identifier and the metacat getaccesscontrol action: https://knb.ecoinformatics.org/knb/metacat?action=getaccesscontrol&docid=Collinge.3.28 <error>AccessControlForSingleFile.getACL() - MCDB error when getting ACL: No guid registered for docid doi:10.5063/AA/Collinge.3.28...
make sure we have non-null values where JiBX serialization expects them for LogEntry
use secure Metacat context URL for D1 registration. https://redmine.dataone.org/issues/3030
first pass: DataONE-specific log retrieval to avoid java-based post-processing.
set archived flag (true) when we set the obsoletedBy value in the ORE system metadata
use the localId for obsoletes/obsoletedBy ORE system metadata (https://redmine.dataone.org/issues/2964)
Print the stack trace when the MMP cannot be resolved.
report errors during XML->HTML transform. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5618
Oops, previous commit suffered from a happy trigger finger. During deleteReplicationMetadata(), don't delete the replica on the replica Member Node. Call CN.delete() for that functionality. This call just updates system metadata (according to the API description).
Minor logging change.
Add debug logging to delete() to understand why we're getting InsufficientKarmaException.
Since we already have determined access via isAuthorized() and isAdminAuthorized(), act as the Metacat administrator during calls to DocumentImpl.delete() in archive(), passing in null username and group.
restrict getLogRecords (both MN and CN) to be called only by admin users (the CN). https://redmine.dataone.org/issues/2855