fix a bug in MNodeService.replicate() where the checksum value was being compared to the computed checksum object, not its value.
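A minimal sketch of the corrected comparison, assuming the DataONE Checksum type from d1_common; the helper class is illustrative, not the actual MNodeService code.

```java
import org.dataone.service.types.v1.Checksum;

// Illustrative only: compare the checksum *values*, not a Checksum object
// against a value (which can never be equal).
final class ChecksumCompare {
    static boolean matches(Checksum expected, Checksum computed) {
        // buggy form: expected.getValue().equals(computed)  -> always false
        return expected.getValue().equalsIgnoreCase(computed.getValue());
    }
}
```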
use UTC serialization for log entries so that the timestamp, not just the date, is preserved. https://redmine.dataone.org/issues/2257
Update the D1Admin class to set the dataone.contactSubject property. I've added the property to the HTTP request so it can be included in the JSP form, but for now am setting it from the dataone.subject field value. Not sure whether we want to expose the contact subject in the form yet.
In MN.getCapabilities(), the required contact subject was not being added to the node instance from the dataone properties. Add it in.
generate ORE maps only once -- and persist the flag to the main backup properties so that subsequent Metacat upgrades remember this value.
use RC-1 DataONE jars
Added DOI generation to the 2.0.0 upgrade process. To succeed, this script must be run on a fresh 2.0.0 database, or on a 1.9.5 version database, as those are the only ways to get the needed foreign keys to be marked as deferrable. The identifier conversion must be turned on by setting correct properties in metacat.properties. See the comments in GenerateGlobalIdentifiers for details. By default, conversion is set to false in the properties file. If you want to convert an instance to use DOIs, be sure to set metacat.properties up BEFORE running through the Metacat configuration and database upgrade.
Refactoring classes that throw generic Exception class to throw their more specific subclasses so that new exceptions are not hidden behind generic messages. Makes debugging easier.
try to read the local document before creating the localid->guid mapping (for cases where we fail to read the data locally, e.g. when it is referenced in an EML file but does not exist on this Metacat instance)
Ensure we have the object and sysmeta params for MN.create(). We were getting a fatal SAX parsing error encapsulated in a ServiceFailure when a science metadata object param was null. Cut it off at the pass after parsing the MMP entity.
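A hedged sketch of that guard; the helper class and the detail code are assumptions, not the actual handler code.

```java
import java.io.InputStream;

import org.dataone.service.exceptions.InvalidRequest;
import org.dataone.service.types.v1.SystemMetadata;

// Illustrative guard applied right after parsing the MIME multipart (MMP)
// entity: fail fast with InvalidRequest instead of letting a null sysmeta
// surface later as a SAX parsing error wrapped in ServiceFailure.
final class CreateParamCheck {
    static void requireCreateParams(InputStream object, SystemMetadata sysmeta)
            throws InvalidRequest {
        if (object == null) {
            throw new InvalidRequest("1102", "MN.create() is missing the 'object' part");
        }
        if (sysmeta == null) {
            throw new InvalidRequest("1102", "MN.create() is missing the 'sysmeta' part");
        }
    }
}
```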
Use the Collections class from java.util.
Remove null field tests in the IdentifierManager class. Schema-level required fields are checked on serialization/deserialization by JibX in the REST resource handler classes. Other required fields are checked in MNodeService and CNodeService, higher in the stack.
For MNs that haven't set the archived flag to false on create(), set it here. Also, ensure that the CN sync code sets the authoritative and origin member node fields.
On MN.create(), set the archived flag to be false. This field isn't required in the schema, but is needed by the DataONE indexer once objects are sync'd.
generate system metadata for all docids, even those not originating on the server (replicas from the past); generate ORE docs and download remote data only for those documents that originated on the server being upgraded. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
refactor generate system metadata loop to the factory class -- to be reused in sysmeta and ORE generation. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
When managing obsoletes/obsoletedBy system metadata fields, set the archived flag to false initially, and set it to true on system metadata for objects that a revision obsoletes.
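A sketch of the revision linking described above; the helper class is hypothetical and only shows how the obsoletes/obsoletedBy fields and archived flags relate.

```java
import org.dataone.service.types.v1.Identifier;
import org.dataone.service.types.v1.SystemMetadata;

// Hypothetical helper: wire up obsolescence between two revisions and set
// the archived flags accordingly.
final class RevisionLinker {
    static void link(Identifier oldPid, SystemMetadata oldSysmeta,
                     Identifier newPid, SystemMetadata newSysmeta) {
        newSysmeta.setObsoletes(oldPid);
        newSysmeta.setArchived(false);   // the new revision starts out active
        oldSysmeta.setObsoletedBy(newPid);
        oldSysmeta.setArchived(true);    // the obsoleted revision is archived
    }
}
```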
do NOT generate ORE maps or download data when we do the initial System Metadata generation -- this is deferred until D1 registration.
make the system metadata generation routine more generic so that a custom list of IDs can be passed in.
check that the resourceMap (based on Id only) does not currently exist in the local metacat when generating OREs
insert OR update system metadata -- no need to do an update right after initial insert...
call the System Metadata generator during upgrade to 2.0.0
In IdentifierManager.updateSystemMetadata(), add a check for invalid system metadata (fields that throw a NullPointerException on access) to ensure that system metadata is populated correctly. Updated calling classes to handle the exception.
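A hedged sketch of such a check; the exception type and detail code used here are assumptions about how the invalid case might be reported.

```java
import org.dataone.service.exceptions.InvalidSystemMetadata;
import org.dataone.service.types.v1.SystemMetadata;

// Illustrative validator: touch the fields the database layer needs and turn
// a NullPointerException into a typed exception that callers can handle.
final class SysmetaFieldCheck {
    static void assertPopulated(SystemMetadata sysmeta) throws InvalidSystemMetadata {
        try {
            sysmeta.getIdentifier().getValue();
            sysmeta.getChecksum().getValue();
            sysmeta.getFormatId().getValue();
            sysmeta.getRightsHolder().getValue();
            sysmeta.getSize().toString();
        } catch (NullPointerException npe) {
            throw new InvalidSystemMetadata("1180",
                    "System metadata is missing one or more required fields");
        }
    }
}
```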
Properly initialize the servlet context when starting alternate servlets, which makes sure that the configuration files have been loaded and config properties are available.
Handle SQLExceptions when trying to save system metadata locally.
Convert SQLExceptions to RuntimeExceptions for Hazelcast MapStore operations.
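A minimal sketch of the pattern with a hypothetical DAO interface; Hazelcast's MapStore callbacks declare no checked exceptions, so the SQLException is rethrown unchecked.

```java
import java.sql.SQLException;

// Hypothetical shape of the MapStore-style store() path: wrap the checked
// SQLException in a RuntimeException so it can propagate out of Hazelcast's
// callback interface.
final class StoreExceptionWrapping {
    interface SystemMetadataDao {                 // hypothetical persistence layer
        void save(String pid) throws SQLException;
    }

    static void store(SystemMetadataDao dao, String pid) {
        try {
            dao.save(pid);
        } catch (SQLException e) {
            throw new RuntimeException("Unable to save system metadata for " + pid, e);
        }
    }
}
```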
In IdentifierManager, throw SQLExceptions rather than just logging them, and let them be handled higher up in the stack.
use new endpoint/method: http://mule1.dataone.org/ArchitectureDocs-current/apis/CN_APIs.html#CNReplication.deleteReplicationMetadata
use PUT /obsoletedBy/{pid} for CNCore.setObsoletedBy per our discussion today
Keep the hzIdentifiers set in sync with the Metacat systemmetadata table. If entries are added/updated in the hzSystemMetadata map, make sure the identifier is in the set. If (for some administrative reason) the entry is removed, remove the identifier from the set. This usually doesn't happen.
When loading all keys from Metacat into the hzSystemMetadata map, also load identifiers into the hzIdentifiers set if they are not already there. Although entries may be evicted from the map, the list of identifiers will remain. The list will have a fairly small memory footprint since it's just identifiers.
Add support for the distributed Set of unique identifiers in the storage cluster called 'hzIdentifiers'. This set is a persistent total list of all identifiers (even when entries in the hzSystemMetadata map are evicted). It reflects the state of the identifiers in the postgresql systemmetadata table, but is distributed across the cluster. Add the getIdentifiers() method, which returns the ISet of identifiers.
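A sketch of how the set could be kept aligned with the map using the Hazelcast 2.x-era EntryListener API; the listener class and field names are assumptions.

```java
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;
import com.hazelcast.core.ISet;

import org.dataone.service.types.v1.Identifier;
import org.dataone.service.types.v1.SystemMetadata;

// Hypothetical listener keeping 'hzIdentifiers' aligned with 'hzSystemMetadata':
// adds/updates track the pid, removals drop it, and evictions leave it alone so
// the identifier list persists even when map entries are evicted.
class IdentifierSetSync implements EntryListener<Identifier, SystemMetadata> {

    private final ISet<Identifier> identifiers;   // the 'hzIdentifiers' set

    IdentifierSetSync(ISet<Identifier> identifiers) {
        this.identifiers = identifiers;
    }

    public void entryAdded(EntryEvent<Identifier, SystemMetadata> event) {
        identifiers.add(event.getKey());
    }

    public void entryUpdated(EntryEvent<Identifier, SystemMetadata> event) {
        identifiers.add(event.getKey());          // no-op if already tracked
    }

    public void entryRemoved(EntryEvent<Identifier, SystemMetadata> event) {
        identifiers.remove(event.getKey());       // rare administrative removal
    }

    public void entryEvicted(EntryEvent<Identifier, SystemMetadata> event) {
        // eviction is expected; keep the identifier in the set
    }
}
```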
include new methods needed for replication (in new d1 jars). https://redmine.dataone.org/issues/2203
add method: setObsoletedBy (https://redmine.dataone.org/issues/2185); augment new method: deleteReplicationMetadata
remove method: assertRelation. https://redmine.dataone.org/issues/2158
add method: deleteReplicationMetadata; remove method: assertRelation; update the D1 jars. https://redmine.dataone.org/issues/2187 https://redmine.dataone.org/issues/2158
serialize the Identifier for the systemMetadata being registered. https://redmine.dataone.org/issues/2204
Simplify setReplicationStatus() to not call updateReplicationMetadata() if a replica doesn't exist. Just create it and update the system metadata, which we already have a lock for.
Minor null checks to avoid NPEs when calling replicate()
Don't throw a NotAuthorized exception in isAdminAuthorized() - just return false.
do not download and save remote data resources which are HTML but are not expected to be such (login or info/splash pages before data content). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
Update the CN methods to throw a VersionMismatch where the API changed (where serialVersion is a required parameter). These were previously throwing an InvalidRequest exception. Change the exception handling for calls to Hazelcast to catch a RuntimeException (not Exception) so we don't catch exceptions that we purposefully throw....
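A sketch of the narrower catch; the class shape and detail codes are illustrative. Because VersionMismatch is a checked DataONE exception, catching only RuntimeException lets it propagate instead of being converted to ServiceFailure.

```java
import com.hazelcast.core.IMap;

import org.dataone.service.exceptions.ServiceFailure;
import org.dataone.service.exceptions.VersionMismatch;
import org.dataone.service.types.v1.Identifier;
import org.dataone.service.types.v1.SystemMetadata;

// Hypothetical serialVersion check: only runtime failures from Hazelcast are
// turned into ServiceFailure; our own VersionMismatch passes through.
final class SerialVersionCheck {
    static SystemMetadata checked(IMap<Identifier, SystemMetadata> map,
                                  Identifier pid, long serialVersion)
            throws ServiceFailure, VersionMismatch {
        try {
            SystemMetadata sysmeta = map.get(pid);
            if (sysmeta.getSerialVersion().longValue() != serialVersion) {
                // thrown on purpose; must NOT be swallowed by the catch below
                throw new VersionMismatch("4886", "serialVersion does not match");
            }
            return sysmeta;
        } catch (RuntimeException e) {   // previously: catch (Exception e)
            throw new ServiceFailure("4882", "Hazelcast error: " + e.getMessage());
        }
    }
}
```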
Use a Logger instead of System.out for SystemMetadataMap.
Don't lock() on the map.get() in isNodeAuthorized() (this assumes that the CN has queued the task already). Add more lock/unlock debug statements, and fix setReplicationStatus() - I missed a finally statement to unlock the pid.
Modify CNReplication methods setReplicationStatus(), updateReplicationMetadata() and setReplicationPolicy() to allow administrative access from a Coordinating Node by calling isAdminAuthorized().
Add isAdminAuthorized() to D1NodeService to check if the operation is being requested from a CN. Consult the NodeList from the CN and test the NodeType of the given node and the X509 certificate Subject. Perhaps we should expand this to also check for service-level access in the future.
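An illustrative version of that check; the real method consults the CN for the NodeList, which is abstracted here as a parameter, and the helper class name is an assumption.

```java
import org.dataone.service.types.v1.Node;
import org.dataone.service.types.v1.NodeList;
import org.dataone.service.types.v1.NodeType;
import org.dataone.service.types.v1.Session;
import org.dataone.service.types.v1.Subject;

// Hypothetical shape of the check: the caller is treated as an administrator
// only if its certificate Subject matches a subject of a registered CN node.
final class AdminCheck {
    static boolean isAdminAuthorized(Session session, NodeList nodeList) {
        if (session == null || session.getSubject() == null || nodeList == null) {
            return false;
        }
        String caller = session.getSubject().getValue();
        for (Node node : nodeList.getNodeList()) {
            if (node.getType() != NodeType.CN || node.getSubjectList() == null) {
                continue;
            }
            for (Subject nodeSubject : node.getSubjectList()) {
                if (nodeSubject.getValue().equals(caller)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```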
store D1 configuration properties in the main backup so that they persist between upgrades.
In registerSystemMetadata(), lock the pid prior to calling map.containsKey(pid) since a put to the map could occur between the check and the subsequent put().
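A sketch of the locking pattern; the helper class and detail code are illustrative. The lock closes the check-then-act window between containsKey() and put().

```java
import com.hazelcast.core.IMap;

import org.dataone.service.exceptions.IdentifierNotUnique;
import org.dataone.service.types.v1.Identifier;
import org.dataone.service.types.v1.SystemMetadata;

// Hypothetical registration helper: no other node can put() the pid between
// the existence check and our own put() while the key is locked.
final class GuardedRegister {
    static void register(IMap<Identifier, SystemMetadata> map,
                         Identifier pid, SystemMetadata sysmeta)
            throws IdentifierNotUnique {
        map.lock(pid);
        try {
            if (map.containsKey(pid)) {
                throw new IdentifierNotUnique("4892",
                        pid.getValue() + " is already registered");
            }
            map.put(pid, sysmeta);
        } finally {
            map.unlock(pid);
        }
    }
}
```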
update authoritative member node id when we change it (reconfiguration) and when we initially register as a MN with the CN.
Correctly deserialize the BaseException subclass in handling calls to setReplicationStatus()
Use Lock instead of ILock to be consistent across classes.
After reviewing CNodeService and D1NodeService, prompted by Robert's comparison of the Hazelcast locking with the d1_synchronization locking, I've made a number of changes that will prevent locking problems:
1) Multiple methods contained try/catch blocks that would:...
only delete replicated data files (server_location != 1)
use inherited access control from EML for the data file we download from a remote source. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
download remote data and save locally when it is referenced by an EML package, then include it in the ORE map. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
When the requested count in a call to listObjects() is 0, return an empty object list, not a full one. Fixes https://redmine.dataone.org/issues/2122
Minor formatting for querySystemMetadata().
expand permissions on the existing access rule, not on the permission being checked (hierarchical permissions).
upgrade routine to purge empty replicated data files so that they can be re-replicated. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5536
Make sure the local id isn't null when we try to get the object from the local instance.
Simplify the error handling, and throw the exception once the CN is updated with the new status.
Set the replica status to failed (not invalidated) when we get exceptions trying to read the object bytes. Not much of a difference, but only the CN, in theory, is supposed to be able to set the invalidated status.
Set the replication status to invalidated when we have a localId, but getting the object bytes fails for any reason.
Only call super.create() if there's no localId found on the MN (i.e., skip it when a replica already exists from an out-of-band process).
Get the object inputstream from the local metacat instance using MetacatHandler.get() rather than MN.getReplica() so we don't throw an InvalidToken exception when passing in a null Session. The D1Client object is never used for this local call.
interpret permissions as hierarchical. https://redmine.dataone.org/issues/2150
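A sketch of the expansion, assuming the usual DataONE ordering where CHANGE_PERMISSION implies WRITE and WRITE implies READ; the helper class is illustrative.

```java
import java.util.EnumSet;
import java.util.Set;

import org.dataone.service.types.v1.Permission;

// Hypothetical expansion of a stored access rule: the granted permission is
// expanded to everything it implies, then checked against the requested one.
final class PermissionExpansion {
    static Set<Permission> expand(Permission granted) {
        switch (granted) {
            case CHANGE_PERMISSION:
                return EnumSet.of(Permission.CHANGE_PERMISSION,
                        Permission.WRITE, Permission.READ);
            case WRITE:
                return EnumSet.of(Permission.WRITE, Permission.READ);
            default:
                return EnumSet.of(Permission.READ);
        }
    }

    static boolean allows(Permission granted, Permission requested) {
        return expand(granted).contains(requested);
    }
}
```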
remove flag for independent system metadata replication -- these entries are replicated along with the data/metadata objects or via hazelcast when the actual object is not on the server.
process the current revision, not the latest! Use direct object/system metadata insertion for ORE maps.
allow other Metacat processes (system metadata and ORE generation) to directly insert objects and system metadata without having to go through the MN/CN methods.
sort the docids so that "old" revisions are processed before newer ones
only attempt to unlock a lock if it was created (in the finally block)
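A minimal sketch of the finally-block guard; the Hazelcast lock lookup shown here is an assumption about how the lock is obtained.

```java
import java.util.concurrent.locks.Lock;

import com.hazelcast.core.HazelcastInstance;

// Hypothetical helper: only unlock when the lock was actually created and
// acquired, so a failure before lock() doesn't add a second exception
// (e.g. IllegalMonitorStateException) in the finally block.
final class GuardedUnlock {
    static void withPidLock(HazelcastInstance hazelcast, String pid, Runnable work) {
        Lock lock = null;
        boolean locked = false;
        try {
            lock = hazelcast.getLock(pid);
            lock.lock();
            locked = true;
            work.run();
        } finally {
            if (lock != null && locked) {
                lock.unlock();
            }
        }
    }
}
```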
new jars with many changes -- including new CN methods: ping, describe, listChecksumAlgorithm. Removed MN.setAccessPolicy. Refactored CN.setOwner() to CN.setRightsHolder().
refresh the SystemMetadata entry for EML and referenced data files when parsing EML access rules -- this ensures our in-memory system metadata map is up to date WRT the DB entries.
add revision history to the generated ORE objects -- we use the revision history of the EML package as a basis because each ORE revision mirrors the revision of the EML package. Add a placeholder for checking if an equivalent ORE map exists in the DataONE infrastructure - this will be a call to CN.search() that looks at the solr index for OREs based on the EML package ID.
Update the parameter names expected for listObjects() to reflect the MN API changes in the architecture docs.
Change the query semantics so that MN.listObjects() treats the lower datetime bound as inclusive (greater than or equal to) and the upper datetime bound as exclusive (less than). This allows easier paging in client applications.
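A sketch of how the window clause might be built; the table and column names are illustrative, not Metacat's actual schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Date;

// Hypothetical query builder: the lower bound is inclusive (>=) and the upper
// bound exclusive (<), so a client can page with fromDate = previous toDate
// and see no overlaps or gaps.
final class ListObjectsQuery {
    static PreparedStatement build(Connection conn, Date fromDate, Date toDate)
            throws SQLException {
        String sql = "SELECT guid FROM systemmetadata "
                + "WHERE date_modified >= ? AND date_modified < ?";
        PreparedStatement stmt = conn.prepareStatement(sql);
        stmt.setTimestamp(1, new Timestamp(fromDate.getTime()));
        stmt.setTimestamp(2, new Timestamp(toDate.getTime()));
        return stmt;
    }
}
```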
In the call to MNReplication.replicate(), call back to CNReplication.setReplicationStatus() and set the status to failed when we get local exceptions or exceptions from the source MN when calling getReplica(). Send back an exception with a description when setting the status. Add a private setReplicationStatus() method to refactor these calls out.
Modify CNResourceHandler.setReplicationStatus() to use the new API signature, including the failure BaseException that is parsed out of the MMP as a file section. Log the exception message. Since this is an asynchronous call, ReplicationManager won't see a failed status, but the MNAuditTask eventually will.
Add collectReplicationStatus() to CNResourceHandler to return the BaseException or its subclass, if any, provided in the call to setReplicationStatus(). The exception will be reported on the CN.
Change setReplicationStatus() to drop serialVersion and report the failure exception message in the CN log.
set SystemMetadata.archived=true on MN.delete. There is ongoing discussion on what the exact behavior should be here, but this mimics Metacat's delete-as-archive action. http://redmine.dataone.org/issues/882
In MNodeService.replicate(), check to see if we have a replica (via an out of band channel) before we call sourceMN.getReplica().
only create guid->docid mapping during metadata replication if it does not already exist. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
do not treat access change as an update -- it should not attempt to retrieve the contents of the object. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
only create guid->docid mapping during data replication if it does not already exist. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
remove xml_access.docid reference (oops). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
updated D1 API -- removed Permission.REPLICATE and associated parameters
process system metadata before access rules (access control is now driven by GUID so the mapping needs to be there)
Change the key of the query result cache; the key now includes the real search value.
include SerialVersion in the describe response. https://redmine.dataone.org/issues/2135 NOTE: d1 jars should be replaced once all schema changes are finalized and the generated d1_common code is committed to svn
include 'archived' system metadata element in backing DB store
If a member node cannot be found in the node list matching the targetNodeSubject given in isNodeAuthorized(), throw a ServiceFailure exception.
Minor reformatting for readability.
update with latest d1_common/d1_lib (includes latest schema changes)
only handle 100 (consecutive!) docId generations per millisecond; otherwise the generated docid part is bigger than Long.MAX_VALUE and Metacat cannot fully handle that.
check previous revision when attempting to update access control with EML 2.0.x docs. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
remove old access rules for a data object when they are being updated by rules contained in an EML document. Now the OnlineDataAccessTest EML 2.1.0 tests pass. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
construct the proper previousDocId when checking for update permission
for now, look up SystemMetadata directly from the table; otherwise we won't have the latest access information. Need to refresh the in-memory copy every time we edit the access policy via Metacat (including the EML parser)
check previous revision for permissions to update (includes data described by EML)