Simplify setReplicationStatus() to not call updateReplicationMetadata() if a replica doesn't exist. Just create it and update the system metadata, which we already have a lock for.
Minor null checks to avoid NPEs when calling replicate()
Don't throw a NotAuthorized exception in isAdminAuthorized() - just return false.
do not download and save remote data resources that are HTML but are not expected to be (e.g., login or info/splash pages served before the data content). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
Fixed formatting.
Moving Metacat Sphinx RST documentation from docs/dev to docs/user directory.
Merged most recent changes from trunk into the RST-converted version of the Administrator's Guide. The Sphinx/RST version is now up to date relative to the most recent Word document and is the active copy. The MS Word document will be deprecated and removed. All future changes should be made to the RST version.
Update the CN methods to throw a VersionMismatch where the API changed (where serialVersion is a required parameter). These were previously throwing an InvalidRequest exception. Change the exception handling for calls to Hazelcast to catch a RuntimeException (not Exception) so we don't catch exceptions that we purposefully throw....
Use a Logger instead of System.out for SystemMetadataMap.
Don't lock() on the map.get() in isNodeAuthorized() (this assumes that the CN has queued the task already). Add more lock/unlock debug statements, and fix setReplicationStatus() - I missed a finally statement to unlock the pid.
Modify CNReplication methods setReplicationStatus(), updateReplicationMetadata() and setReplicationPolicy() to allow administrative access from a Coordinating Node by calling isAdminAuthorized().
Add isAdminAuthorized() to D1NodeService to check if the operation is being requested from a CN. Consult the NodeList from the CN and test the NodeType of the given node and the X509 certificate Subject. Perhaps we should expand this to also check for service-level access in the future.
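A minimal sketch of the kind of check this describes (the lookup details and variable names are assumptions, not the actual D1NodeService code; DataONE common types assumed):

    // Illustrative only -- returns true when the request comes from a CN subject.
    protected boolean isAdminAuthorized(Session session) {
        if (session == null || session.getSubject() == null) {
            return false;
        }
        try {
            NodeList nodeList = D1Client.getCN().listNodes();
            for (Node node : nodeList.getNodeList()) {
                // only Coordinating Nodes get administrative access
                if (node.getType() != NodeType.CN) {
                    continue;
                }
                for (Subject subject : node.getSubjectList()) {
                    if (subject.getValue().equals(session.getSubject().getValue())) {
                        return true;
                    }
                }
            }
        } catch (Exception e) {
            // never throw NotAuthorized from here -- just deny (see above)
            return false;
        }
        return false;
    }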
store D1 configuration properties in the main backup so that they persist between upgrades.
In registerSystemMetadata(), lock the pid prior to calling map.containsKey(pid) since a put to the map could occur between the check and the subsequent put().
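The point is to close the check-then-act gap; roughly (a simplified sketch using the Hazelcast 1.9.x static lock API; map and variable names are illustrative):

    // Sketch of the registerSystemMetadata() change: containsKey() and put()
    // now happen under the same per-pid lock, so another thread or node cannot
    // insert the same pid between the check and the put.
    Lock lock = Hazelcast.getLock(pid.getValue());
    lock.lock();
    try {
        if (!systemMetadataMap.containsKey(pid)) {
            systemMetadataMap.put(pid, sysMeta);
        }
    } finally {
        lock.unlock();
    }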
update authoritative member node id when we change it (reconfiguration) and when we initially register as a MN with the CN.
add description about what becoming a Member Node entails
Correctly deserialize the BaseException subclass in handling calls to setReplicationStatus()
Use Lock instead of ILock to be consistent across classes.
After reviewing CNodeService and D1NodeService prompted by Robert comparing the Hazelcast locking with the d1_synchronization locking, I've made a number of changes that will prevent locking problems:
1) Multiple methods contained try/catch blocks that would:...
Converted the metacat-properties chapter to RST format. Still need to merge in newer changes from the trunk, as I was accidentally working from the 1.9.4 branch for this whole conversion process.
only delete replicated data files (server_location != 1)
use inherited access control from EML for the data file we download from a remote source. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
Removing unused screenshots that are duplicates of the others in the admin doc.
Converted Harvester chapter to RST.
download remote data and save locally when it is referenced by an EML package, then include it in the ORE map. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5522
remove systemmetadata replication option -- it is no longer a separate document in metacat
Added stub documents for chapters on DataONE and OAI-PMH (to be converted from Duane's Word doc).
Small word choice change.
Improved formatting for index.
Added AuthInterface chapter, and a License chapter.
Converted Event Logging and Sitemaps chapters to RST.
Fixed table layout on geoserver and submission chapters. Converted Replication chapter to RST.
Completed 'Submission' page conversion, and also converted GeoServer docs to RST format.
Partial conversion of the accessing and submitting metadata section to RST. More coming later.
include the EML and data tests in the suite
debugging data locking test
cannot check for deleted data since it is forever available (archived)
Updated the configuration section, converted word doc to RST.
Updated the Installation chapter, converted to RST.
When the requested count in a call to listObjects() is 0, return an empty object list, not a full one. Fixes https://redmine.dataone.org/issues/2122
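The guard amounts to returning before the query runs; a sketch (variable names are assumptions):

    // Sketch of the count == 0 short-circuit in listObjects().
    if (count == 0) {
        ObjectList emptyList = new ObjectList();
        emptyList.setStart(start);
        emptyList.setCount(0);
        emptyList.setTotal(totalMatching);  // total still reported so clients can page
        return emptyList;
    }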
Minor formatting for querySystemMetadata().
Updated contributors.
Modified index to fix typo.
Edited introduction to Metacat admin guide, inserted figure.
Screenshots from the Metacat admin guide.
Updating Sphinx doc structure in prep for moving metacat admin guide to Sphinx.
expand permissions on the existing access rule, not on the permission being checked (hierarchical permissions).
defer to super class member variables
Upgrade to Hazelcast-1.9.4.5 to try to solve CLIENT_CONNECTION_LOST problems seen on the Coordinating Node.
mark client/servlet API and EarthGrid API as deprecated. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5517
upgrade routine to purge empty replicated data files so that they can be re-replicated. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5536
format the execution time to be a date.
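For example, along these lines (a generic sketch, not the exact scheduler display code):

    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Illustrative only: render an execution timestamp (ms since epoch) as a date string.
    public class ExecutionTimeFormatter {
        public static String format(long executionTimeMillis) {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            return fmt.format(new Date(executionTimeMillis));
        }
    }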
Use "post" instead of "get" to fix a caching issue in IE.
Make the display of the schedule table fit the IE browser.
Make sure the local id isn't null when we try to get the object from the local instance.
Simplify the error handling, and throw the exception once the CN is updated with the new status.
Set the replica status to failed (not invalidated) when we get exceptions trying to read the object bytes. Not much of a difference, but only the CN, in theory, is supposed to be able to set the invalidated status.
Set the replication status to invalidated when we have a localId, but getting the object bytes fails for any reason.
Only call super.create() if there's no localId already found on the MN (if a localId exists, the replica arrived via an out-of-band process and does not need to be created again).
Get the object inputstream from the local metacat instance using MetacatHandler.get() rather than MN.getReplica() so we don't throw an InvalidToken exception when passing in a null Session. The D1Client object is never used for this local call.
interpret permissions as hierarchical. https://redmine.dataone.org/issues/2150
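Hierarchical meaning a higher grant implies the lower ones (CHANGE_PERMISSION > WRITE > READ); a sketch of the expansion, assuming the DataONE Permission enum (the package path may differ by d1_common version):

    import java.util.HashSet;
    import java.util.Set;
    import org.dataone.service.types.v1.Permission;

    // Sketch only: expand a granted permission into everything it implies,
    // so an existing CHANGE_PERMISSION rule also satisfies WRITE and READ checks.
    public class PermissionHierarchy {
        public static Set<Permission> expand(Permission granted) {
            Set<Permission> expanded = new HashSet<Permission>();
            switch (granted) {
                case CHANGE_PERMISSION:
                    expanded.add(Permission.CHANGE_PERMISSION);
                    // fall through: CHANGE_PERMISSION implies WRITE
                case WRITE:
                    expanded.add(Permission.WRITE);
                    // fall through: WRITE implies READ
                case READ:
                    expanded.add(Permission.READ);
                    break;
                default:
                    break;
            }
            return expanded;
        }
    }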
Use submitLoginFormIntoDivAndReload to replace submitLoginFormIntoDiv js function.
remove flag for independent system metadata replication -- these entries are replicated along with the data/metadata objects or via hazelcast when the actual object is not on the server.
update documentation to reflect changes to replication (client certificate). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5516
include SSL settings for client certificate-based replication
do not include the "v1" in the base url for the target MN
New D1 jars with a minor CNode.setReplicationStatus() bugfix.
The kar id will include the version number.
process the current revision, not the latest! Use direct object/system metadata insertion for ORE maps.
allow other Metacat process (system metadata and ORE generation) to directly insert objects and system metadata without having to go through the MN/CN methods.
sort the docids so that "old" revisions are processed before newer ones
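For docids of the form scope.id.rev this is a numeric sort on the trailing revision; a rough sketch (assumes that docid format):

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Sketch only: order docids like "knb.123.1", "knb.123.2" so that lower
    // (older) revisions come before newer ones.
    public class DocidRevisionSort {
        public static void sortByRevision(List<String> docids) {
            Collections.sort(docids, new Comparator<String>() {
                public int compare(String a, String b) {
                    return Integer.valueOf(revisionOf(a)).compareTo(Integer.valueOf(revisionOf(b)));
                }
            });
        }

        private static int revisionOf(String docid) {
            // the revision is the numeric suffix after the last '.'
            return Integer.parseInt(docid.substring(docid.lastIndexOf('.') + 1));
        }
    }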
only attempt to unlock a lock if it was created (in the finally block)
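In code, that is a null guard in the finally block; roughly (Hazelcast lock usage sketched, not copied from the actual methods):

    // Sketch of the guarded unlock pattern: only unlock if the lock was
    // actually obtained, otherwise the finally block itself would NPE.
    Lock lock = null;
    try {
        lock = Hazelcast.getLock(pid.getValue());
        lock.lock();
        // ... work on the object identified by pid ...
    } finally {
        if (lock != null) {
            lock.unlock();
        }
    }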
update tests to comply with these changes: new jars with many changes -- including new CN methods: ping, describe, listChecksumAlgorithm. Removed MN.setAccessPolicy. Refactored CN.setOwner() to CN.setRightsHolder().
new jars with many changes -- including new CN methods: ping, describe, listChecksumAlgorithm. Removed MN.setAccessPolicy. Refactored CN.setOwner() to CN.setRightsHolder().
refresh the SystemMetadata entry for EML and referenced data files when parsing EML access rules -- this ensures our in-memory system metadata map is up to date WRT the DB entries.
Using a branch name for the utilities project. This branch is a copy of the trunk and it uses the BSD license. We will move this branch to a tag soon.
add revision history to the generated ORE objects -- we use the revision history of the EML package as a basis because each ORE revision mirrors the revision of the EML package. Add a placeholder for checking if an equivalent ORE map exists in the DataONE infrastructure - this will be a call to CN.search() that looks at the solr index for OREs based on the EML package ID.
Update the parameter names expected for listObjects() to reflect the MN API changes in the architecture docs.
Change the query semantics such that MN.listObjects() treats the lower datetime bound as inclusive (greater than or equal to) and the upper datetime bound as exclusive (less than). This allows easier paging in client applications.
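In other words, fromDate <= dateSysMetadataModified < toDate, so a client can pass the last modification date of one page as the fromDate of the next without seeing the boundary object twice; a sketch of the bound check (variable names are illustrative):

    // Inclusive lower bound, exclusive upper bound.
    boolean inRange =
        (fromDate == null || !sysMetaModified.before(fromDate))   // >= fromDate
     && (toDate == null || sysMetaModified.before(toDate));       // <  toDate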
for the test to compile, provide a BaseException param for setReplicationStatus(); I used a NotAuthorized instance.
adjust after refactoring tests that use EML queries
In the call to MNReplication.replicate(), call back to CNReplication.setReplicationStatus() and set the status to failed when we get local exceptions or exceptions from the source MN when calling getReplica(). Send back an exception with a description when setting the status. Add a private setReplicationStatus() method to refactor these calls out.
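A hedged sketch of what such a private helper might look like (the logger name and the exact CN client signature are assumptions):

    // Illustrative only: report a (usually failed) replication status back to
    // the CN, including the exception that caused the failure.
    private void setReplicationStatus(Session session, Identifier pid,
            NodeReference localNodeId, ReplicationStatus status, BaseException failure) {
        try {
            CNode cn = D1Client.getCN();
            cn.setReplicationStatus(session, pid, localNodeId, status, failure);
        } catch (BaseException e) {
            // the status update itself failed; log it and move on -- the CN's
            // audit process will eventually reconcile the replica status
            logMetacat.error("Could not set replication status for " + pid.getValue(), e);
        }
    }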
Modify CNResourceHandler.setReplicationStatus() to use the new API signature, including the failure BaseException that is parsed out of the MMP as a file section. Log the exception message. Since this is an asynchronous call, ReplicationManager won't see a failed status, but the MNAuditTask eventually will.
Add collectReplicationStatus() to CNResourceHandler to return the BaseException or its subclass, if any, provided in the call to setReplicationStatus(). The exception will be reported on the CN.
Change setReplicationStatus() to drop serialVersion and report the failure exception message in the CN log.
Add new D1 jars with updated CNReplication API changes to setReplicationStatus().
query for deleted metadata when testing that replication communicated the deletion. To check data, we try to update the data object on the target node (which should fail).
add test for data locking
delete data and eml on the home Metacat and check that the change propagates to the target
set SystemMetadata.archived=true on MN.delete. There is ongoing discussion on what the exact behavior should be here, but this mimics Metacat's delete-as-archive action. http://redmine.dataone.org/issues/882
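In effect MN.delete() archives the object's system metadata rather than removing it; roughly (a sketch, map and variable names assumed):

    // Sketch only: mark the object archived instead of deleting it outright.
    // 'systemMetadataMap' is the shared Hazelcast system metadata map.
    SystemMetadata sysMeta = systemMetadataMap.get(pid);
    sysMeta.setArchived(true);
    sysMeta.setDateSysMetadataModified(Calendar.getInstance().getTime());
    systemMetadataMap.put(pid, sysMeta);   // propagates the change to the CN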
In MNodeService.replicate(), check to see if we have a replica (via an out of band channel) before we call sourceMN.getReplica().
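A sketch of that guard (the IdentifierManager lookup is an assumption about how the local check is done):

    // Illustrative: skip the byte transfer when a replica already exists
    // locally, e.g. placed there by an out-of-band process.
    String localId = null;
    try {
        localId = IdentifierManager.getInstance().getLocalId(pid.getValue());
    } catch (McdbDocNotFoundException e) {
        localId = null;   // no local copy yet
    }

    InputStream object = null;
    if (localId == null) {
        // no local replica: pull the bytes from the source Member Node
        object = sourceMN.getReplica(session, pid);
    }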
actually include the test in the suite. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
EML replication test with insert, update and set access. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
only create guid->docid mapping during metadata replication if it does not already exist. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
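A sketch of the existence check (the IdentifierManager method names are assumptions):

    // Illustrative: during replication, only create the guid -> docid mapping
    // when one is not already present, so re-replication does not fail on a
    // duplicate identifier row.
    IdentifierManager im = IdentifierManager.getInstance();
    if (!im.mappingExists(guid)) {
        im.createMapping(guid, localId);
    }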
do not treat access change as an update -- it should not attempt to retrieve the contents of the object. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
Change the ecogrid tag to 1.2.2.RC5.
only create guid->docid mapping during data replication if it does not already exist. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
remove xml_access.docid reference (oops). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
test update and set access during replication from A->B. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
updated D1 API -- removed Permission.REPLICATE and associated parameters