Add collectReplicationStatus() to CNResourceHandler to return the BaseException or its subclass, if any, provided in the call to setReplicationStatus. The exception will be reported on the CN.
Change setReplicationStatus() to drop serialVersion and report the failure exception message in the CN log.
set SystemMetadata.archived=true on MN.delete. There is ongoing discussion on what the exact behavior should be here, but this mimics Metacat's delete-as-archive action. http://redmine.dataone.org/issues/882
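A minimal sketch of the delete-as-archive behavior, assuming the MN service already has the object's SystemMetadata in hand; the helper name and the persistence step are illustrative, not Metacat's actual code:

    import java.util.Date;
    import org.dataone.service.types.v1.SystemMetadata;

    // Hypothetical helper: rather than removing the object, mark its system
    // metadata as archived and persist the change through the existing storage path.
    void archiveOnDelete(SystemMetadata sysmeta) {
        sysmeta.setArchived(true);                        // keep the object, hide it from routine discovery
        sysmeta.setDateSysMetadataModified(new Date());   // record when the flag was set
        // persisting the updated record is left to Metacat's normal save/update path
    }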
In MNodeService.replicate(), check to see if we have a replica (via an out of band channel) before we call sourceMN.getReplica().
only create guid->docid mapping during metadata replication if it does not already exist. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
do not treat access change as an update -- it should not attempt to retrieve the contents of the object. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
only create guid->docid mapping during data replication if it does not already exist. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5520
remove xml_access.docid reference (oops). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
updated D1 API -- removed Permission.REPLICATE and associated parameters
process system metadata before access rules (access control is now driven by GUID so the mapping needs to be there)
Change the key of the query result cache so that it now contains the real search value.
include SerialVersion in describe response. https://redmine.dataone.org/issues/2135 NOTE: d1 jars should be replaced once all schema changes are finalized and the generated d1_common code is committed to svn
ROLLBACK: check for non-public session in Metacat before showing the registry form. http://bugzilla.ecoinformatics.org/process_bug.cgi
check for non-public session in Metacat before showing the registry form. http://bugzilla.ecoinformatics.org/process_bug.cgi
include 'archived' system metadata element in backing DB store
add missing ';' to the end of the update command
only update accessfileid for our new guid-based records
move latest postgres access upgrade statements to oracle script. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
include revision clause when updating the accessfileid on the xml_access table
remove docid column in favor of guid. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
If a member node cannot be found in the node list matching the targetNodeSubject given in isNodeAuthorized(), throw a ServiceFailure exception.
Minor reformatting for readability.
update with latest d1_common/d1_lib (includes latest schema changes)
only handle 100 (consecutive!) docId generations per millisecond; otherwise the generated docid part becomes larger than Long.MAX_VALUE, which Metacat cannot fully handle.
check previous revision when attempting to update access control with EML 2.0.x docs. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
remove old access rules for a data object when they are being updated by rules contained in an EML document. Now the OnlineDataAccessTest EML 2.1.0 tests pass. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
construct the proper previousDocId when checking for update permission
for now, look up SystemMetadata directly from the table, otherwise we won't have the latest access information. Need to refresh the in-memory copy every time we edit the access policy via Metacat (includes EML parser)
check previous revision for permissions to update (includes data described by EML)
use correct "rev" column in xml_revisions table
refactor Metacat access handling to be on a per-revision basis so that it more closely aligns with the DataONE approach. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5560
To avoid id generation conflicts when ids are created within the same millisecond, append a 5-character random string to the end of the docid.
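For illustration, a sketch of the id-generation scheme described in the two docid entries above: a millisecond timestamp plus a bounded per-millisecond counter, followed by a 5-character random suffix. Class and method names, the "autogen" prefix, and the counter bound are illustrative, not Metacat's actual implementation:

    import java.security.SecureRandom;

    // Illustrative docid generator: timestamp + counter keeps the numeric part within
    // a manageable range, and the random suffix guards against same-millisecond collisions.
    public class DocIdSketch {
        private static final char[] ALPHABET = "abcdefghijklmnopqrstuvwxyz".toCharArray();
        private static final SecureRandom RANDOM = new SecureRandom();
        private static long lastMillis = -1;
        private static int counter = 0;

        public static synchronized String nextDocId() {
            long now = System.currentTimeMillis();
            if (now == lastMillis) {
                counter++;           // bounded (e.g. < 100) so the numeric part stays small
            } else {
                lastMillis = now;
                counter = 0;
            }
            StringBuilder suffix = new StringBuilder();
            for (int i = 0; i < 5; i++) {
                suffix.append(ALPHABET[RANDOM.nextInt(ALPHABET.length)]);
            }
            return "autogen." + now + counter + "." + suffix;
        }
    }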
retry: add node name in the correct order for predicate navigation. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5561
add node name in the correct order for predicate navigation. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5561
handle queries with predicates correctly. When docids are used in the return field lookup, we need to make sure they are included in the values in the correct order for their corresponding parameter placeholders (?).
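For illustration, the general JDBC pattern this fix relies on, with a hypothetical query and docid list; the method name is not Metacat's:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.List;

    // Bind docid values in the same order as their '?' placeholders appear in the SQL.
    ResultSet findReturnFields(Connection conn, String sql, List<String> docids) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(sql);
        int index = 1;
        for (String docid : docids) {
            stmt.setString(index++, docid);   // order must match the placeholder order
        }
        return stmt.executeQuery();
    }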
close prepared statement only if not null. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5562
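The null-guarded close looks roughly like this (plain JDBC; conn and sql are assumed to be in scope):

    PreparedStatement stmt = null;
    try {
        stmt = conn.prepareStatement(sql);
        // ... execute and process results ...
    } finally {
        if (stmt != null) {   // guard against an exception thrown before the statement was created
            stmt.close();
        }
    }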
fixed a bug that used the wrong table name, acces_log.
fixed a bug that used the acces_log table name.
Use accessblock in the setaccess method so that a user can grant/revoke public read access.
ensure that the revision list is ordered ascending in case someone changes the sql query without realizing that it matters...
set the byte size of the ORE map before adding it
set/update the obsoletes/obsoletedBy fields in system metadata so that we always have a complete revision history for each object. Note: ORE maps do not have revision history...yet(?)
order the revision list, ascending.
removing unused class -- can't find a reference to it and it's causing compilation errors for me.
for "all" permission, return a list of READ, WRITE, CHANGE_PERMISSION
now generating ORE maps and creating/updating system metadata. There are some Permission conversion issues still to be worked out.
look up access policy by guid or local id. TODO: resolve the Metacat/EML "all" permission as it relates to DataONE (there is only READ, WRITE, CHANGE_PERMISSION). For now I am using CHANGE_PERMISSION when it is a Metacat "all".
make exception/error reporting clearer -- we were getting lock messages when that was perhaps not the correct exception.
looking up all docids is now a static method (used for ORE/SystemMetadata generation)
Add log statements for each call to ILock.unlock() for debugging.
evict the HazelCast SystemMetadata entry if we update the access control rules via Metacat's legacy API, otherwise stale SystemMetadata stays in memory instead of being looked up from the backing table store.
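A minimal sketch of the eviction step, assuming access to the shared Hazelcast system metadata map; the map name here is illustrative:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import org.dataone.service.types.v1.Identifier;
    import org.dataone.service.types.v1.SystemMetadata;

    // After changing access rules through the legacy API, evict the cached entry so
    // the next reader reloads fresh SystemMetadata from the backing store.
    void evictStaleEntry(HazelcastInstance hz, Identifier pid) {
        IMap<Identifier, SystemMetadata> sysMetaMap = hz.getMap("hzSystemMetadata"); // illustrative map name
        sysMetaMap.evict(pid);
    }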
optionally include ORE generation/insertion into Metacat when generating SystemMetadata. https://redmine.dataone.org/issues/2056
Set a default HazelcastInstance after init() is called, and use this instance in getLock() to acquire a lock in the cluster.
no need to cast docInfo entries to String -- they are all strings
set revision history, the create/update dates and the owner/submitter (correctly)
use shared method for looking up "docInfo" map -- both in Metacat replication and in D1 system metadata generation
make default formatting a little bit easier to read
reformat code -- no changes
refactor SystemMetadata creation into a separate class from MetacatHandler -- this will be shared by the upgrade code and the normal Metacat API.
include all document revisions when generating "missing" system metadata. TODO: revision graph captured in obsoletes/obsoletedBy
When using ILock.lock(), get a lock on the string value of the Identifier, not the Identifier object itself. Hazelcast locking won't work otherwise.
Use the Hazelcast ILock mechanism to lock the system metadata identifier rather than using IMap.lock(pid).
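A sketch of the locking pattern from the two entries above: acquire the cluster-wide ILock on the identifier's string value rather than on the Identifier object or via IMap.lock(pid). The wrapper method is illustrative:

    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.ILock;
    import org.dataone.service.types.v1.Identifier;

    // Lock on pid.getValue(): the String key is stable across the cluster, whereas
    // locking the Identifier object itself does not behave as expected.
    void withPidLock(HazelcastInstance hz, Identifier pid, Runnable work) {
        ILock lock = hz.getLock(pid.getValue());
        lock.lock();
        try {
            work.run();
        } finally {
            lock.unlock();
        }
    }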
simplify SystemMetadata generation -- will be done during Metacat upgrade for D1 features/support.
clean up populator; use IOUtils library to do string<->stream conversions
Set sessionid for clientViewBean when it handles a request. Always set the order type to "allowFirst".
verify checksum when retrieving replica from another member node. https://redmine.dataone.org/issues/1794
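A sketch of the verification step, using java.security.MessageDigest directly rather than any particular DataONE utility class; the important part is comparing the recomputed digest against the checksum advertised in system metadata before accepting the replica:

    import java.io.InputStream;
    import java.security.MessageDigest;
    import javax.xml.bind.DatatypeConverter;

    // Recompute the checksum of the retrieved replica bytes and compare it to the
    // expected value (hex string) from the object's system metadata.
    boolean checksumMatches(InputStream replica, String algorithm, String expectedHex) throws Exception {
        MessageDigest digest = MessageDigest.getInstance(algorithm);   // e.g. "MD5" or "SHA-1"
        byte[] buffer = new byte[8192];
        int read;
        while ((read = replica.read(buffer)) != -1) {
            digest.update(buffer, 0, read);
        }
        String actualHex = DatatypeConverter.printHexBinary(digest.digest());
        return actualHex.equalsIgnoreCase(expectedHex);
    }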
make sure to get/put system metadata to the HZ map instead of using IdentifierManager directly. Verified changes for: https://redmine.dataone.org/issues/1999
match documentation for the MN.describe() header. https://redmine.dataone.org/issues/1904
configure synch schedule in the admin screen. https://redmine.dataone.org/issues/1933
look up the synch schedule from Metacat properties instead of hardcoding it. https://redmine.dataone.org/issues/1933
when comparing D1 Subject objects, use the equals() method, not direct string comparison. https://redmine.dataone.org/issues/2050
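The comparison change amounts to this, shown with a hypothetical helper:

    import org.dataone.service.types.v1.Subject;

    // Compare DataONE Subjects with equals(), not with == or by comparing string output directly.
    boolean sameSubject(Subject a, Subject b) {
        return a != null && a.equals(b);
    }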
access the nodeList list correctly. https://redmine.dataone.org/issues/2049
When reading an FGDC document, Metacat will add a new enableFGDCediting parameter to the params for the XML transform.
set the uploaded file size to -1.
allow unknown content sizes. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5543
run replicate() in a separate thread so that we don't wait for potentially large data objects to be moved around the system.
Call replicate() asynchronously.
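A sketch of running the replication work off the request thread with a plain Thread; the worker method, variables, and logger are assumed to be in scope and are illustrative (an executor would work equally well):

    // Kick off the (potentially long-running) replication work asynchronously so the
    // MN.replicate() call can return immediately instead of streaming large objects inline.
    Thread replicateThread = new Thread(new Runnable() {
        public void run() {
            try {
                getReplicaAndStore(sysmeta, sourceNode);   // hypothetical worker method
            } catch (Exception e) {
                logMetacat.error("Replication failed: " + e.getMessage(), e);  // assumes a logger field
            }
        }
    });
    replicateThread.start();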
Use status.toLowerCase() to deal with ReplicationStatus conversion issues. This needs to be reviewed.
Use Subject.equals() when comparing DNs rather than CertificateManager.equalsDN(). Don't lock the pid in isNodeAuthorized() to debug for timeout issues. Minor debugging changes.
give the Metacat admin users FULL permissions on all data/docs. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=4728
replication control panel now fully implemented as an admin configuration screen. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5528
move replication configuration actions to the admin servlet and out of the replication servlet. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5528
save SystemMetadata when replicating data and metadata -- this way if/when the node decides to be a DataONE MN it already has the information needed for each object
Minor logging for isNodeAuthorized(), and compare subjects properly. Change this to Subject.compareTo() when it is vetted.
check for authenticated and verified user permissions
throw NotAuthorized when there is no session
Catch RuntimeExceptions thrown by Hazelcast as opposed to general Exceptions so we don't catch exceptions we're trying to throw.
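The change boils down to catching only the runtime exceptions surfaced by Hazelcast, so that the service exceptions we deliberately throw propagate to the client. A generic fragment, with a placeholder detail code and assuming org.dataone.service.exceptions.ServiceFailure:

    try {
        // ... Hazelcast map/lock operations ...
    } catch (RuntimeException hzError) {
        // Hazelcast failures surface as runtime exceptions; wrap them for the caller.
        throw new ServiceFailure("0000", "Hazelcast error: " + hzError.getMessage());  // "0000" is a placeholder detail code
    }
    // Service exceptions such as NotAuthorized or NotFound are no longer caught here,
    // so they reach the client as intended.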
get params from multipart params for systemMetadataChanged call
generalize exception handling -- add cause detail
remove DataONE schema reference in xml_catalog
Changes to setReplicationStatus and isNodeAuthorized(), working out minor bugs in replication.
include exception cause when throwing new exception (combine RuntimeException and Exception handling -- they are almost identical)
throw InvalidToken when session is null
correct typo
Send the correct node id (the target node) when calling setReplicationStatus()
get pid from normal params, not the URL -- the client should include it in the params -- and not as a serialized "object" since it is just a string value
check obsoletes and obsoletedBy PIDs when updating objects
delete system metadata when MN.delete() is called.