include pidFilter handling - only matches the complete pid. Issues a warning in the Metacat logs when pidFilter cannot be applied but allows the call to getLogs() to return as though there was no pidFilter given. https://redmine.dataone.org/issues/2798
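A minimal sketch of the intended pidFilter behavior (the IdentifierManager lookup and variable names here are assumptions, not the actual implementation):

    // match only the complete pid; if the filter cannot be applied, warn and continue as if no filter were given
    if (pidFilter != null && !pidFilter.trim().isEmpty()) {
        try {
            // hypothetical lookup of the local docid for this guid
            String localId = IdentifierManager.getInstance().getLocalId(pidFilter);
            // ... restrict the log query to this localId ...
        } catch (Exception e) {
            logMetacat.warn("Could not apply pidFilter '" + pidFilter + "': " + e.getMessage());
            // fall through: return log records as though no pidFilter was given
        }
    }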
use at least one thread on single-processor machines. https://redmine.dataone.org/issues/2800
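A sketch of the guard, assuming the pool size had been derived from the processor count (which evaluates to zero threads on a single-core machine):

    // clamp the pool size to at least one thread; availableProcessors() - 1 is 0 on a single-processor box
    int poolSize = Math.max(1, Runtime.getRuntime().availableProcessors() - 1);
    ExecutorService executor = Executors.newFixedThreadPool(poolSize);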
Change the database.maximumConnections property to 100. PostgreSQL's docs say it can handle "a few hundred" connections, which would require raising max_connections above its default of 100. For DataONE optimization we increase max_connections, but there are processes other than Metacat making connections, so I'll reduce Metacat's default share.
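For reference, the two settings involved (values shown are illustrative, not prescriptive):

    # metacat.properties
    database.maximumConnections=100

    # postgresql.conf -- raise above Metacat's share to leave headroom for other clients
    max_connections=200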
script for re-applying missing FK constraints on KNB production DB. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5608
include TRACE level debugging for specific classes we want to have performance metrics for.
Add a few logging statements for round trip replication metrics.
add trace statements for measuring time to complete SM generation.
new D1 jars: prevent NPEs from the object format cache when formatId.value is null. This came up during PISCO testing.
default replication policy set to 0.
instead of generating SM and ORE maps during dataone configuration/MN registration, moved this all to the replication admin screen where we can target generation for specific nodes. That way it's more controlled as to when and where we generate DataONE required content....
include all EML versions (had been only EML 2.1 for testing)
new d1 jars for: remove exception from method decl - was not matching the interface def and not compiling.
Append more information such as user name and group to the validating session response.
remove exception from method decl - was not matching the interface def and not compiling.
add "Generate System Metadata" button to the replication server list display. When clicked, we generate SM for records belonging to that source server. This is only enabled when DataONE has been configured.https://redmine.dataone.org/issues/2762
expose serverLocation parameter to run GenerateSystemMetadata for different replication partners as needed. https://redmine.dataone.org/issues/2740
only generate system metadata for original objects. https://redmine.dataone.org/issues/2721
test for running concurrent Metacat queries to mimic Kepler data search. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5518
check if person's equivalentIdentity list is null before processing recursively. https://redmine.dataone.org/issues/2689
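A minimal sketch of the guard (the accessor name follows the DataONE v1 Person type but is shown here as an assumption):

    List<Subject> equivIds = person.getEquivalentIdentityList();
    if (equivIds != null) {
        for (Subject equivSubject : equivIds) {
            // recurse on each equivalent identity as before
        }
    }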
D1 common lib AuthUtils update
include testSynchronizationFailed() and call as the CN subject so that it is authorized.
use MN (self) as the Session.subject so that the MN.delete() call is successful.
handle authorization for delete() differently for CN vs MN. On the CN, only the CN (or TBD admin user) can call it. On the MN, both the CN (or admin user) and the same MN can call it.
comment out testDelete because it requires acting as the MN; comment out testSynchronizationFailed because it requires acting as the CN
uncomment the MN tests (I bet this was an oversight during local testing)
add Session-less archive() method
jars with CN/MN.archive() libclient implementations
only admin users can call MN/CN.delete(). This is limited to any CN and only the MN that is calling itself
update the sysmeta date modified when setting archived=true. https://redmine.dataone.org/issues/882
handle CN.archive() rest call: PUT /archive/{pid}. https://redmine.dataone.org/issues/2678
correct log about 'archive' being called
handle 'archive' rest calls. https://redmine.dataone.org/issues/2678
updated d1 jars
[optionally] do not archive the xml_documents and xml_nodes to *_revisions when 'deleting' a document. This will effectively guarantee that the document/data cannot be retrieved after delete. NOTE: D1 system metadata will persist (for now) so that the ID cannot be reused with the DataONE API but Metacat calls may allow the ID to be reused -- may need to reconsider this behavior....
optionally remove the document/data file from the filesystem completely when 'deleting' it. https://redmine.dataone.org/issues/2677
newer d1 jars that include shared AuthUtils method for isAuthorized() consistency. https://redmine.dataone.org/issues/2661
implement MN and CN.archive() method -- really just the existing delete() methods. https://redmine.dataone.org/issues/2674 https://redmine.dataone.org/issues/2675
call MN.delete() for each replica when CN.delete() is called. https://redmine.dataone.org/issues/2676
defer to AuthUtils for flattening out the equivIdent subject list. https://redmine.dataone.org/issues/2661
check normal access control rules for getSystemMetadata before deferring to MN replica information that may grant MNs additional access to the SM. https://redmine.dataone.org/issues/2656
include Session-less interface methods and updated jars that define them.
use a shared ExecutorService for replicate() calls. https://redmine.dataone.org/issues/2623
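Roughly, the idea is one executor shared by all replicate() requests instead of a new executor per call; a sketch (field and task body are assumptions):

    // single shared executor for all asynchronous replicate() work
    private static final ExecutorService replicationExecutor =
            Executors.newFixedThreadPool(Math.max(1, Runtime.getRuntime().availableProcessors() - 1));

    // later, inside replicate():
    replicationExecutor.submit(new Runnable() {
        public void run() {
            // retrieve the object from the source node and create it locally
        }
    });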
remove extraneous pid and permission parameters from isAdminAuthorized() method and make public so that it can be called in other locations - namely before our asynchronous replicate() implementation on the MN.
check for an empty or null (missing) node.subjectList. This should probably be a required element in the D1 schema, but it appears not to be (the ORNL entry was missing subjects in the cn-dev environment)
just use e.getMessage() since e.getCause() may be null (seeing an NPE when testing via the MN IT tester)
added 2.0.0 targeted bugs to the release notes and fleshed out other major enhancements in the list
do not record EML access rules that use the "denyFirst" permOrder. https://redmine.dataone.org/issues/2614
needed to initialize the nodeList that stores matching nodes (by subject) -- this was the source of an NPE when we had a matching node subject.
do not create the docid-guid mapping unless we are supposed to write access rules for the data to the database. https://redmine.dataone.org/issues/2572
As Ben suggested, don't compare to the node list if there are no replicas listed. This reduces the number of calls to listNodes() on the CN.
Minor logging change in throwing ServiceFailure when Hazelcast throws a RuntimeException.
Modify getSystemMetadata() to allow nodes that are listed as replicas to access the system metadata. Use the Session.Subject to find a list of nodes from the CN that match the subject, and compare those node ids to the listed replica node ids. Add listNodesBySubject() helper method to do so.
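A sketch of the listNodesBySubject() helper described here (accessor names are assumed from the JiBX-generated DataONE v1 types):

    private List<Node> listNodesBySubject(Subject subject, NodeList nodeList) {
        List<Node> matchingNodes = new ArrayList<Node>(); // must be initialized (see the NPE fix noted above)
        for (Node node : nodeList.getNodeList()) {
            List<Subject> nodeSubjects = node.getSubjectList();
            if (nodeSubjects != null && nodeSubjects.contains(subject)) {
                matchingNodes.add(node);
            }
        }
        return matchingNodes;
    }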
release notes for 2.0.0
correct typo for "dataone.mn.services.enabled" property on the admin screen checkbox
save backup properties before attempting node registration/update so that we don't "forget" the user input
add a parameter for optionally writing EML-embedded access control rules to the Metacat DB. https://redmine.dataone.org/issues/2584 https://redmine.dataone.org/issues/2583
added comments and logging about https://redmine.dataone.org/issues/2572
generalize the exception handling because our actions are the same no matter what the specific error is during create - we just notify the CN that the replicate call failed
catch general Exception that may be thrown during MN.replicate() when creating the object locally. There are a few records that keep slipping off our radar with no explanation as to why they remain in "REQUESTED" status.
do not download data at this point
catch errors for each localid we are processing so that they do not prevent other ids from having ORE content generated
additional debug logging for tracking down MN replication errors
only 2.1.0 EML docs for ORE generation right now...
band-aid for CN-CN replication permOrder issue when access control is embedded in EML and the system metadata is replicated before the EML. we just log the inconsistency and allow the insert to succeed https://redmine.dataone.org/issues/2583
It looks like jk.conf and workers.properties were moved in the scripts dir: update the install docs accordingly.
Fixed a minor typo in the tomcat config section.
add comment about returning early when no system metadata can be found. Removed extraneous check on the content type of the SM -- it was unused. Formatted indenting.
for SystemMetadata events we first check the event for the SM value. If it returns null, we look it up from the shared map. It seems as if we don't always get a value with our events.
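A sketch of the fallback in the entry listener (Hazelcast EntryListener API; map and variable names are assumptions):

    public void entryAdded(EntryEvent<Identifier, SystemMetadata> event) {
        SystemMetadata sysMeta = event.getValue();
        if (sysMeta == null) {
            // the event did not carry the value; fall back to the shared system metadata map
            sysMeta = systemMetadataMap.get(event.getKey());
        }
        // ... persist sysMeta to the local backing store ...
    }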
comment out: synchronize local system metadata on cn restart
synchronize local system metadata on cn restart
additional logging in MN.replicate()
double check "ecogrid" data urls for valid docid.rev - namely integer rev numbers - when parsing EML and also generating system metadata when necessary. Log the errors as warnings.
log calls to store() system metadata to the backing store
actually use the filter token for stmml-1.1 schema
register stmml-1.1 schema (distributed as part of EML 2.1.0) in an effort to avoid unnecessary network traffic or the failed retrieval of the stale XSD sitting on unofficial servers
Add the listener for LifecycleEvent state changes
synchronizeLocalStore() when the cluster has a LifecycleEvent state change to RESUMED.
refactor memberAdded code into a separate method -- synchronizeLocalStore() -- for possible reuse
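A sketch of how the three entries above fit together (Hazelcast LifecycleListener API from the 2.x series; everything other than synchronizeLocalStore() is an assumption):

    public void stateChanged(LifecycleEvent event) {
        if (event.getState() == LifecycleEvent.LifecycleState.RESUMED) {
            // the cluster has resumed; bring the local backing store up to date with the shared map
            synchronizeLocalStore();
        }
    }

    private void synchronizeLocalStore() {
        // refactored out of memberAdded() so both code paths can reuse it
    }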
handle last group of ids (oops)
use range of the list for test system metadata
use non-random list for generating system metadata in test mode
include debug statements for systemMetadataReplicationStatus and systemMetadataReplicationPolicy SQL
change ordering of getLogRecords() parameter -- pidFilter is in the middle now
use 'formatId' for the listObjects() parameter. https://redmine.dataone.org/issues/2550
upgrade to latest RC in libclient and common jars -- includes updated getLogRecords and new mn.generateIdentifier method
use MembershipListener to keep new members' backing store for system metadata synchronized with the shared system metadata map; remove the unused InstanceListener interface
Modify deleteReplica() to use parameters parsed from the mime multipart entity rather than the request params. Need to check that the unit test uses MMP params. This partially addresses https://redmine.dataone.org/issues/2526.
Modify CN.setObsoletedBy() to use parameters parsed from the mime multipart entity rather than the request params. Need to check that the unit test uses MMP params. This partially addresses https://redmine.dataone.org/issues/2526.
Modify reserveIdentifier() to use parameters parsed from the mime multipart entity rather than the request params. Need to check that the unit test uses MMP params. This partially addresses https://redmine.dataone.org/issues/2526.
Don't throw a JibXException, but rather convert it to a ServiceFailure.
Modify owner() to set the rights holder from parameters parsed from the mime multipart entity rather than the request params. Need to check that the unit test uses MMP params. This partially addresses https://redmine.dataone.org/issues/2526.
Add a collectMultipartParams() convenience method to D1ResourceHandler to parse multipart parameters from the entity when the entity contains no file parts.
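A sketch of such a helper using Apache Commons FileUpload (the library choice and method shape are assumptions about the general approach, not the actual D1ResourceHandler code):

    protected Map<String, String[]> collectMultipartParams(HttpServletRequest request) throws Exception {
        Map<String, String[]> params = new HashMap<String, String[]>();
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        for (Object obj : upload.parseRequest(request)) {
            FileItem item = (FileItem) obj;
            if (item.isFormField()) {
                // the entity contains no file parts, so every item is a simple form field
                params.put(item.getFieldName(), new String[] { item.getString() });
            }
        }
        return params;
    }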
add logging statements when there is a problem calling setReplicationStatus
Get the serialVersion param from the MMP params map rather than the request object params map in setAccess().
Add a few more debugging statements to HazelcastService for troubleshooting hazelcast map concurrency.
handle case where an EML access rule "permission" is not in our constrained list (an EML 2.0.0 doc showed this with a "none" permission for the public principal). We now omit this invalid access rule when interpreting it in system metadata -- effectively dropping that invalid access rule. "none" had been stored as a 0 in the DB xml_access table and would not have given or denied access for the document, so I think it can safely be omitted for good. For example, see knb-lter-gce.101.2 with this rule:...
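A sketch of the mapping step (the string comparisons shown are an assumption about the general approach):

    // only accept permissions from the constrained list; drop anything else (e.g. "none")
    Permission permission = null;
    if ("read".equalsIgnoreCase(emlPermission)) {
        permission = Permission.READ;
    } else if ("write".equalsIgnoreCase(emlPermission)) {
        permission = Permission.WRITE;
    } else if ("changePermission".equalsIgnoreCase(emlPermission)) {
        permission = Permission.CHANGE_PERMISSION;
    } else {
        // skip the rule entirely rather than carry an invalid value into system metadata
        logMetacat.warn("omitting EML access rule with unrecognized permission: " + emlPermission);
    }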
Use java.util.Calendar rather than com.ibm ...
Also allow MNs to set the FAILED status in setReplicationStatus(). This was an oversight on my part, trying to keep MNs that truly did succeed from overriding the COMPLETED status with FAILED.
use Java-based temp file creation instead of Date (ms) timestamp to ensure uniqueness of the file and avoid re-use by two concurrent threads.
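A sketch of the change (prefix and directory are illustrative):

    // File.createTempFile guarantees a unique name, unlike a System.currentTimeMillis() suffix,
    // so two concurrent threads can no longer end up writing to the same file
    File temp = File.createTempFile("metacat-", ".tmp", temporaryDirectory);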