allow getlog action to use docid parameters that do not include revision. In these cases, the latest revision will be used.
handle case where we do not have a pathexpr to check. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
simplify the xml_access query, and instead use guid to check for permission. Now the docid/rev join (to get most recent version for search results) happens "higher up" in the query. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
include log stats for total 'read' events when rendering a package.
rework simple log stats so that there is no Saxon (XSLT 2) requirement
pass parameters to the getLog action for rendering in xslt
remove morpho.jar -- moved needed classes into shared utilities project. (currently building from utilities trunk -- be sure to 'ant fullclean' to get the latest utilities.jar built)
remove use of HttpMessage (in morpho.jar) in favor of standard httpclient methods for calling the servlet in tests
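For illustration, a minimal sketch of calling the Metacat servlet with standard Apache HttpClient (4.x era) follows; the class name, servlet URL handling, and action parameter are assumptions, not the actual test code.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.http.NameValuePair;
    import org.apache.http.client.entity.UrlEncodedFormEntity;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.apache.http.message.BasicNameValuePair;
    import org.apache.http.util.EntityUtils;

    // Hypothetical sketch of a servlet call with standard HttpClient, replacing
    // the HttpMessage helper that shipped in morpho.jar.
    public class MetacatServletCallSketch {
        public static String callServlet(String servletUrl) throws Exception {
            DefaultHttpClient client = new DefaultHttpClient();
            try {
                HttpPost post = new HttpPost(servletUrl);
                List<NameValuePair> params = new ArrayList<NameValuePair>();
                params.add(new BasicNameValuePair("action", "getlog"));
                post.setEntity(new UrlEncodedFormEntity(params, "UTF-8"));
                return EntityUtils.toString(client.execute(post).getEntity());
            } finally {
                client.getConnectionManager().shutdown();
            }
        }
    }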
Update d1_common_java and d1_libclient_java to the newest jar files. Add methods to CNodeService to throw NotImplemented exceptions for query(), listQueryEngines(), and getQueryEngineDescription() since these API calls are handled outside of metacat.
do not allow updates to orphan another branch of revision history. https://redmine.dataone.org/issues/3338
Change the set and get methods for the replication verified date to use java.sql.Timestamp (via setTimestamp()) rather than java.util.Date (via setDate()). The hh:mm:ss.SSS portion was previously getting truncated.
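A minimal sketch of the difference, using an illustrative table and column (not the exact replication schema):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    // Sketch only; table and column names are illustrative.
    public class ReplicationVerifiedDateSketch {
        public static void setVerifiedDate(Connection conn, String server) throws Exception {
            PreparedStatement pstmt = conn.prepareStatement(
                "UPDATE xml_replication SET last_checked_date = ? WHERE server = ?");
            pstmt.setTimestamp(1, new Timestamp(System.currentTimeMillis())); // keeps hh:mm:ss.SSS
            // pstmt.setDate(1, new java.sql.Date(...));  // old approach: time-of-day was truncated
            pstmt.setString(2, server);
            pstmt.executeUpdate();
            pstmt.close();
        }
    }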
include the subjects we are testing for authentication. https://redmine.dataone.org/issues/2778
remove the max(rev) clause in favor of a more straight-forward join to xml_documents (that will have the max rev). http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
add note about sanparks/saeon spatial file download: http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5718
include inverted sendParameters() method that uses the keys as values, and the values as keys so that multiple docid parameters can be specified for the zip download. This was a regression when moving to standard httpclient rather than the roll-your-own version we had been using. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5718
integrate ecoinformatics login with the CIlogon identity mapping flow so that a user is directed through the process with no manual navigation needed (at least in the url bar). https://redmine.dataone.org/issues/1480
use version 2.0.5
shorten the systemmetadata* table names for Oracle's 30 character limit. move version to 2.0.5. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5717
ajax-ify the call to perform identity mapping (including errors)
look up CN url for portal servlet instead of hardcoding it.
use alternative syntax for xml_access table update since Oracle does not handle joins in an update statement the same way as Postgres. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5717
correct Oracle syntax. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5717
use correct column (guid) for temporary index
use correct release date (September 2012)
use correct docid format when checking for existing mappings.
include sessionid when constructing the EML download link. pull from RELEASE_EML_UTILS_1_0_5 eml tag. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5709
use CDATA for docname field in docInfo so that the XML parser ignores content that can contain characters like "&"
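A hedged sketch of the idea (not the actual docInfo-building code; the class and method names are hypothetical):

    public class DocInfoSketch {
        // Wrap the docname in a CDATA section so characters such as '&' or '<' in
        // the name do not break parsing of the docInfo XML. (A name containing the
        // literal sequence "]]>" would still need extra handling.)
        public static String docnameElement(String docname) {
            return "<docname><![CDATA[" + docname + "]]></docname>";
        }
    }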
include sessionid when constructing the EML download link. pull from RELEASE_EML_UTILS_1_0_5_RC1 eml tag. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5709
use RELEASE_EML_UTILS_1_0_4 for EML stylesheets
use RELEASE_EML_UTILS_1_0_4_RC1 for EML stylesheets
include qformat parameter in docs for squery
remove mention of 'username' for getLastDocid -- should only use scope param
include missing identifier mappings during 2.0.4 upgrade (mappings may be missing due to previous replication between servers that do not house SystemMetadata)
use SchemaLocationResolver to fetch remote entries for the xml_catalog -- we want to be able to fetch included xsd files as well as use any error handling it provides for checking the schemas.
prep for 2.0.4 release
when performing query, make sure we are using the access rules of the latest revision of a given docid, otherwise we may include documents that used to be public but have been made private in subsequent revisions. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5696
correct the number of prepared statement parameters when inserting to xml_revisions table. Errors like the following were showing in the replication log file: knb 20120831-19:42:38: [ERROR]: DocumentImpl.writeReplication - Failed to create access rule for package: john.15950.1 because The column index is out of range: 12, number of columns: 11. [ReplicationLogging]
include the missing WHERE keyword in the SQL where clause - encountered by SAEON's node admin, Alex Niehaus.
use resourceMapLocation (resolve url for the ore map) as the datacite_relatedIdentifier_isPartOf property
use lowercase 'metadata' and 'data' for the resourceType
set publisher to the source system when publisher == creator (we want them to be different, even if just for appearances)
only include public (readable) DOIs in the final output
use "lastname, firstname" convention throughout
include more descriptive data file name for title of data records
include publisher given name correctly
create docid-guid mapping during replication if it does not exist. we were [incorrectly] assuming that there would be SM coming with the document info that would fill this information in, but for traditional non-MN Metacat deployments there is no SM to provide a mapping. In this case we use the docid as the guid.
include certificate export SSL options as an example (used heavily for DataONE and Metacat Replication)
stream the replication "update" response rather than building up a complete list in a StringBuffer. prompted by findings on the CN: https://redmine.dataone.org/issues/3141
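A rough sketch of the streaming approach, assuming hypothetical element names and a plain JDBC result set (not the actual handler code):

    import java.io.PrintWriter;
    import java.sql.ResultSet;
    import javax.servlet.http.HttpServletResponse;

    public class ReplicationUpdateSketch {
        // Write each entry to the response as it is read, instead of accumulating
        // the whole document in a StringBuffer and printing it at the end.
        public static void writeUpdates(HttpServletResponse response, ResultSet rs) throws Exception {
            PrintWriter out = response.getWriter();
            out.print("<?xml version=\"1.0\"?><updates>");
            while (rs.next()) {
                out.print("<updatedDocument><docid>");
                out.print(rs.getString("docid"));
                out.print("</docid></updatedDocument>");
            }
            out.print("</updates>");
            out.flush();
        }
    }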
make sure data objects correctly use force replicate with action "insert" https://redmine.dataone.org/issues/3138
correct the update statement for setting archived flag on SM where document revision does not exist in the xml_documents table
sleep before updating and deleting test documents - otherwise their index entries may not be fully written and this causes errors (update and delete first attempt to remove index references, but if those references are not in the DB yet they are not removed; the indexing thread then adds them later and the FK constraints make the delete fail). Since we know indexing occurs in a separate thread with a configured delay, we just use that same delay in our testing.
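A sketch of what the tests now do (the delay value is an assumption; the tests reuse whatever indexing delay is configured for the deployment):

    // Test helper sketch; the constant is illustrative only.
    public class IndexDelaySketch {
        private static final long INDEX_DELAY_MS = 3000; // assumed to match the configured indexing delay

        public static void waitForIndexing() throws InterruptedException {
            Thread.sleep(INDEX_DELAY_MS);
            // index entries for the test document should now exist, so update/delete
            // can remove them first without violating the FK constraints
        }
    }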
when updating a document on a remote server, we still need to use the previous docid to check that the user has permissions to do so (rather than the new id that is obsoleting the old id). This was discovered by M Servilla at LTER.
remove unused "dataonelogger"
prep for 2.0.3 release
allow SM resynch to be executed any time, not just during start up. https://redmine.dataone.org/issues/3116
change to debug log level when processing shared/local pids
only lock the missing pid event if we know we have it locally to contribute. https://redmine.dataone.org/issues/3117
Add locking to the itemAdded() method so ideally only one CN will respond to the request for a 'wanted' pid from the cluster. The lock is on a string, not the pid, and so won't conflict with system metadata locking. The string is based on the pid, with "missing-" as a prefix.
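A minimal sketch of the locking idea, with assumed Hazelcast 2.x-era calls; the instance lookup and surrounding code are illustrative, not the actual itemAdded() implementation:

    import java.util.concurrent.locks.Lock;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class MissingPidLockSketch {
        public static void onWantedPid(String pid) {
            HazelcastInstance hz = Hazelcast.getDefaultInstance(); // assumed lookup
            // lock on a prefixed string, not the pid, so it cannot collide with the
            // locks used for the system metadata entries themselves
            Lock lock = hz.getLock("missing-" + pid);
            if (lock.tryLock()) {
                try {
                    // only the node that wins the lock responds to the request
                } finally {
                    lock.unlock();
                }
            }
        }
    }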
only publish to the missing pid "wanted list" when resynching system metadata. we were seeing redundant entry added/updated events when looking up the shared systemmetadata first.
print the missing pid count, not the total shared pid count so we know how many will be processed.
change the system metadata resynch approach: nodes will publish PIDs that they are missing after inspecting the shared identifier set. other nodes will be listening for the "wanted" pids and will put their local copy of SystemMetadata on the shared SM map. This should dramatically decrease the hazelcast chatter during a resynch and targets only the pids that are missing from any of the various nodes.
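A rough sketch of that flow using hypothetical Hazelcast structure names; the topic/map names and the local lookup are illustrations, not the actual configuration:

    import java.util.Set;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;
    import com.hazelcast.core.ITopic;
    import com.hazelcast.core.Message;
    import com.hazelcast.core.MessageListener;
    import org.dataone.service.types.v1.SystemMetadata;

    public class ResyncFlowSketch {
        // publisher side: announce only the pids this node is missing
        public static void publishMissing(HazelcastInstance hz, Set<String> missingLocally) {
            ITopic<String> wanted = hz.getTopic("hzWantedPids");
            for (String pid : missingLocally) {
                wanted.publish(pid);
            }
        }

        // listener side: contribute a local copy of the SystemMetadata when available
        public static void listenForWanted(HazelcastInstance hz) {
            final IMap<String, SystemMetadata> sharedSm = hz.getMap("hzSystemMetadata");
            ITopic<String> wanted = hz.getTopic("hzWantedPids");
            wanted.addMessageListener(new MessageListener<String>() {
                public void onMessage(Message<String> message) {
                    String pid = message.getMessageObject();
                    SystemMetadata local = lookupLocal(pid);
                    if (local != null) {
                        sharedSm.putIfAbsent(pid, local);
                    }
                }
            });
        }

        private static SystemMetadata lookupLocal(String pid) {
            return null; // placeholder for a lookup against the node's local store
        }
    }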
logging for processing identifier set on restart.
remove possibility for infinite loop in case data replication is not configured for the server and a data file is encountered (yikes!)
added debug logging statements to see where the replication timeout might be occurring.
use correct EZID account names for the three different nodes. https://redmine.dataone.org/issues/2815
align the final column headers with the datacite schema, as applicable. https://redmine.dataone.org/issues/2815
add block for finding and updating records that should be marked as archived. https://redmine.dataone.org/issues/3109
use DataCite isNewVersionOf/isPreviousVersionOf for revision history
include JCS jar as it is a runtime dependency for d1_libclient's object caching.
check for null archived flag in ORE SM. https://redmine.dataone.org/issues/3046
check if the caller is the Node admin (the member node calling itself) as well as the existing check for the CN calling the service. Both of those callers should be given full admin rights.
add note about DataONE CA chain file when configuring MNs at Tier 2+
not every EML file has an ORE datapackage descriptor -- join only to those when setting the resourceMapId
correctly use document revision for object format and resource map joins.
use local Set processing to determine which pids (if any) should be contributed to the shared set by this node during the resync. Should save time rather than checking each and every pid against the shared set.
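A sketch of the set arithmetic (class, method, and variable names are illustrative):

    import java.util.HashSet;
    import java.util.Set;

    public class PidDiffSketch {
        // Compute the pids to contribute with one local removeAll() call instead of
        // probing the shared identifier set once per local pid.
        public static Set<String> pidsToContribute(Set<String> localPids, Set<String> sharedPids) {
            Set<String> missingFromShared = new HashSet<String>(localPids);
            missingFromShared.removeAll(sharedPids); // keep only pids the shared set lacks
            return missingFromShared;
        }
    }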
move the hzIdentifiers initialization into the resync thread so that it does not affect start up time. cleaned up unused methods and superfluous code.
use correct children of 'publisher' element
only load local pids into hzIdentifiers if they do not already exist in the shared set. increase logging severity and detail of messages emitted during this process to get a better sense of what is taking so long.
utility methods to update/reserialize existing ORE maps that were generated with older foresite (and included bad dateTime strings). https://redmine.dataone.org/issues/3046
include the resourceMapId for the metadata objects, not just the data files.
updated LDAP dump and corrected missing entries that had been removed from LDAP.
On the coordinating Nodes, we often get McdbDocNotFoundExceptions for data (doctype == 'BIN') documents because they are not synchronized to the CNs. Change the logging to only print the stack trace during load() and loadAll() when log debug is enabled.
check for invalid (!) pids. thanks, M. Reyes, for catching this. https://redmine.dataone.org/issues/3047
only look up the client timeout property once, not every time we make a call. https://redmine.dataone.org/issues/3078
improve content type handling during the get() calls. https://redmine.dataone.org/issues/3070
check for whitespace in identifiers during create() and update(). https://redmine.dataone.org/issues/3047
remove semtools skin as a configured skin -- will need to add that if we ever get back to deploying a semtools instance.
configurable replication client timeout. https://redmine.dataone.org/issues/3078
order the listObjects() results by identifier to mitigate random paged results. https://redmine.dataone.org/issues/3065
correct the parameter/value setting in the prepared statements for retrieving log information.
use docid, not the guid when returning the accesscontrol block
handle null givenNames from the LDAP dump.
make sure we only get the publisher text content (not attribute value)
DOI registration:
- include more revision history based on the identifier table, not just the generated SM metadata
- include ecogrid data urls for revisions (long query in xml_nodes_revisions table)
include new libclient jar that uses encoded pids in the resolve URLs. https://redmine.dataone.org/issues/3035
update D1 jars in preparation for 2.0.2 release. NOTE: still need libclient jar that includes ORE changes for encoding PIDs in resolve URLs
prep for 2.0.2 release by updating the version numbers.
include dataone.ore.downloaddata as a configurable property in case MNs (like LTER) want to have the process download externally-stored data files described in an EML data package.
updated foresite (snapshot) to include dateTime serialization fix. https://redmine.dataone.org/issues/3035
set date SM modified when we are setting obsoletes/obsoletedBy/archived values. This way the CN can actually pick up the changes in revision history.