Change the version to 2.6.0
Validate the session during the modification stage, rather than just assuming a CGI session (support tokens too).
Also, fix the XML document validation issue where an <additionalParty> element is added prior to the <metadataProvider> element. This seems to be intermittent, and may be due to more recent versions of Perl returning hash contents in a more random order than previous versions. The %orig hash passed in to personnelList() is now assumed to be unordered, and I just ensured the metadataProvider is first in the produced string....
Change the version of d1_common_java to 2.2.
Change the version of d1_cn_index_processor to 2.2
Change the version of d1_portal to 2.2
Changed the d1_libclient_java from 2.1 to 2.2
Change the number of fields in the test
Modify getCredentials() to support token-based credentials.
refs https://github.nceas.ucsb.edu/KNB/arctic-data/issues/42
Close the input stream object on the MN.replicate method.
use SM.fileName if we have it. https://projects.ecoinformatics.org/ecoinfo/issues/6970
ensure there is a file extension included for the data files in a package download. https://projects.ecoinformatics.org/ecoinfo/issues/6970
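As a rough illustration of the naming rule (the helper below is hypothetical, not Metacat's actual code): prefer the fileName recorded in system metadata, otherwise derive a name from the pid and make sure it carries a file extension.

    // Hypothetical helper illustrating the naming rule for package entries.
    public class PackageEntryName {

        public static String choose(String smFileName, String pid, String defaultExtension) {
            // prefer the fileName stored in system metadata when it is present
            String name = (smFileName != null && !smFileName.trim().isEmpty())
                    ? smFileName
                    : pid.replaceAll("[^A-Za-z0-9._-]", "_");
            if (!name.contains(".")) {
                // no extension present, append the one guessed from the format id
                name = name + defaultExtension;
            }
            return name;
        }

        public static void main(String[] args) {
            System.out.println(choose(null, "urn:uuid:1234", ".csv"));        // urn_uuid_1234.csv
            System.out.println(choose("temps.csv", "urn:uuid:1234", ".csv")); // temps.csv
        }
    }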
And don't forget the alternate Redhat instructions.
We ended up not being able to use IO::Socket::SSL, so I removed the import statement. Also, add the new Perl module dependencies to the installation documentation.
Add code to print the stack trace in the getPackage method in order to help us identify some tmp file issues.
1. Set knb to version 2.2. Centralize the lter tests into one class.
Change getUsername() to also support adding usernames into an EML access element from the authentication token's 'sub' claim. Also, add a bit of debugging output for tracing the flow of the XML generation.
Modify the Metacat.pm upload() method to use the correct Content-Type for the form. RFC 2388 specifies that the Content-Type should be "multipart/form-data", and that the Content-Disposition should be "form-data", along with the "name" parameter.
Also handle authentication tokens when uploading data (action=upload) by using Ben's RequestUtil.getSessionData() changes.
refs https://github.nceas.ucsb.edu/KNB/arctic-data/issues/43
Add the hasValidAuthToken() method to determine if the current token (if any) is valid. Use this and validateSession() within the script to determine whether we need to call Metacat->login() or not. Add some minor debugging to work through the code stages using auth tokens....
Fix a couple of syntax issues.
Don't forget to set the token variable from the HTTP_AUTHORIZATION environment variable.
Add a setAuthToken() method. When the HTTP_AUTHORIZATION environment variable is set, set its value as the `auth_token_header` instance variable in the Metacat instance passed in. This requires that the Apache installation include an HTTP rewrite rule to pass the header on to the CGI processing the request. Call this method whenever we instantiate a Metacat object....
Modify the Metacat.pm sendData() method to include the Authorization HTTP header when it's available as an instance variable.
refs https://github.nceas.ucsb.edu/KNB/arctic-data/issues/41
Add a new checksum for the new solr schema.
Add beans for the iso index.
Add a bean file for the iso index.
Add more fields for the iso index.
In the isScienceMetadata method, the ServiceFailure exception shouldn't be caught anymore, since the code doesn't throw it.
Change its mime type from binary to text.
In order to access the JWT authentication token, we need to configure the Apache installation to pass the Authorization header on to CGI scripts. Do this with mod_rewrite.
allow Metacat API calls to be made by clients providing their identity with a DataONE auth token. https://github.nceas.ucsb.edu/KNB/arctic-data/issues/43
comment out (duplicate) EML download link since it does not apply in the CN context and the same functionality exists elsewhere in the MetacatUI rendering. https://redmine.dataone.org/issues/7639
Removed the specified path for the LDAP CA file.
If the user doesn't specify the LDAP CA file path in metacat.properties, the default one will be used.
Remove the bypass button.
Update the ezid configuration screenshot.
Add the help icons on the page.
Use a new screenshot with the help icon.
Fixed a typo.
Add the ezid section.
Add the section for ezid.
Updated the screenshot for the configuration pages.
Add the items for the ezid backup.
Change the label.
fixed a label issue.
Add the java code to handle the ezid configuration.
Add a new property to show the status of configuration of the ezid service.
Add a page for configuring the ezid.
remove the password of the doi user.
The getPackage method should throw an exception since the id is a data object.
Put a detail code on the InvalidRequest exception in the getPackage method.
If the pid passed to the getPackage method is not a package id, the method will throw an InvalidRequest exception.
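Roughly the shape of that guard (assuming the DataONE InvalidRequest exception type; the isResourceMap() helper and the detail code value are illustrative, not the actual Metacat code):

    // Reject getPackage() calls for anything that is not a package
    // (resource map) id, e.g. a plain data object id.
    if (!isResourceMap(formatId)) {               // hypothetical helper
        throw new InvalidRequest("2873",          // illustrative detail code
            "The pid " + pid.getValue()
            + " is not a package id, so it can't be downloaded as a package.");
    }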
Make sure to close the prepared statement in the finally block.
Close the result set and connection in the finally clause to make sure they are closed.
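Both changes follow the usual JDBC close-in-finally pattern; a sketch (the query and the getConnection() helper are illustrative, not the actual Metacat code):

    // Close the ResultSet, PreparedStatement and Connection in a finally
    // block so they are released even when the query throws.
    Connection conn = null;
    PreparedStatement stmt = null;
    ResultSet rs = null;
    try {
        conn = getConnection();   // hypothetical pool lookup
        stmt = conn.prepareStatement("SELECT guid FROM systemmetadata WHERE guid = ?");
        stmt.setString(1, pid);
        rs = stmt.executeQuery();
        // ... read the result ...
    } finally {
        try { if (rs != null) rs.close(); } catch (SQLException e) { /* log and continue */ }
        try { if (stmt != null) stmt.close(); } catch (SQLException e) { /* log and continue */ }
        try { if (conn != null) conn.close(); } catch (SQLException e) { /* log and continue */ }
    }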
add warning when exception encountered loading SM into map.
Add the dataone create event mapping to the select clause.
map the metacat log events INSERT, upload and UPLOAD to the dataone log event "create"
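A sketch of what that mapping in the select clause looks like (the access_log table and column names are assumptions, not the exact Metacat query):

    // Map the legacy Metacat events INSERT, upload and UPLOAD to the
    // DataONE event name "create" directly in the select clause.
    String sql =
          "SELECT entryid, ip_address, principal, docid, date_logged, "
        + "       CASE WHEN lower(event) IN ('insert', 'upload') THEN 'create' "
        + "            ELSE event END AS event "
        + "FROM access_log WHERE date_logged >= ? AND date_logged < ?";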
comment out the party annotation-target - don't think we'll be using this anytime soon (had been for annotating with orcids)
set authoritative MN to origin MN if the client did not set it on mn.create. https://projects.ecoinformatics.org/ecoinfo/issues/6938
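Roughly what the default amounts to in mn.create(), assuming the DataONE SystemMetadata type (the surrounding create() logic is omitted):

    // If the client left the authoritative member node unset, default it to
    // the origin member node (this node) before saving the system metadata.
    if (sysmeta.getAuthoritativeMemberNode() == null
            || sysmeta.getAuthoritativeMemberNode().getValue() == null
            || sysmeta.getAuthoritativeMemberNode().getValue().trim().isEmpty()) {
        sysmeta.setAuthoritativeMemberNode(sysmeta.getOriginMemberNode());
    }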
remove ant commands in harvester configuration instructions. https://projects.ecoinformatics.org/ecoinfo/issues/6937
include metacat context in the redirect after successful harvester registration login. https://projects.ecoinformatics.org/ecoinfo/issues/6936
Add a policy_id column in the smReplicationPolicy table to help preserve the order of the nodes list.
Use "order by" to preserve the node order in the replication policy.
Use ServiceFailure to replace InvalidRequest when Metacat is in read-only mode (the CN throws the exception).
In the replicate method, the read-only mode check was moved from the MNodeService class to the MNResourceHandler class since the call is asynchronous.
The systemMetadataChanged method is asynchronous, so we put the read-only check in the ResourceHandler class.
Add the code to check if the MN is in read-only mode.
Add the code to check if Metacat is in read-only mode.
In the clean method, the metacatui build directory will be deleted as well.
Add the code to handle the read-only mode.
Add a junit test.
Add a class to determine if Metacat is in read-only mode.
Add a new property named application.readOnlyMode.
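A minimal sketch of such a checker, keyed on the new application.readOnlyMode property (the class and method names are illustrative, not necessarily Metacat's):

    import java.util.Properties;

    // Illustrative read-only checker; write paths consult it before doing
    // any work and fail fast when it returns true.
    public class ReadOnlyChecker {
        private final Properties props;

        public ReadOnlyChecker(Properties props) {
            this.props = props;
        }

        public boolean isReadOnly() {
            // application.readOnlyMode=true puts the whole instance in read-only mode
            return Boolean.parseBoolean(props.getProperty("application.readOnlyMode", "false"));
        }
    }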
Add the check that only the administrator can shrink the connection pool.
Change the version of the production CN from v1 to v2.
Add two new test methods to test systemMetadataPIDExists and systemMetadataSIDExists.
Close the sql statements on the four methods - getGUID, getHeadPID, systemMetadataSIDExist and systemMetadataPIDExist.
Close some prepared sql statements in the summarize method.
include expiration configuration option for NCEAS accounts to prevent errors during account registration. https://projects.ecoinformatics.org/ecoinfo/issues/6917
merge from 2.0 branch: use updated node list information from cn-dev so that test will match current state of env. https://redmine.dataone.org/issues/7534
merge from 2.0 branch: initialize mock cn for test to run successfully.
merge from branch: only check for d1 rightsholder when checking permissions in original metacat code base, otherwise legacy access control tests in metacat begin to fail. https://redmine.dataone.org/issues/7560
include check for d1 rightsholder when checking permissions in original metacat code base. https://redmine.dataone.org/issues/7560
Set the metacat to 2.6.0 and add the db upgrade scripts.
Add the sql file for the upgrade.
merge jing's commit from the 2.5 branch to include the 2.5.1 upgrade utility in properties file.
Removed the jar file since it is replaced by the jar file from Maven.
merge from 2.5.x branch: neglected to replace solr schema during 2.5.0 upgrade - this will do it for 2.5.1.
Escape the user names, group names and other information in the XML output.
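The escaping itself is the usual XML entity replacement; a self-contained sketch (the actual change may use a library helper instead):

    // Replace the five XML special characters before writing user-supplied
    // values (user names, group names, etc.) into the response document.
    static String escapeXml(String value) {
        if (value == null) {
            return "";
        }
        return value.replace("&", "&amp;")
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace("\"", "&quot;")
                    .replace("'", "&apos;");
    }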
Change the metacatui to 1.8.1
Add the keyword "select" to the list.
Reset the xml_catalog_id sequence value to the max value of the table.
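On PostgreSQL this amounts to a setval() call; a sketch assuming the usual Metacat names for the xml_catalog sequence and id column:

    // Bump the sequence to the table's current maximum so future inserts do
    // not collide with existing catalog ids (PostgreSQL syntax).
    String sql = "SELECT setval('xml_catalog_id_seq', "
               + "(SELECT COALESCE(MAX(catalog_id), 1) FROM xml_catalog))";
    statement.execute(sql);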
merge from branch: use hash for latest solr schema
add checks on archived flag to avoid NPE.
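SystemMetadata.getArchived() returns a Boolean that can be null, so the checks have to guard for that; a sketch:

    // Treat a missing archived flag as "not archived" instead of letting a
    // null Boolean blow up on auto-unboxing.
    boolean archived = sysmeta.getArchived() != null && sysmeta.getArchived().booleanValue();
    if (!archived) {
        // handle the object as a current (non-archived) one
    }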
only consult fields to merge if there was an existing referenced doc
remove "al" prefix from subquery since we are only querying one table and do not need to use a prefix.
added 4 new schema fields so need to account for them in the test case.