Restore the database from a gzipped sql file.
Added the code to gzip the output file in the dumpall command. This will save disk space.
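A minimal sketch of the dump-and-restore pair described above, assuming a backup file named metacat-backup.sql.gz and a local postgres superuser (both names are illustrative, not the actual script):

    # Dump all databases and gzip the output to save disk space
    pg_dumpall -U postgres | gzip > /var/metacat/backup/metacat-backup.sql.gz

    # Restore from the gzipped SQL file without unpacking it on disk first
    gunzip -c /var/metacat/backup/metacat-backup.sql.gz | psql -U postgres postgres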
Modified the redirect rules, which previously didn't work.
Add the code to set the attribute useHttpOnly='false' on the Context element in the context.xml file.
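One way a script might add that attribute, assuming the Context element has no other attributes and the file lives at /etc/tomcat7/context.xml (both are assumptions):

    # Set useHttpOnly='false' on the Context element if it is not already set
    if ! grep -q "useHttpOnly" /etc/tomcat7/context.xml; then
        sed -i "s/<Context>/<Context useHttpOnly='false'>/" /etc/tomcat7/context.xml
    fi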
Fixed a bug so the code can get the argument from the command line.
Added the code to change the permission of the backup file so the postgres user can read it.
Added the code to check if the metacat backup file exists. Added the code "exit 0" at the end of the scripts.
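A sketch of the existence check and permission change from the two entries above (the backup path is a placeholder):

    BACKUP_FILE=/var/metacat/backup/metacat-backup.sql.gz

    # Check that the metacat backup file exists before going any further
    if [ ! -f "$BACKUP_FILE" ]; then
        echo "Backup file $BACKUP_FILE not found" >&2
        exit 1
    fi

    # Give the postgres user read access to the backup file
    chown postgres "$BACKUP_FILE"
    chmod 640 "$BACKUP_FILE"

    exit 0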
Added the .conf extension to those site files since the new apache server only loads the sites with that extension.
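Apache 2.4 only loads files in sites-available that end in .conf, so the rename can be scripted roughly like this (the site name knb is just an example):

    # Rename the site files so the new Apache server will load them
    cd /etc/apache2/sites-available
    for site in *; do
        case "$site" in
            *.conf) ;;                      # already has the extension
            *) mv "$site" "$site.conf" ;;
        esac
    done

    # Re-enable a renamed site and reload Apache
    a2ensite knb.conf
    service apache2 reload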
Add the code to vacuum the db.
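The vacuum step can be as small as a single command (assuming the database is named metacat):

    # Reclaim space and refresh planner statistics for the metacat database
    sudo -u postgres vacuumdb --analyze metacat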
Add the code to enable port 8009 for AJP connections in the server.xml of tomcat7.
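A sketch of how a script could add the connector, assuming server.xml does not already define an active connector on 8009:

    SERVER_XML=/etc/tomcat7/server.xml

    # Insert an AJP 1.3 connector on port 8009 just before the closing </Service> tag
    sed -i 's|</Service>|  <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />\n</Service>|' "$SERVER_XML"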
Added the code to modify the geo data directory in the metacat.properties file.
Fixed some bugs in restoring PostgreSQL data.
Add the code to install some components needed for installing tomcat7.
Use a variable to replace the hard-coded value.
Use the version directly.
Fixed a typo.
put the date into an output file.
A script for vacuuming the db.
Add a script to move the db from 8.3 to 9.1 based on a dumped file.
Use the pg_dumpall command to back up all information of the database.
Add the code to back up the certificate/key of the apache server.
Removed the code to write the backup file to a dvd drive.
Added the code to copy Metacat and other web applications from the webapps directory in tomcat6 to tomcat7. It also updates the value of deploy.applicationDir in the metacat.properties file.
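Roughly, the copy and the property update look like this (the webapps paths and the metacat.properties location are assumptions):

    # Copy the deployed web applications from Tomcat 6 to Tomcat 7
    cp -r /var/lib/tomcat6/webapps/. /var/lib/tomcat7/webapps/

    # Point Metacat at the new deployment directory
    sed -i 's|^deploy.applicationDir=.*|deploy.applicationDir=/var/lib/tomcat7/webapps|' \
        /var/lib/tomcat7/webapps/metacat/WEB-INF/metacat.properties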
Only insert new configuration lines once into /etc/tomcat7/catalina.properties.
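Appending only when the line is missing keeps the script idempotent; a sketch (the property shown is only an illustration):

    CATALINA_PROPS=/etc/tomcat7/catalina.properties
    NEW_LINE='org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true'

    # Append the configuration line only if it is not already present
    grep -qF "$NEW_LINE" "$CATALINA_PROPS" || echo "$NEW_LINE" >> "$CATALINA_PROPS"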
Used variables to replace the hard-coded values in the sed command. The variables also have to be escaped.
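The escaping matters because the substituted values usually contain slashes; a sketch of the pattern (the property name and path are placeholders):

    NEW_DIR=/var/lib/tomcat7/webapps

    # Escape the slashes so the value can be embedded in a sed expression
    ESCAPED_DIR=$(echo "$NEW_DIR" | sed 's/\//\\\//g')

    sed -i "s/^deploy.applicationDir=.*/deploy.applicationDir=$ESCAPED_DIR/" metacat.properties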
include type="party" attribute for the divs that contain contact information (for annotating with ORCIDs)
Used variables to replace the hard-coded values.
Add the code to modify the workers.properties file.
Fixed typo.
use http://tools.ietf.org/rfc/rfc3023 spec for conformsTo property. use the full xpath for EML dataTable and attribute selectors
Add a script to install openjdk 7 and tomcat 7. It also configures java, javac, keytool and tomcat7.
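A sketch of those steps on a Debian/Ubuntu host (the JDK path is architecture specific and the alternatives priority is arbitrary):

    # Install OpenJDK 7 and Tomcat 7 from the distribution packages
    apt-get update
    apt-get install -y openjdk-7-jdk tomcat7

    # Point java, javac and keytool at the OpenJDK 7 binaries
    JDK=/usr/lib/jvm/java-7-openjdk-amd64
    update-alternatives --install /usr/bin/java java "$JDK/jre/bin/java" 100
    update-alternatives --install /usr/bin/javac javac "$JDK/bin/javac" 100
    update-alternatives --install /usr/bin/keytool keytool "$JDK/jre/bin/keytool" 100

    # Restart Tomcat 7 to pick up the new JVM
    service tomcat7 restart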
Remove an import.
Removed an import.
Added the junit tests to verify that the NotFoundException contains the deleted information.
Added the code to inform users, in the NotFound exception, that the pid was deleted.
Added the code in the isAuthorized method to check if a not-found object was deleted.
Move the code that gets the object ahead of the call that gets the system metadata.
Add a test method to test the method determining if there is a delete event for the given id.
Add a utility method to determine if there is a delete event for a given id.
when we remove a solr index of a resource map, we don't need to know the content of the resource map. Instead, we search the solr index to get the information.
Remove the byte array field.
Removed the method which had the byte array attribute.
Remove the system metadata for data objects.
change the way to delete the solr index of a resource map.
Back up the /etc/apache2/sites-enabled directory.
Remove the code to stop/start the ldap server. Change the script name to stop/start tomcat. Also back up metacat.properties.
Replace the operator "=~" with "eq" when comparing the two password fields.
Add a new routine to check if the uid has already been taken in the production space during the creation process.
In the EML XSLT - Add classes to the entity details containers so they can be identified easily
add /token endpoint for annotatorJS/annotateIt.org integration. https://github.com/DataONEorg/sem-prov-design/issues/18
Add code to determine if an id exists in a solr server.
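The check amounts to querying the solr id field and looking at numFound; from the command line the equivalent query looks roughly like this (host, core name and pid are assumptions):

    PID='doi:10.5063/AA/example'

    # Ask solr how many documents carry this id; a numFound of 0 means it does not exist
    curl -s "http://localhost:8983/solr/collection1/select" \
         --data-urlencode "q=id:\"$PID\"" \
         --data-urlencode "rows=0" \
         --data-urlencode "wt=json"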
include resource="#xpointer(...)" attributes for sections that are potential targets for annotation. So far we have the various people (creator, contact, etc) and data table attributes. https://github.com/DataONEorg/sem-prov-design/issues/18
Persist the system metadata object in memory before deleting it from hazelcast.
Add the code to handle the deletion of the resource map.
Add the code to handle removing the resource map index.
Add the code to handle removing a resource map solr index.
Add a field to contain the content of the resource map.
Add a util class to determine if a namespace is a resource map file.
Create a valid URI by using all lowercase letters when creating a name for the triple model in the Rdf Xml Subprocessor. See bug: https://projects.ecoinformatics.org/ecoinfo/issues/6595
Make the delete method work.
Make the deleteSystemmetadata method really roll-backable.
Add the code to delete systemmetadata.
remove semantic annotation proposal - moved to github: https://github.com/DataONEorg/sem-prov-design/blob/master/docs/use-cases/semantics/semantic-annotation.md
In InsertORETest: Set the format ID of the metadata object to an EML formatId so that it gets indexed correctly.
Change the d1_cn_index_processor version from 1.3.0 snapshot to 2.0.0 snapshot.
Add the code to delete the records in the xml_accesssubtree table.
use configured auth.base rather than hard-coded dc=ecoinformatics,dc=org. https://projects.ecoinformatics.org/ecoinfo/issues/6592
Add code to check if the pathquery engine is enabled in the checkIndexPaths method.
Adjust the number of schema fields since new ones were added.
When indexing annotations from RDFs, use the doc id to access the system metadata, not the model name since they are not always the same.
Add PROV relationships to the Solr schema. Populate the fields using the RdfXmlSubprocessor
Add the wasDerivedFrom field to the Solr schema and use a SPARQL query to retrieve the value from the RDF
Replace the /u00A0 character encoding with space character instead since /u00A0 displays literally in browsers
Add a test class that inserts an ORE with PROV relationships
use mock CN for testing metacat implementations
remove unused tests
comment out myproxy servlets. https://redmine.dataone.org/issues/5742
convert v2 SM to v1 SM for the v1 service call response
Separate the target and source version for the java compilation.
update to use v2 types for indexing
Add the target and source attributes to the harvester, client and compile-lsid targets, in addition to the compile target.
Add the attribute target="1.6" when compiling the metacat code, so the metacat.jar file can run in Java 1.6 even though it was compiled with Java 1.7.
For the existing uidNumbers, we decrease the size of the vector used for sorting.
Login automatically via curl rather than manually entering the cookie info for the registry test script
Add the code to check if the existing highest uidNumber really exists.
In the getNextUidNumber method, a mechanism to look up the highest existing uidNumber was added.
Fix XML validation errors in the metacatui confirmData template for the registry. Add a test script that submits multiple datasets to the registry.
Create a lock file for the registry if one doesn't exist
Allow the registry form to specify a docid scope
Only lock the local docid file when creating a new docid, not when inserting, for faster upload times. Remove extra debug messages from testing.
Fix bug in the online registry where data files were not using the new docid creation process
Lock a local file while docids are being created so multiple docs can be uploaded at once
remove CN.systemMetadataChanged in favor of the CN.updateSystemMetadata method. Otherwise there's no good way to know where to fetch the auth copy from since the SM change might be to switch the authMN!
add support for v2 DataONE API.
remove old EML jar -- datamanager.jar has the EMLParser now and is pulled in with maven.
use css changes from EML project to render a PDF that fits on a printed page during export. Note that this also changes the default skin slightly (for the better, we think). https://projects.ecoinformatics.org/ecoinfo/issues/6053
remove configxml.jar as the ConfigXML class is now included in the utilities library.
handle login/logout when testing using metacat client (recent refactoring to use more standard http client code)