allow per-document reindexing to be initiated by any user (to support third-party annotations)
Write the input stream to the file system without alteration in the DataONE create and update methods.
look up annotations when reindexing a given pid. still very much a prototype in that we are looking up annotations from an external annotator-store. TODO: add pid filtering to query when annotateit.org supports it (pending upgrade on their site).
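The lookup is roughly as follows (the host is annotateit.org's hosted annotator-store; the query fields and object URI are illustrative):

    # Illustrative only: search the external annotator-store for annotations
    # on a given object URI (field names per the stock annotator-store search API)
    OBJECT_URI="https://example.org/object/some-pid"   # hypothetical
    curl -s "http://annotateit.org/api/search?uri=${OBJECT_URI}&limit=100"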
add the code to install xmlstarlet.
Find files whose type is symbolic link.
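For example:

    # List symbolic links under a directory (path is illustrative)
    find /var/lib/tomcat6/webapps -type l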
Keep the /var/lib/postgresql/8.4/main directory.
Add the code to stop tomcat and apache.
Add the "-y" option to force removal of the old PostgreSQL without prompting a yes/no question.
Restore the database from a gzipped SQL file.
Added the code to gzip the output file in the dumpall command. This will save disk space.
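Roughly, the dump/restore pair looks like this (paths are illustrative):

    # Dump the whole cluster gzipped to save disk space...
    sudo -u postgres pg_dumpall | gzip > /var/metacat/backup/postgres-backup.sql.gz
    # ...and restore by feeding the uncompressed SQL back into psql
    gunzip -c /var/metacat/backup/postgres-backup.sql.gz | sudo -u postgres psql postgres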
Fixed the redirect rules that didn't work.
Add the code to add the attribute useHttpOnly='false' to the Context element in the context.xml file.
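Since the scripts install xmlstarlet, the edit can be made in place roughly like this (a sketch; assumes the attribute is not already present):

    # Add useHttpOnly='false' to the root Context element, editing the file in place
    xmlstarlet ed -L --insert "/Context" --type attr -n useHttpOnly -v "false" \
        /etc/tomcat7/context.xml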
Fixed a bug so the code can get the argument from the command line.
Added the code to change the permissions of the backup file so the postgres user can read it.
Added the code to check if the Metacat backup file exists. Added "exit 0" at the end of the scripts.
Added the .conf extension to those site files, since the new Apache server only loads sites with that extension.
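For example (site name illustrative):

    # Apache 2.4 only loads sites-available/*.conf, so rename and re-enable
    sudo mv /etc/apache2/sites-available/metacat /etc/apache2/sites-available/metacat.conf
    sudo a2ensite metacat.conf
    sudo service apache2 reload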
Add the code to vacuum the db.
Add the code to enable port 8009 for AJP connections in the server.xml of tomcat7.
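A sketch of the edit, assuming the stock server.xml ships the AJP connector commented out on a single line (verify the exact form before scripting):

    # Uncomment the stock AJP connector, then confirm it is active
    sudo sed -i 's|<!-- <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> -->|<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />|' /etc/tomcat7/server.xml
    grep -n '<Connector port="8009"' /etc/tomcat7/server.xml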
Added the code to modify the geo data directory in the metacat.properties file.
Fixed some bugs in restoring PostgreSQL data.
Add the code to install some components needed for installing tomcat7.
Use a variable to replace the hard-coded value.
Use the version directly.
Fixed a typo.
Write the date to an output file.
A script for vacuuming the db.
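The heart of such a script can be a single command (a sketch):

    # Vacuum and analyze every database in the cluster
    sudo -u postgres vacuumdb --all --analyze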
Add a script to move the db from 8.3 to 9.1 based on a dumped file.
Use the pg_dumpall command to back up the entire database cluster.
Add the code to back up the certificate/key of the Apache server.
Removed the code to write the backup file to a DVD drive.
Added the code to copy Metacat and other web applications from the webapps directory in tomcat6 to tomcat7. It also updates the value of deploy.applicationDir in the metacat.properties file.
Only insert new configuration lines once into /etc/tomcat7/catalina.properties.
Used variables to replace hard-coded values in the sed command. Note that the variables have to be escaped.
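For instance (values are hypothetical):

    # Escape the slashes in each variable before handing it to sed, otherwise
    # they collide with the s/// delimiter
    OLD_PATH="/usr/share/tomcat6"
    NEW_PATH="/usr/share/tomcat7"
    OLD_ESCAPED=$(echo "$OLD_PATH" | sed 's/\//\\\//g')
    NEW_ESCAPED=$(echo "$NEW_PATH" | sed 's/\//\\\//g')
    sed -i "s/${OLD_ESCAPED}/${NEW_ESCAPED}/g" /var/metacat/config/metacat.properties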
Used variables to replace the hard-coded values.
Add the code to modify the workers.properties file.
Fixed typo.
use the http://tools.ietf.org/rfc/rfc3023 spec for the conformsTo property. use the full XPath for EML dataTable and attribute selectors
Add a script to install OpenJDK 7 and Tomcat 7. It also configures java, javac, keytool, and tomcat7.
Added the code to inform users, via the NotFound exception, that the pid was deleted.
Added the code to check if a not-found object was deleted in the isAuthorized method.
Move the code that gets the object ahead of the code that gets the system metadata.
Add a utility method for determining whether there is a delete event for a given id.
Removed the method which had the byte array attribute.
Remove the system metadata for data objects.
change the way the Solr index of a resource map is deleted.
Back up the /etc/apache2/sites-enabled directory.
Remove the code to stop/start the LDAP server. Change the script name to stop/start Tomcat. Also back up metacat.properties.
Replace the operator "=~" with "eq" when comparing the two password fields.
Add a new routine to check if the uid has already been taken in the production space during the creation process.
add /token endpoint for annotatorJS/annotateIt.org integration. https://github.com/DataONEorg/sem-prov-design/issues/18
Persist the system metadata object in memory before deleting it from Hazelcast.
Add the code to handle the deletion of the resource map.
Make the delete method work.
Make the deleteSystemmetadata method truly able to roll back.
Add the code to delete systemmetadata.
Add the code to delete the records in the xml_accesssubtree table.
use configured auth.base rather than hard-coded dc=ecoinformatics,dc=org. https://projects.ecoinformatics.org/ecoinfo/issues/6592
Add code to check if the pathquery engine is enabled in the checkIndexPaths method.
convert v2 SM to v1 SM for the v1 service call response
update to use v2 types for indexing
For the existing uidNumber lookup, we decrease the size of the vector used for sorting.
Log in automatically via curl rather than manually entering the cookie info for the registry test script.
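The idea, roughly (URL and form fields are illustrative only):

    # Log in once, saving the session cookie, then reuse it for later requests
    curl -s -c /tmp/registry-cookies.txt \
        -d "username=testuser" -d "password=secret" \
        "https://example.org/cgi-bin/register-dataset.cgi"
    curl -s -b /tmp/registry-cookies.txt "https://example.org/cgi-bin/register-dataset.cgi"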
Add the code to check if the existing highest uidNumber really exists.
In the getNextUidNumber method, a mechanism to look up the highest existing uidNumber was added.
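Conceptually, the existence check amounts to an LDAP search like this (base DN and number are illustrative):

    # Confirm the candidate highest uidNumber is actually present in LDAP
    ldapsearch -x -b "dc=example,dc=org" "(uidNumber=12345)" uid uidNumber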
Create a lock file for the registry if one doesn't exist
Allow the registry form to specify a docid scope
Only lock the local docid file when creating a new docid, not when inserting, for faster upload times. Remove extra debug messages from testing.
Fix bug in the online registry where data files were not using the new docid creation process
Lock a local file while docids are being created so multiple docs can be uploaded at once
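Conceptually the lock works like the flock(1) idiom below; the registry itself takes the lock in Perl, and only around docid generation, not the whole insert:

    # Hold an exclusive lock only while generating the next docid
    (
        flock -x 200
        # ... generate the next docid here ...
    ) 200>/var/metacat/.docid.lock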
remove CN.systemMetadataChanged in favor of the CN.updateSystemMetadata method. Otherwise there's no good way to know where to fetch the auth copy from since the SM change might be to switch the authMN!
add support for v2 DataONE API.
remove dependency on HttpMessage that was in the utilities project but is now removed in favor of newer (standard) http client library code.
Include PDF version of the metadata in the package download. https://projects.ecoinformatics.org/ecoinfo/issues/6053
take advantage of the ezidclient for multi-threaded/asynchronous DOI registration. This will be most useful for doing large batch updates and not so much for the one-at-a-time publish actions but works in either context. https://projects.ecoinformatics.org/ecoinfo/issues/6440
use a member instance of ezid service that only logs in every 24 hours (or other time TBD) instead of every time there is an interaction with the service. Saves us many calls when doing batch updates to ezid but keeps us from trying to use expired sessions. Motivated by https://projects.ecoinformatics.org/ecoinfo/issues/6440
prevent js scriptlets from running when we return error messages to the client by escaping any potentially harmful xml blocks. https://projects.ecoinformatics.org/ecoinfo/issues/6224
allow updates to all doi: prefixes - realized we are already restricting to specific replica servers when updating these. worst case is we try to update a registration for which we are not the owner. https://projects.ecoinformatics.org/ecoinfo/issues/6440
show the SM and ORE generation buttons even if they have not registered/configured dataone. many potential MNs want to see their generated SM before registering (and we want them to too!).
restrict DOI updates to DOIs that match our server shoulder -- may consider opening this up to any "doi:" prefix if this is too restrictive. https://projects.ecoinformatics.org/ecoinfo/issues/6440
use separate surName and givenNames to look up ORCIDs.
allow full-text queries for ORCID, but it isn't that great because we might have a "PISCO" creator that shows up in many different orcid profiles... false matches.
use HttpClient to query orcid so I can easily set headers and such -- getting 503s from their production server when I test on dev.nceas...odd
adjust tests for production service -- more "real" information shows additional return values from the query.
switch to the production ORCID server for looking up orcid matches for our creators. Add a test to summarize how many creator matches we can actually find. https://projects.ecoinformatics.org/ecoinfo/issues/6423
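The lookup boils down to a query like this against the then-current ORCID public search API (endpoint, headers, and Lucene field syntax are assumptions and may have changed since):

    # Illustrative only: search public ORCID bios by family and given names
    curl -s -H "Accept: application/orcid+json" \
        "https://pub.orcid.org/v1.2/search/orcid-bio/?q=family-name:Jones+AND+given-names:Matthew"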
change the hazelcast group name to be the default "metacat" instance so that the metacat-index tests pass without additional local configuration, at least when running a default metacat deployment.
do not set archived=false for all CN.create calls. The CN will use create() even when harvesting content that is new to it, and it needs to handle already-archived content. https://projects.ecoinformatics.org/ecoinfo/issues/6475
cache the imported models to avoid timeouts from remote hosts (or being locked out for too many requests in a given time period).
process all the returned annotation suggestions until we find one that is appropriately located in the subclass hierarchy for the given superclass.
use in-memory TDB dataset for querying annotations for indexing -- this comes with the same reasoning capabilities as the directory-based one, but has the benefit of not filling the directory with triples that will not be used again. prepping for d1 AHM
when indexing annotations directly, just use an in-memory triple store rather than TDB since we remove each graph as it is processed (and my TDB instance would get into the multi-GB range with a few runs, even if I removed the old models)
redirect "short form" metacat read URIs to the new Metacat UI using the configured UI context. This translates the docid -> pid to use the correct identifier for the correct service. https://projects.ecoinformatics.org/ecoinfo/issues/6546
simplify lookup for classes and orcid. remove the "random" annotation code branches -- just too confusing to look at those bogus classes especially now that we have "real" generated annotations.
Add admin service to update DOI registrations by specifying a list of formatIds or DOIs, or update all.
use new method to override the CN URL when constructing a CNode instance. see https://redmine.dataone.org/issues/5142
first pass at direct EML->semantic index method. Still produces an RDF model, but does not persist it in Metacat, only in the triplestore. Allows us to re-run without adding stale RDF to the MN store.
Store the CN URL in the backup.
switch to use FileUpload instead of the O'Reilly COS library for handling chunked file uploads. https://projects.ecoinformatics.org/ecoinfo/issues/6517
forgot to check in the actual class: first pass at allowing admins to update DOI registration. This only acts on EML objects at the moment and is meant to illustrate one mechanism for updating the DOIs. https://projects.ecoinformatics.org/ecoinfo/issues/6530
first pass at allowing admins to update DOI registration. This only acts on EML objects at the moment and is meant to illustrate one mechanism for updating the DOIs. https://projects.ecoinformatics.org/ecoinfo/issues/6530
correct the ORE lookup query syntax and add junit assertion to check that it continues to function as expected. https://projects.ecoinformatics.org/ecoinfo/issues/6529