cache the imported models to avoid timeouts from remote hosts (or being locked out for too many requests in a given time period).
process all the returned annotation suggestions until we find one that is appropriately located in the subclass hierarchy for the given superclass.
use an in-memory dataset for querying annotations during indexing -- it has the same reasoning capabilities as the directory-backed TDB dataset, but has the benefit of not filling the directory with triples that will never be used again. prepping for the DataONE AHM
when indexing annotations directly, just use an in-memory triple store rather than TDB since we remove each graph as it is processed (and my TDB instance would get into the multi-GB range with a few runs, even if I removed the old models)
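The gist of the change, as a minimal sketch using current Apache Jena package names (the actual Metacat code does considerably more around the query step; the graph URI and inline triple are placeholders):

```java
import java.io.StringReader;

import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class InMemoryAnnotationIndexingSketch {
    public static void main(String[] args) {
        // In-memory dataset: same query behavior as a TDB-backed one, but
        // nothing is written to disk, so repeated indexing runs cannot pile
        // up a multi-GB TDB directory.
        Dataset dataset = DatasetFactory.create();
        // was (roughly): Dataset dataset = TDBFactory.createDataset("/path/to/tdb");

        // Load one annotation graph (a tiny inline example here), query it
        // for the index fields, then drop it again.
        Model annotation = ModelFactory.createDefaultModel();
        String turtle = "<https://example.org/annotation/1> "
                + "<https://example.org/about> <https://example.org/pid/1> .";
        annotation.read(new StringReader(turtle), null, "TURTLE");

        String graphName = "https://example.org/annotation/1";
        dataset.addNamedModel(graphName, annotation);
        // ... run SPARQL queries against the dataset to populate index fields ...
        dataset.removeNamedModel(graphName);
    }
}
```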
redirect "short form" metacat read URIs to the new Metacat UI using the configured UI context. This translates the docid -> pid so that the correct identifier is used for the new service. https://projects.ecoinformatics.org/ecoinfo/issues/6546
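Roughly what the redirect does, sketched with placeholder names (lookupPidForDocid, the "/view/" path, and the parameter handling are illustrative, not the exact Metacat code or MetacatUI route):

```java
import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ShortFormRedirectSketch {

    // Placeholder for the docid -> pid translation; in Metacat this is a
    // lookup against the identifier mapping for the given docid and revision.
    static String lookupPidForDocid(String docid) {
        return docid; // assume pid == docid when no mapping exists
    }

    static void redirectToUi(HttpServletRequest request, HttpServletResponse response,
                             String uiContext) throws IOException {
        String docid = request.getParameter("docid");          // e.g. "knb.123.2"
        String pid = lookupPidForDocid(docid);                  // docid -> pid
        response.sendRedirect(uiContext + "/view/" + URLEncoder.encode(pid, "UTF-8"));
    }
}
```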
simplify lookup for classes and ORCID. remove the "random" annotation code branches -- the bogus classes are just too confusing to look at, especially now that we have "real" generated annotations.
Add admin service to update DOI registrations by specifying a list of formatIds or DOIs, or update all.
use new method to override the CN URL when constructing a CNode instance. see https://redmine.dataone.org/issues/5142
first pass at direct EML->semantic index method. Still produces an RDF model, but does not persist it in Metacat, only in the triplestore. Allows us to re-run without adding stale RDF to the MN store.
Store the cn url in the backup.
switch to Commons FileUpload instead of the O'Reilly COS library for handling chunked file uploads. https://projects.ecoinformatics.org/ecoinfo/issues/6517
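For reference, the basic Commons FileUpload pattern looks like the sketch below; the real handler also wires in the configured temp directory, size limits, and the rest of the upload workflow:

```java
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class FileUploadSketch {
    static void handleMultipart(HttpServletRequest request) throws Exception {
        DiskFileItemFactory factory = new DiskFileItemFactory();
        factory.setSizeThreshold(4 * 1024 * 1024); // keep small parts in memory
        ServletFileUpload upload = new ServletFileUpload(factory);

        List<FileItem> items = upload.parseRequest(request);
        for (FileItem item : items) {
            if (item.isFormField()) {
                String name = item.getFieldName();
                String value = item.getString("UTF-8"); // treat like a normal parameter
            } else {
                // stream item.getInputStream() into the data directory
            }
        }
    }
}
```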
forgot to check in the actual class: first pass at allowing admins to update DOI registration. This only acts on EML objects at the moment and is meant to illustrate one mechanism for updating the DOIs. https://projects.ecoinformatics.org/ecoinfo/issues/6530
first pass at allowing admins to update DOI registration. This only acts on EML objects at the moment and is meant to illustrate one mechanism for updating the DOIs. https://projects.ecoinformatics.org/ecoinfo/issues/6530
correct the ORE lookup query syntax and add junit assertion to check that it continues to function as expected. https://projects.ecoinformatics.org/ecoinfo/issues/6529
index the ORE after we submit the metadata for indexing. https://projects.ecoinformatics.org/ecoinfo/issues/6520
include BioPortal lookup for Entity matches using the data table description. TODO: only associate measurements to the entity observation if they apply.
Index the document after it has been inserted.
Index the document after the document is written to the db.
check for null entities and/or attributes (typically when otherEntity is being used in EML).
remove extra space in log message
attribute the datapackage to the creator (using the ORCID if we can find it). https://projects.ecoinformatics.org/ecoinfo/issues/6267 https://projects.ecoinformatics.org/ecoinfo/issues/6423
add test for BioPortal annotator service.
refactor web service calls to BioPortal and ORCID out of the annotator class. test with the ORCID sandbox server. include the ORCID URI for the annotations being generated (we can index these and drive our searches on these values down the road). related to this: https://projects.ecoinformatics.org/ecoinfo/issues/6423 and also some semtools tasks.
remove leading '?' in the query parameter for MN.query() implementation. We want it to match CN behavior/expectations and comply with the DataONE specification for the interface. https://projects.ecoinformatics.org/ecoinfo/issues/6488
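In other words (a tiny illustrative helper, not the actual handler method):

```java
public class QueryParamNormalizer {
    // The MN REST handler was passing the query through with its leading '?',
    // which the CN and the DataONE query interface do not expect; strip it off.
    static String normalize(String query) {
        if (query != null && query.startsWith("?")) {
            return query.substring(1);
        }
        return query;
    }
}
```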
Use OBOE-SBC ontology for looking up concepts (it contains subclasses of our OBOE Characteristic and Standard superclasses). Restrict annotations to only subclasses that fit the OBOE model. Correct the xpointer and individual naming conventions so they are unique, but express the exact entity/attribute being annotated.
remove my api key. oops
add comment/pointer to BioPortal annotation service.
Include method to look up annotation classes from BioPortal. We still have OBOE-SBC in there, and they have the SWEET ontology. The suggestions returned are not perfect, but they can be better than nothing. Ideally, we'd only query a few ontologies so we don't end up using terms from medical ontologies that aren't really appropriate for our domain. https://projects.ecoinformatics.org/ecoinfo/issues/6256
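The lookup boils down to a call against BioPortal's annotator REST endpoint, roughly as below (the API key, the ontology filter, and the plain HttpURLConnection client are illustrative; the real code parses the returned JSON for class URIs):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class BioPortalAnnotatorSketch {
    static String annotate(String text, String apiKey) throws Exception {
        String url = "http://data.bioontology.org/annotator"
                + "?text=" + URLEncoder.encode(text, "UTF-8")
                + "&ontologies=" + URLEncoder.encode("OBOE-SBC,SWEET", "UTF-8")
                + "&apikey=" + apiKey;

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty("Accept", "application/json");

        // Read the response body; it is a JSON array of suggested classes.
        StringBuilder json = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                json.append(line);
            }
        }
        return json.toString();
    }
}
```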
Add xpointer FragmentSelectors to each annotation. Split attribute label into tokens to attempt matching to OBOE concepts.
include code to generate random annotations for UI testing. Effective, but can be confusing to see so many unrelated concepts on duplicate EML packages.
include SSLVerify* directives for client certificates and a pointer for getting the DataONE chain files.
Remove the code to look up the alias DN in the getGroups method.
Rather than modifying the env directly, we use context.addToEnv. This fixes a bug in non-TLS environments where the alias log-in doesn't work.
first pass at generating annotations from EML attribute information. uses the OpenAnnotation model that the metacat-index tests assume which allows us to populate dynamic index fields for the annotation class[es]. There is still much to be done with finding appropriate concepts for each attribute. https://projects.ecoinformatics.org/ecoinfo/issues/6256
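A rough shape of the generated graph, sketched with the public OpenAnnotation vocabulary and current Jena package names (the URIs and the exact set of predicates are assumptions here; the metacat-index field definitions dictate the real shape):

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class EmlAttributeAnnotationSketch {
    static final String OA = "http://www.w3.org/ns/oa#";

    // Tie one EML attribute (located by an xpointer into the metadata
    // document) to a concept URI using an oa:Annotation.
    static Model annotateAttribute(String metadataUri, String xpointer, String conceptUri) {
        Model m = ModelFactory.createDefaultModel();
        Property hasTarget = m.createProperty(OA, "hasTarget");
        Property hasBody = m.createProperty(OA, "hasBody");
        Property hasSource = m.createProperty(OA, "hasSource");
        Property hasSelector = m.createProperty(OA, "hasSelector");

        Resource selector = m.createResource()
                .addProperty(RDF.type, m.createResource(OA + "FragmentSelector"))
                .addProperty(RDF.value, xpointer);

        Resource target = m.createResource()
                .addProperty(RDF.type, m.createResource(OA + "SpecificResource"))
                .addProperty(hasSource, m.createResource(metadataUri))
                .addProperty(hasSelector, selector);

        m.createResource(metadataUri + "#annotation.1")
                .addProperty(RDF.type, m.createResource(OA + "Annotation"))
                .addProperty(hasTarget, target)
                .addProperty(hasBody, m.createResource(conceptUri));
        return m;
    }
}
```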
Edited the replicaPolicies script to print out a list of IDs that have a different authoritative member node, the number of successes, and failures at the end.
Add comments to bash script to explain its function and dependencies
Added a bash script to call /replicaPolicies/{pid} via the DataONE API for all objects in an MN or for a list of ids.
support content from all serverLocations when summarizing entity info (semtools)
allow "+" in solr query syntax. https://projects.ecoinformatics.org/ecoinfo/issues/6435
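The usual pitfall, shown from the client side: a literal '+' in an un-encoded query string decodes to a space, so mandatory-clause Solr queries have to be percent-encoded, and the server should then pass the decoded value through to Solr rather than dropping the '+'. The endpoint URL below is just an example:

```java
import java.net.URLEncoder;

public class SolrPlusEncodingExample {
    public static void main(String[] args) throws Exception {
        String q = "+origin:Jones +title:fish";   // '+' marks mandatory clauses in Solr
        String url = "https://example.org/metacat/d1/mn/v1/query/solr/?q="
                + URLEncoder.encode(q, "UTF-8");
        System.out.println(url);
        // .../query/solr/?q=%2Borigin%3AJones+%2Btitle%3Afish
    }
}
```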
include read events when re-indexing obsoleted objects. https://projects.ecoinformatics.org/ecoinfo/issues/6424
Set the userManagementURL property.
update to use 2.4.1 so the trunk has all artifacts for upgrades.
simple upgrade scripts for version 2.4.1
In the authenticate method, if metacat can't get the user info, the login can still be successful.
change a log message.
In the getAliasedName method, the referral handling is set to "ignore". Since the alias name is the local referral, we need to set it to ignore.
recursively submit obsoleted objects for indexing when instructed. https://projects.ecoinformatics.org/ecoinfo/issues/6424
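A sketch of the recursion over the revision chain; getSystemMetadata() and submitForIndexing() stand in for the corresponding Metacat calls:

```java
import org.dataone.service.types.v1.Identifier;
import org.dataone.service.types.v1.SystemMetadata;

public class ObsoletedChainIndexerSketch {
    // Index the given object and, when asked, walk backwards through
    // SystemMetadata.obsoletes so every obsoleted revision is re-indexed too.
    static void index(Identifier pid, boolean followObsoleted) {
        submitForIndexing(pid);
        if (!followObsoleted) {
            return;
        }
        SystemMetadata sysMeta = getSystemMetadata(pid);
        Identifier obsoleted = (sysMeta == null) ? null : sysMeta.getObsoletes();
        if (obsoleted != null) {
            index(obsoleted, true);
        }
    }

    // Placeholders for the real lookups/queueing in Metacat.
    static void submitForIndexing(Identifier pid) { }
    static SystemMetadata getSystemMetadata(Identifier pid) { return null; }
}
```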
First pass at a class for summarizing attribute information for analysis. (semtools) https://projects.ecoinformatics.org/ecoinfo/issues/6256
merge recent upgrade changes from 2.4 branch
look up guid when done setting access by docid so we can sync and refresh accesspolicy on MN and CN.
additional logging for set access
get guid from online id for call to SyncAccessPolicy
setAccessAction: get guid from passed in id for calls to SyncAccessPolicy, HazelcastService.refreshSystemMetadataEntry
example of how we can look up pid (guid) given a metacat docid.
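Roughly, the lookup splits the full docid into its base and revision and asks the identifier mapping for the guid; the IdentifierManager call reflects the Metacat API of the time, but treat the exact method name as an assumption:

```java
import edu.ucsb.nceas.metacat.IdentifierManager;

public class PidLookupSketch {
    static String pidForDocid(String fullDocid) throws Exception {
        // split "knb.123.2" into the base docid "knb.123" and revision 2
        int lastDot = fullDocid.lastIndexOf('.');
        String docid = fullDocid.substring(0, lastDot);
        int rev = Integer.parseInt(fullDocid.substring(lastDot + 1));
        return IdentifierManager.getInstance().getGUID(docid, rev);
    }
}
```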
remove sensorML from the catalog since we don't actually ship it (yet?)
Add in Darwin Core schema support into xml_catalog, and insert it on upgrade as well. The schemas are cached in lib/schema/dwc, and Matt and Ben noted that the tdwg_basetypes.xsd and tdwg_dwctypes.xsd are part of the same namespace, but are xs:include'd rather than imported via namespace.
include a few tests for isEqual method. https://projects.ecoinformatics.org/ecoinfo/issues/6407
Change isEqual visibility so it can be used by the test suite
Add DataONE, Dublin Core, and Dryad schemas during the 2.4.0 upgrade, and be sure to remove the appropriate entries before inserting to avoid duplicate rows.
Add schema support for the DataONE, Dublin Core, and Dryad schemas. Schemas get downloaded into lib/schema prior to the jar and dist targets, and get loaded into xml_catalog on installation.
move the postgres changes to the oracle version -- update note about not attempting to restore because no Oracle MNs exist.
do not include "sm" alias in the SET clause.
allow statements starting with 'WITH'
comment out the select statements so they do not run during real upgrade.
loosen the restriction on which archive flags we set to false -- if we have an obsoleted_by value then it need not be marked as archived.
add [partial] upgrade to the oracle script -- does not look for any records that the CN deleted because there are no Oracle-backed MNs at this time.
add comment (and commented out code) for possibly inspecting the /dirtySysMeta call for archive=true flag. https://projects.ecoinformatics.org/ecoinfo/issues/6417
only index event information for known events. https://projects.ecoinformatics.org/ecoinfo/issues/6346
call getDescription on cn.setaccesspolicy service failure
make all objects in a package publicly readable when published. https://projects.ecoinformatics.org/ecoinfo/issues/6415
Add code to check whether the docid contains whitespace in the handleInsertOrUpdate, handleUpload and handleInsertMultipartInsertAction methods.
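The check amounts to something like this (illustrative helper, not the actual handler code):

```java
public class DocidWhitespaceCheck {
    // Reject a docid that contains any whitespace before insert, update, or upload.
    static void check(String docid) {
        if (docid != null && docid.matches(".*\\s.*")) {
            throw new IllegalArgumentException(
                "The docid \"" + docid + "\" contains whitespace and cannot be accepted");
        }
    }
}
```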
make all package contents publicly readable when publishing with a DOI. https://projects.ecoinformatics.org/ecoinfo/issues/6415
Run syncAll in a single thread so admin config UI doesn't freeze
Change CNodeService.archive() to no longer broadcast MN.archive() calls to all of the replica MNs of a pid, but rather broadcast MN.systemMetadataChanged().
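A sketch of the new broadcast, assuming the v1 DataONE libclient classes of that era; the loop body illustrates the call being made, not the exact CNodeService code:

```java
import org.dataone.client.D1Client;
import org.dataone.client.MNode;
import org.dataone.service.types.v1.Identifier;
import org.dataone.service.types.v1.NodeReference;
import org.dataone.service.types.v1.Replica;
import org.dataone.service.types.v1.Session;
import org.dataone.service.types.v1.SystemMetadata;

public class ArchiveBroadcastSketch {
    // Rather than calling MN.archive() on every replica, notify each replica
    // holder that the system metadata changed so it can refresh its copy.
    static void notifyReplicas(Session session, SystemMetadata sysMeta) throws Exception {
        Identifier pid = sysMeta.getIdentifier();
        if (sysMeta.getReplicaList() == null) {
            return;
        }
        for (Replica replica : sysMeta.getReplicaList()) {
            NodeReference nodeId = replica.getReplicaMemberNode();
            MNode mn = D1Client.getMN(nodeId);
            mn.systemMetadataChanged(session, pid,
                    sysMeta.getSerialVersion().longValue(),
                    sysMeta.getDateSysMetadataModified());
        }
    }
}
```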
allow utf-8 user first/last names to be used in responses for: login, logout, validatesession, getprincipals.
A couple of modifications: use "pid" throughout so as not to confuse docids and pids; ensure any failures in the set do not prevent syncing for the other pids in the set.
adjust whitespace for consistency
sync pids of <distribution><online> data objects with CN when their access rules change in EML 2.0.* <additionalMetadata>
restrict the archived=false update to revisions that still have current entries in the xml_documents table.
tested the restore insertions - adjusted for FK constraints. I was able to delete a document locally, then restore it, then update the document with a new revision as expected.
use 'with' query to find the most recent revision of an object that was archived. still want more feedback on the criteria.
Modified the usage message.
continue to work on the criteria for selecting documents to restore. Expanded the criteria for setting archived=false to include any revision that was already obsoleted_by something else.
correct syntax - add more criteria for selecting documents.
do not set sm.archived=true when generating system metadata for objects that come in via the old Metacat API.
draft of fix for erroneously archived documents - first discovered by LTER - but also applicable to other Metacat MNs that still use the Metacat API as of Jan 2014 CN changes.
Modify the usage message.
Use a DN name for the group in the usage message printout.
Fixed bug where syncing was not working when the CN had more access rules than the MN
Sync the access policy from MN to CN when access rules are updated in EML 2.1+ for a data object
Remove code to add the organization in the search filter. This is not necessary since we use the dn as the search base. The code was not actually used but caused some problems.
use v2.4.0 for documentation and upgrade scripts.
can only log events with a valid localId.
Add a note to let users know where to run the script.
Add a note to let users know they should surround the hashed password with single quotes.
edit some of the user management phrases. use UTF-8 for all returned XML. https://projects.ecoinformatics.org/ecoinfo/issues/6320
Use an array of hashes to keep the orgName/orgLabel pairs.
Fixed a bug where the AuthFile constructor didn't read the new value of the password file path from metacat.properties.
Add the code to handle a runtime exception.