Add a test class that inserts an ORE with PROV relationships
use mock CN for testing metacat implementations
remove unused tests
add support for v2 DataONE API.
take advantage of the ezidclient for multi-threaded/asynchronous DOI registration. This will be most useful for large batch updates rather than the one-at-a-time publish actions, but it works in either context. https://projects.ecoinformatics.org/ecoinfo/issues/6440
use separate surName and givenNames to lookup ORCIDs.
Create derived data and metadata objects in the OrePackageTest
allow full-text queries for ORCID, but it isn't that great because we might have a "PISCO" creator that shows up in many different ORCID profiles...false matches.
correct glaring errors -- still needs to be honed, but at least it runs without NPE and Jena/foresite errors
stub for testing ORE augmentation - this generates an ORE, adds a "wasDerivedFrom" triple and saves to Metacat MN for indexing. https://projects.ecoinformatics.org/ecoinfo/issues/6548
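For context, a minimal Jena sketch of the "wasDerivedFrom" step (class name, resolve URIs, and identifiers here are illustrative, not taken from the actual test):

    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.Property;
    import com.hp.hpl.jena.rdf.model.Resource;

    public class ProvTripleSketch {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();
            // PROV namespace for the derivation relationship
            Property wasDerivedFrom = model.createProperty("http://www.w3.org/ns/prov#", "wasDerivedFrom");
            // hypothetical resolve URIs for the derived and source objects
            Resource derived = model.createResource("https://cn.dataone.org/cn/v1/resolve/derived.1.1");
            Resource source = model.createResource("https://cn.dataone.org/cn/v1/resolve/source.1.1");
            model.add(derived, wasDerivedFrom, source);
            // serialize; in the test this RDF would go into the ORE saved to the MN for indexing
            model.write(System.out, "RDF/XML");
        }
    }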
adjust tests for production service -- more "real" information shows additional return values from the query.
switch to the production ORCID server for looking up ORCID matches for our creators. Add a test to summarize how many creator matches we can actually find. https://projects.ecoinformatics.org/ecoinfo/issues/6423
simplify lookup for classes and orcid. remove the "random" annotation code branches -- just too confusing to look at those bogus classes especially now that we have "real" generated annotations.
add 'test' for indexing annotations without actually storing the RDF of the generated annotation.
first pass at direct EML->semantic index method. Still produces an RDF model, but does not persist it in Metacat, only in the triplestore. Allows us to re-run without adding stale RDF to the MN store.
correct the ORE lookup query syntax and add junit assertion to check that it continues to function as expected. https://projects.ecoinformatics.org/ecoinfo/issues/6529
include BioPortal lookup for Entity matches using the data table description. TODO: only associate measurements to the entity observation if they apply.
add "test" for generating annotations based on the entity/attribute details of a datapackage. This iterates through all current EML revisions and either updates or creates annotations based on what it finds. It does add content to your metacat deployment (RDF files) but it can be safely re-run when each time we change our annotation algorithm.
add test for BioPortal annotator service.
refactor web service calls to bioportal and orcid outside of the annotator class. test with orcid sandbox server. include orcid uri for the annotations being generated (we can index these and drive our searches on these values down the road). related to this: https://projects.ecoinformatics.org/ecoinfo/issues/6423 and also some semtools tasks.
include code to generate random annotations for UI testing. Effective, but can be confusing to see so many unrelated concepts on duplicate EML packages.
first pass at generating annotations from EML attribute information. uses the OpenAnnotation model that the metacat-index tests assume which allows us to populate dynamic index fields for the annotation class[es]. There is still much to be done with finding appropriate concepts for each attribute. https://projects.ecoinformatics.org/ecoinfo/issues/6256
Add a comment to warn the users not to change the password file path to a production one since it will be deleted.
clear test password file so subsequent runs will not fail -- tests assume blank file to begin with.
uncomment other tests in suite.
add a few more checks while debugging test
Use D1client for communication with CN (for integration test)
add test to check sync of access policies of data object referenced in EML 2.0.1 docs
include a few tests for isEqual method. https://projects.ecoinformatics.org/ecoinfo/issues/6407
Use client.MetacatClient instance for all metacat api calls
Test syncing of access policies when updated with legacy metacat api
Change the test file according to the change to the constructor.
Made changes according to the changes in the AuthFile class.
Add the test method for the getUserInfo method.
Modified the junit test file according to the change in the class.
Change the test file according to the change made in the class.
Change the addGroup method since the API was changed.
Add a test method for getPrincipals.
Add test methods for changing passwords.
Add method to test the getusers method.
Add a test method to test authentication.
Add a test method for adding a user.
Add a junit test for the AuthFile
Fixed a compile error: an IOException needed to be caught.
Reviewed code for all uses of FileInputStream, checking to see if the method should be closing the stream, and if so, closing it in the method as well as in the finally clause to ensure we don't leak file descriptors.
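The pattern applied is roughly the following (an illustrative sketch, not an excerpt from Metacat):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class StreamCloseExample {
        public static byte[] readAll(File file) throws IOException {
            FileInputStream fis = null;
            try {
                fis = new FileInputStream(file);
                byte[] buffer = new byte[(int) file.length()];
                int read = 0;
                while (read < buffer.length) {
                    int n = fis.read(buffer, read, buffer.length - read);
                    if (n < 0) break;
                    read += n;
                }
                return buffer;
            } finally {
                // close in finally so a failed read cannot leak the file descriptor
                if (fis != null) {
                    fis.close();
                }
            }
        }
    }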
Refer to metacat.war deployments since those are now the default. https://projects.ecoinformatics.org/ecoinfo/issues/6082
test that d1 node admin is allowed all permissions. https://projects.ecoinformatics.org/ecoinfo/issues/6086
test for configured target url template on metadata using the default #view url. https://projects.ecoinformatics.org/ecoinfo/issues/6092
implement ORE check method to actually query the MN for OREs that reference the given pid. https://projects.ecoinformatics.org/ecoinfo/issues/6061
default to non-https when testing localhost. If the developer has a hostname configured with https, it will correctly use this configured baseUrl, otherwise it will just use http://localhost.
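A hypothetical sketch of the fallback; the property name and context paths are illustrative, not necessarily those used by the Metacat tests:

    public class BaseUrlSketch {
        public static String resolveBaseUrl() {
            String configured = System.getProperty("test.metacat.baseUrl");
            if (configured != null && configured.length() > 0) {
                return configured;                    // may be an https URL for a configured hostname
            }
            return "http://localhost/metacat/d1/mn";  // otherwise plain http against localhost
        }
    }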
MN.getPackage() - test with ORE that includes 2 data files and a "metadata" file (still should be using EML for that test). https://projects.ecoinformatics.org/ecoinfo/issues/6026
First pass at MN.getPackage() implementation using Bagit library from LOC. https://projects.ecoinformatics.org/ecoinfo/issues/6026
add method for publishing existing object (usually assumed to be scimeta) with a DOI. https://projects.ecoinformatics.org/ecoinfo/issues/6014
add simple test for the IndexEventDAO class -- adding, removing, listing events in the DB table. https://projects.ecoinformatics.org/ecoinfo/issues/5944
correct regex for whitespace in D1 identifier.
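For illustration, a whitespace check on an identifier might look like this (the exact regex used in Metacat may differ):

    import java.util.regex.Pattern;

    public class IdentifierWhitespaceCheck {
        // "\\s" covers spaces, tabs, and line breaks
        private static final Pattern WHITESPACE = Pattern.compile("\\s");

        public static boolean containsWhitespace(String pid) {
            return pid != null && WHITESPACE.matcher(pid).find();
        }

        public static void main(String[] args) {
            System.out.println(containsWhitespace("doi:10.5063/F1 234")); // true
            System.out.println(containsWhitespace("doi:10.5063/F1234"));  // false
        }
    }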
Add code to test title for the query result.
Rewrite some methods so the query result can be processed multiple times.
Removed the commented-out test cases and added a test for an archived document.
Set the certificate location to the test one after getting the MN baseUrl. The method that gets the MN baseUrl somehow calls the CN, which sets the certificate location to /var/metacat/certs/METACAT1.pem.
Use the MNode to query the server when we use certificates to set up the session.
Add the code to test the user with a distrusted certificate.
Add tests to test group and rightsholder.
Change the delete to archive.
Add a test for testing access control for the solr query.
use Maven to manage most jar dependencies in Metacat. Exceptions include: LSID, Datamanager (EML),
Look up the title for EML files when registering DOIs. Look up the creator from the DataONE CN (if available). Add an EML-based test. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5513
include the create test in the suite
Correctly mint and register DOIs in the MN API implementation. Add tests to exercise minting and creating. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5513
remove older lucene library and include ORE test to make sure that change does not prevent us from generating OREs. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5874
move DocInfo parsing into utilities project so that it can be used by Morpho as well as Metacat. http://bugzilla.ecoinformatics.org/show_bug.cgi?id=5737
return from test when we encounter the NotImplemented exception for CN.search()
CN.search() is not implemented by metacat -- making that explicit and also testing for it.
In migrating to Hazelcast 2.4.x, replace deprecated methods.
use CDATA for the docname field in docInfo so that the XML parser ignores content that can contain characters like "&".
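In effect (a hypothetical helper, not the actual docInfo builder):

    public class DocInfoSketch {
        // abbreviated docInfo fragment; only the docname element is shown
        public static String docnameElement(String docname) {
            StringBuffer sb = new StringBuffer();
            sb.append("<docname><![CDATA[");
            sb.append(docname);            // a raw '&' or '<' here no longer breaks the parser
            sb.append("]]></docname>");
            return sb.toString();
        }
    }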
Use a final static string to replace the hard-coded value. Search by document title rather than id in the testReplicateEML_AtoB method.
use CN session when testing getLogRecords() and getOperationStatistics() because they are now protecting "sensitive" information
include testSynchronizationFailed() and call as the CN subject so that it is authorized.
use MN (self) as the Session.subject so that the MN.delete() call is successful.
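Roughly, the test builds the Session like this (a sketch; the subject DN is a placeholder, not the real node subject):

    import org.dataone.service.types.v1.Session;
    import org.dataone.service.types.v1.Subject;

    public class NodeSessionSketch {
        public static Session asNodeSubject(String nodeSubjectDn) {
            Subject subject = new Subject();
            subject.setValue(nodeSubjectDn);   // e.g. the MN's own node subject DN
            Session session = new Session();
            session.setSubject(subject);
            return session;
        }
    }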
comment out testDelete because it requires acting as the MN; comment out testSynchronizationFailed because it requires acting as the CN
uncomment the MN tests (I bet this was an oversight during local testing)
change ordering of getLogRecords() parameter -- pidFilter is in the middle now
upgrade to latest RC in libclient and common jars -- includes updated getLogRecords and new mn.generateIdentifier method
Add testIsEquivIdentityAuthorized() to ensure that [MN|CN].isAuthorized() is authorizing equivalent identities correctly. Note: Using TypeMarshaller.marshalTypeToOutputStream(type, System.out) to serialize an object seems to jack up output to stdout - not sure why.
refactor D1-specific upgrade utilities into their own package
test harness for running system metadata generation outside of the upgrade process
include comment about KNB estimated time to run during upgrade: Total time: 20 minutes 58 seconds
use "test" to exercise upgrade code on staging DB.
Update the MNodeServiceTest to test the validity of the node document returned by getCapabilities() by parsing it with the TypeMarshaller.
use RC-1 Dataone jars
remove method: assertRelation. https://redmine.dataone.org/issues/2158
Use a Date with resolution to milliseconds.
include the EML and data tests in the suite
debugging data locking test
cannot check for deleted data since it is forever available (archived)
do not include the "v1" in the base url for the target MN
update tests to comply with these changes: new jars with many changes -- including new CN methods: ping, describe, listChecksumAlgorithm. Removed MN.setAccessPolicy. Refactored CN.setOwner() to CN.setRightsHolder().
for test to compile, provide BaseException param for setReplicationStatus. I used a NotAuthorized instance.
query for deleted metadata when testing that replication communicated the deletion. to check data, we try to update the data object on the target node (which should fail)
add test for data locking
delete data and eml on the home Metacat and check that the change propagates to the target