Convert SQLExceptions to RuntimeExceptions for Hazelcast MapStore operations.
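A minimal sketch of the conversion, assuming the DataONE Identifier/SystemMetadata types; the writeSystemMetadata() helper is hypothetical. Hazelcast's MapStore methods declare no checked exceptions, so the SQLException has to be wrapped:

```java
// Fragment of a MapStore<Identifier, SystemMetadata> implementation (sketch).
public void store(Identifier key, SystemMetadata value) {
    try {
        writeSystemMetadata(key, value);   // hypothetical JDBC write to the systemmetadata table
    } catch (SQLException e) {
        // store() cannot declare SQLException, so rethrow it unchecked;
        // callers of IMap.put() then see the failure instead of a silent drop.
        throw new RuntimeException("Unable to store system metadata for "
                + key.getValue(), e);
    }
}
```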
Keep the hzIdentifiers set in sync with the Metacat systemmetadata table. If entries are added/updated in the hzSystemMetadata map, make sure the identifier is in the set. If (for some administrative reason) the entry is removed, remove the identifier from the set. This usually doesn't happen.
When loading all keys from Metacat into the hzSystemMetadata map, also load identifiers into the hzIdentifiers set if they are not already there. Although entries may be evicted from the map, the list of identifiers will remain. The list will have a fairly small memory footprint since it's just identifiers.
Add support for the distributed Set of unique identifiers in the storage cluster called 'hzIdentifiers'. This set is a persistent total list of all identifiers (even when entries in the hzSystemMetadata map are evicted). It reflects the state of the identifiers in the postgresql systemmetadata table, but is distributed across the cluster. Add the getIdentifiers() method, which returns the ISet of identifiers.
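A sketch of how the set can be kept in step with the map and exposed; apart from hzIdentifiers, hzSystemMetadata, and getIdentifiers(), the names are illustrative:

```java
// Fragment of a HazelcastService-style class that also listens on hzSystemMetadata.
private ISet<Identifier> identifiers;

public ISet<Identifier> getIdentifiers() {
    if (identifiers == null) {
        identifiers = hzInstance.getSet("hzIdentifiers");
    }
    return identifiers;
}

public void entryAdded(EntryEvent<Identifier, SystemMetadata> event) {
    getIdentifiers().add(event.getKey());      // new entry: make sure the pid is listed
}

public void entryUpdated(EntryEvent<Identifier, SystemMetadata> event) {
    getIdentifiers().add(event.getKey());      // updates keep the pid listed
}

public void entryRemoved(EntryEvent<Identifier, SystemMetadata> event) {
    getIdentifiers().remove(event.getKey());   // administrative removal: drop the pid
}

public void entryEvicted(EntryEvent<Identifier, SystemMetadata> event) {
    // eviction is only a cache miss; the pid stays in the set
}
```

When loadAllKeys() walks the systemmetadata table, the same set can be bulk-populated with getIdentifiers().addAll(...) so that identifiers stay listed even after their map entries are evicted.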
Use a Logger instead of System.out for SystemMetadataMap.
Use Lock instead of ILock to be consistent across classes.
evict the Hazelcast SystemMetadata entry if we update the access control rules via Metacat's legacy API; otherwise stale SystemMetadata stays in memory instead of being looked up from the backing table store.
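For example, after the legacy access-control update commits, something along these lines forces the next read to come from the backing store (the accessor name is hypothetical, and the map is assumed to be keyed by Identifier):

```java
Identifier pid = new Identifier();
pid.setValue(guid);   // guid of the object whose access rules just changed
IMap<Identifier, SystemMetadata> sysMetaMap =
        HazelcastService.getInstance().getSystemMetadataMap();   // hypothetical accessor
sysMetaMap.evict(pid);   // drop the stale copy; the next get() goes through the MapLoader
```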
Set a default HazelcastInstance after init() is called, and use this instance in getLock() to acquire a lock in the cluster.
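A sketch of that wiring (the field name and the buildConfig() helper are illustrative):

```java
private HazelcastInstance hzInstance;   // the default instance, set once init() has run

public void init() {
    Config config = buildConfig();      // see the configuration-related entries below
    hzInstance = Hazelcast.newHazelcastInstance(config);
}

public Lock getLock(String identifier) {
    // cluster-wide lock obtained from the default instance, keyed on the identifier string
    return hzInstance.getLock(identifier);
}
```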
use shared method for looking up "docInfo" map -- both in Metacat replication and in D1 system metadata generation
When using ILock.lock(), get a lock on the string value of the Identifier, not the Identifier object itself. Hazelcast locking won't work otherwise.
Use the Hazelcast ILock mechanism to lock the system metadata identifier rather than using IMap.lock(pid).
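The resulting pattern looks roughly like this; per the entry above, the lock is taken on pid.getValue() rather than on the Identifier object or via IMap.lock():

```java
ILock idLock = hzInstance.getLock(pid.getValue());   // lock the String form of the pid
idLock.lock();
try {
    SystemMetadata sysMeta = systemMetadataMap.get(pid);
    // ... inspect or modify the entry, then put() it back if it changed ...
    systemMetadataMap.put(pid, sysMeta);
} finally {
    idLock.unlock();
}
```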
delete system metadata when MN.delete() is called.
make MNodeServiceTest pass JUnit testing
return null instead of throwing an exception when pid is not found in store
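In MapLoader terms this means load() reports a miss with null rather than an exception (sketch; readSystemMetadata() is a hypothetical JDBC lookup):

```java
public SystemMetadata load(Identifier key) {
    try {
        return readSystemMetadata(key);   // returns null when the pid has no row in the store
    } catch (SQLException e) {
        throw new RuntimeException(e);    // real errors still surface to the caller
    }
}
```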
comment out resynch() method until errors are resolved
use default hazelcast config when not configured to use an external one
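A sketch of the fallback, assuming the external file path is optional configuration:

```java
private Config loadConfig(String externalConfigPath) throws IOException {
    if (externalConfigPath == null || !new File(externalConfigPath).exists()) {
        return new Config();   // no external file configured: use Hazelcast's built-in defaults
    }
    // otherwise parse the external XML configuration file
    return new XmlConfigBuilder(new FileInputStream(externalConfigPath)).build();
}
```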
For now, remove the hzClient code connecting to the DataONE process cluster to get the hzNodes map. This will be moved into the storage cluster, but use D1Client to get the node list for now.
going back to using Identifier as the key for the ObjectPathMap.
Reverting previous @Override change from r6470, as that is the desired behavior under Java 1.6 -- previous versions of Java (e.g., 1.5) will not compile with this usage of the @Override annotation, but the currently supported version will. So reverting to the 1.6 convention.
Removing incorrect @Override annotations that were preventing compilation. The methods marked did not actually override a method in the superclass, so they were not compiling. I think @Override was being applied to methods that implement an interface method but are not actually in the superclass.
Remove references to CNReplicationTask.
Remove the CNReplicationTask (for now). We will be using Metacat's ForceReplicationHandler to replicate science metadata across CNs, and may explore the use of a 100% evicted hzScienceMetadata map. Either way, the distributed task design won't be needed. When a dropped CN comes back online, we'll catch it up based on last modified dates for PIDs in the hzSystemMetadata map.
Add getNodesMap() to return the hzNodes map from the process cluster. Remove getPendingReplicationTasks since that structure is being removed. Add minor documentation.
look up the latest system metadata update date for use in CN-to-CN synchronization when an offline node comes back online
changed the key type from Identifier to String for ObjectPathMap (a Comparable key is needed).
rework this to be MN->MN replication. Should be fleshed out more.
throw RuntimeExceptions when store() methods throw declared exceptions -- we want callers to put() to be alerted if there are errors.
move CNReplicationTask to the hazelcast package
check if system metadata exists rather than just the id mapping (before creating the entry)
move bulk of the Hazelcast code into HazelcastService from CNodeService so that it is centrally located - easier to manage and configure
removing unneeded class (never used)
cleaned up mock tests on hzObjectPathMap. Split out code for mocking a datastore into MockObjectPathMap.
initialize Hazelcast from the custom configuration when initializing the Metacat service.
handle entryAdded (to hzSystemMetadata) to store sysmeta to local store when it is not already present
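A sketch of that listener callback; the existence check and local write are shown with hypothetical IdentifierManager-style calls:

```java
public void entryAdded(EntryEvent<Identifier, SystemMetadata> event) {
    Identifier pid = event.getKey();
    SystemMetadata sysMeta = event.getValue();
    try {
        // only write when this member does not already hold the record locally
        if (!IdentifierManager.getInstance().systemMetadataExists(pid.getValue())) {  // hypothetical check
            IdentifierManager.getInstance().insertSystemMetadata(sysMeta);            // hypothetical write
        }
    } catch (Exception e) {
        logMetacat.error("Could not save system metadata for " + pid.getValue(), e);
    }
}
```

The listener has to be registered with addEntryListener(listener, true) so the event carries the SystemMetadata value.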
use HashMap, HashSet instead of the Tree* classes that require Identifier objects implement Comparable
configuring Hazelcast tests
further refactoring and start of unit tests for the Hazelcast elements
small code cleanup - removed unused instantiations of DBUtils.
fixed logic with respect to localID and docid. Implemented a new method in IdentifierManager, getAllGUIDs(), to read the identifier table for the loadAllKeys() implementation in ObjectPathMapLoader.
further development of ObjectPathMapLoader.
new class for refreshing the Hazelcast map from Metacat. Initial commit.
Add in the Hazelcast Id generation namespace and an IdGenerator instance for task ids. Hazelcast will produce cluster-wide unique ids for the "task-ids" namespace, to be used when creating CNReplicationTask objects.
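For example (a sketch; only the "task-ids" namespace is taken from the entry above):

```java
IdGenerator taskIdGenerator = hzInstance.getIdGenerator("task-ids");
long taskId = taskIdGenerator.newId();   // cluster-wide unique id for a new CNReplicationTask
```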
Modify HazelcastService to read configuration information from an on-disk file (not from a jar file). Added init() to start up the service (it was calling doRefresh() before). We still need to decide if this is a refreshable service.
move HazelcastService to D1 package
added Hazelcast MapStore and MapLoader implementation for SystemMetadata
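An outline of what such an implementation looks like, assuming the DataONE v1 types and standing in for the JDBC code with private helper stubs:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import com.hazelcast.core.MapStore;
import org.dataone.service.types.v1.Identifier;
import org.dataone.service.types.v1.SystemMetadata;

/** Sketch of a MapStore/MapLoader backing hzSystemMetadata with Metacat's systemmetadata table. */
public class SystemMetadataMap implements MapStore<Identifier, SystemMetadata> {

    // --- MapLoader side ---
    public SystemMetadata load(Identifier key) {
        return readFromTable(key);             // null when the pid is not in the store
    }

    public Map<Identifier, SystemMetadata> loadAll(Collection<Identifier> keys) {
        Map<Identifier, SystemMetadata> result = new HashMap<Identifier, SystemMetadata>();
        for (Identifier key : keys) {
            SystemMetadata value = readFromTable(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    public Set<Identifier> loadAllKeys() {
        return readAllIdentifiersFromTable();  // every guid in the systemmetadata table
    }

    // --- MapStore side ---
    public void store(Identifier key, SystemMetadata value) {
        writeToTable(key, value);              // wraps SQLException in RuntimeException
    }

    public void storeAll(Map<Identifier, SystemMetadata> map) {
        for (Map.Entry<Identifier, SystemMetadata> entry : map.entrySet()) {
            writeToTable(entry.getKey(), entry.getValue());
        }
    }

    public void delete(Identifier key) {
        deleteFromTable(key);
    }

    public void deleteAll(Collection<Identifier> keys) {
        for (Identifier key : keys) {
            deleteFromTable(key);
        }
    }

    // The helpers below stand in for the JDBC code against the systemmetadata table.
    private SystemMetadata readFromTable(Identifier key) { return null; }
    private Set<Identifier> readAllIdentifiersFromTable() { return new HashSet<Identifier>(); }
    private void writeToTable(Identifier key, SystemMetadata value) { }
    private void deleteFromTable(Identifier key) { }
}
```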