Metacat uses JUnit tests to test its core functionality. These tests are
good for testing the internal workings of an application, but don't test the
layout and appearance. JUnit tests are meant to be one tool in the developer's
test arsenal. If you are not familiar with JUnit, you should seek out some
tutorial documentation online. One such tutorial is the Clarkware JUnit primer.
Metacat test cases will need to be run on the same server as the Metacat
instance that you want to test. Since Metacat and its test cases share the same
configuration files, there is no way to run the tests remotely.
Metacat test cases are located in the code at:
<workspace>/metacat/test/edu/ucsb/nceas/metacat*/
There you will find several Java files that define JUnit tests.
Test cases are run via an ant task, and output from the tests appears in
a build directory. More on this to follow.
All you need to do to get your JUnit test included in the Metacat test
suite is to create it in one of the <workspace>/metacat/test/edu/ucsb/nceas/metacat*/
directories. The ant test tasks will automatically pick it up.
The following methods are required in a test case class:
- public <Constructor>(String name) - The constructor for the test class.
- public void setUp() - Set up any dependencies for the tests. This is run before each test case.
- public void tearDown() - Release any resources used by the tests. This is run after each test case.
- public static Test suite() - Define the test methods that need to be run.
- public void initialize() - Define any global initializations that need to be done.
Within your test methods, check for failures using the many assertion methods available.
Metacat test cases extend the MCTestCase base class, which holds common
methods and variables. Some of these include:
- SUCCESS/FAILURE - boolean variables holding the values for success and failure.
- metacatUrl, username, password - connection variables used for LDAP connectivity
- readDocumentIdWhichEqualsDoc() - method to read a document from the Metacat server.
- debug() - method to display debug output to standard error.
These are just a few examples to give you an idea of what is in MCTestCase.
The following are a few best practices when writing test cases:
- Extend MCTestCase - Although it is strictly possible to bypass MCTestCase
and extend the JUnit TestCase class directly, you should always extend
MCTestCase so your tests inherit the common methods and variables.
- Use Multiple Test Methods - Try to strike a balance between the number of test
methods and the size of each test. If a test method starts to get huge, see whether
you can break it into multiple tests based on functionality. If the number of
tests in the test suite starts to get large, consider separating them
into different test classes.
- Use assertion messages - Most assertion methods have an alternate version that
includes a message parameter. This message is shown if the assertion fails. You
should use this version of the assertion method.
- debug() - You should use the debug() method available in the MCTestCase class to
display debug output as opposed to System.err.println(). The test configuration will
allow you to turn off debug output when you use the debug() method.
As we discussed earlier, the test cases run from within ant tasks. There is a
task to run all tests and a task to run individual tests.
You will need to have ant installed on your system. For downloads and instructions,
visit the Apache Ant site.
The test cases will look at the server's metacat properties file for configuration,
so there are two places that need to be configured.
First, you need to edit the configuration file at:
<workspace>/metacat/test/test.properties
This should only hold one property: metacat.contextDir. This should point to
the context directory for the metacat server you are testing. For example:
metacat.contextDir=/usr/share/tomcat5.5/webapps/knb
The test classes will use this to determine where to look for the server
metacat.properties file.
The remainder of the configuration happens in the actual server's
metacat.properties file located at:
<workspace>/metacat/lib/metacat.properties
You will need to verify that all test.* properties are set correctly:
- test.printdebug - true if you want debug output, false otherwise
- test.metacatUrl - the URL for the metacat servlet (e.g. http://localhost:8080/knb/metacat)
- test.contextUrl - the URL for the metacat web service (e.g. http://localhost:8080/knb)
- test.metacatDeployDir - the directory where metacat is physically deployed (e.g. /usr/local/tomcat/webapps/knb)
- test.mcUser - the first metacat test user ("uid=kepler,o=unaffiliated,dc=ecoinformatics,dc=org" should be fine)
- test.mcPassword - the first metacat test password ("kepler" should be fine)
- test.mcAnotherUser - the second metacat test user. This user must be a member of the knb-usr
group in ldap. ("uid=test,o=NCEAS,dc=ecoinformatics,dc=org" should be fine)
- test.mcAnotherPassword - the second metacat test password ("test" should be fine)
- test.piscoUser - the pisco test user ("uid=piscotest,o=PISCO,dc=ecoinformatics,dc=org" should be fine)
- test.piscoPassword - the pisco test password ("testPW" should be fine)
- test.lterUser - the lter test user ("uid=tmonkey,o=LTER,dc=ecoinformatics,dc=org" should be fine)
- test.lterPassword - the lter test password ("T3$tusr" should be fine)
- test.testProperty - a property to verify that we can read properties (leave as "testing")
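Putting those values together, the test block in metacat.properties might look like the following (the URLs and deploy directory are the example values from above; adjust them to match your deployment):

```properties
test.printdebug=true
test.metacatUrl=http://localhost:8080/knb/metacat
test.contextUrl=http://localhost:8080/knb
test.metacatDeployDir=/usr/local/tomcat/webapps/knb
test.mcUser=uid=kepler,o=unaffiliated,dc=ecoinformatics,dc=org
test.mcPassword=kepler
test.mcAnotherUser=uid=test,o=NCEAS,dc=ecoinformatics,dc=org
test.mcAnotherPassword=test
test.piscoUser=uid=piscotest,o=PISCO,dc=ecoinformatics,dc=org
test.piscoPassword=testPW
test.lterUser=uid=tmonkey,o=LTER,dc=ecoinformatics,dc=org
test.lterPassword=T3$tusr
test.testProperty=testing
```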
Note that none of the test users should also be administrative users. This will mess up
the access tests since some document modifications will succeed when we expect them to fail.
Once this is done, you will need to rebuild and redeploy the Metacat server. Note that
changing these properties does nothing to change the way the Metacat server runs. Rebuilding
and redeploying merely makes the test properties available to the JUnit tests.
To run all tests, go to the <workspace>/metacat directory and type
ant clean test
You will see a line to standard output summarizing each test result.
To run one test, go to the <workspace>/metacat directory and type
ant clean runonetest -Dtesttorun=<test_name>
Where <test_name> is the name of the JUnit test class (without .java on
the end). You will see debug information print to standard error.
Regardless of whether you ran one test or all tests, you will see output in
the Metacat build directory in your code at:
<workspace>/metacat/build
There will be one output file for each test class. The files will look like
TEST-edu.ucsb.nceas.<test_dir>.<test_name>.txt
where <test_dir> is the metacat* directory where the test lives and
<test_name> is the name of the JUnit test class. These output files will have
all standard error and standard out output as well as information on assertion
failures in the event of a failed test.
Now and again it is necessary to restore your test database to an older schema version,
either because you need to test upgrade functionality or you need to test backwards
compatibility of code. This section describes how to get your db schema to an older
version.
It is assumed that you have an empty metacat database up and running with a
metacat user.
There are two types of scripts that need to be run in order to create a Metacat
schema:
- xmltables-<dbtype>.sql - where <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary tables for Metacat.
- loaddtdschema-<dbtype>.sql - where <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary seed data for Metacat.
One way to get the scripts you need is to check out the release tag for the version
of metacat that you want to install. You can then run the two scripts shown above to
create your database.
For convenience, the scripts to create each version have been extracted and
checked into:
<metacat_code>/src/scripts/metacat-db-versions
The files look like:
- <version>_xmltables-<dbtype>.sql - where <version> is the version
of the schema that you want to create and <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary tables for Metacat.
- <version>_loaddtdschema-<dbtype>.sql - where <version> is the version
of the schema that you want to create and <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary seed data for Metacat.
- <version>_cleanall-<dbtype>.sql - where <version> is the version
of the schema that you want to create and <dbtype> is either oracle or postgres
depending on what type of database you are running against. This is a convenience script
to clean out the changes for that version.
For instructions on running database scripts manually, please refer to:
how to run database scripts
The following sections describe some basic end user testing to stress
code that might not get tested by unit testing.
For each Skin:
- View main skin page by going to:
http://dev.nceas.ucsb.edu/knb/style/skins/<skin_name>
for each skin, where <skin_name> is in:
default, nceas, esa, knb, lter, ltss, obfs, nrs, sanparks, saeon
Note that the kepler skin is installed on a different metacat instance and can be found at:
http://kepler-dev.nceas.ucsb.edu/kepler
- Test logging in. Where applicable (available on the skin) log in using an LDAP account.
- Test Basic searching
- Do a basic search with nothing in the search field. Should return all docs.
- Select a distinct word in the title of a doc. Go back to main page and search for
that word.
- Select the link to the doc and open the metadata. Choose a distinct word from a
field that is not Title, Abstract, Keywords or Personnel. Go back to the main page and
search all fields (if applicable)
- Test Advanced Searching
- On the main page, choose advanced search (if applicable)
- Test a variety of different search criteria
- Test Registry (if applicable)
- Create a new account
- use the "forgot your password" link
- change your password
- Test Viewing Document
- Download Metadata
- Choose the metadata download
- Save the file
- view contents for basic validity (contents exist, etc)
- Download Data
- Choose the data download
- view the data for basic validity (contents exist, etc)
- View Data Table
- Find a document with a data table
- Choose to view the data table
- view the data table for basic validity (contents exist, etc)
The following skins use a Perl-based LDAP web interface to create
accounts, change passwords and reset forgotten passwords:
default, nceas, esa, knb, lter, ltss, obfs, nrs, sanparks, saeon
Following the instructions in the Testing Skins section
go to each of these skins and test:
- Create LDAP Account
- Choose the "Create a New Account" link
- Fill out the required information.
- Choose a username that will be easy to find and remove from ldap later.
- Use your real email address
- Hit the "Register" button
- You may see a page with similar accounts. If so, choose to continue.
- You should get a "Registration Succeeded" message.
- Change LDAP Password (if available)
- Choose the "Change Your Password" link
- Fill out the requested information
- Hit the "Change password" button
- You should get a "Your password has been changed" message.
- Request Forgotten LDAP Password Reset
- Choose the "Forgot Your Password" link
- Enter your username
- Hit the "Reset Password" button
- You should get a "Your password has been reset" message.
- You should get an email with your new password
- Verify that you can log in with the new password
The following skins use a Perl-based registry service to register metadata and
data in metacat via the web:
nceas, esa, ltss, obfs, nrs, sanparks, saeon
Following the instructions in the Testing Skins section
go to each of these skins and test:
- Choose the "Register Dataset" link
- Fill out required fields. Note that there are typically many different fields.
You should test out different combinations including attaching datasets if
available.
- Hit the "Submit Dataset" button
- Review the information for accuracy
- Submit the data set
- You should get a "Success" message.
- Search for the data set in metacat and review for accuracy
The EcoGrid registry service maintains a database of systems that are available to EcoGrid. Primarily,
these are Metacat instances which are built with the EcoGrid service automatically activated. Testing
the registry service is somewhat complicated. The procedure described here uses Eclipse to test.
These instructions assume that you have Eclipse installed and the Seek project set up as a Java project
in Eclipse.
- Configure the Seek project in Eclipse
- Right click on the Seek project and go to Properties->Java Build Path->Source
- Only the following two directories should be set up as source:
- seek/projects/ecogrid/src
- seek/projects/ecogrid/tests
- Right click on the Seek project and go to Properties->Java Build Path->Libraries
- Add all Jars from:
- seek/projects/ecogrid/lib/
- seek/projects/ecogrid/lib/axis-1_3/
- seek/projects/ecogrid/build/lib/
- If you do not already have an Ant view open in Eclipse, in the menu, go to
Window->Show View->Ant
- Drag the file from the seek project at seek/projects/ecogrid/build.xml into
the Ant window you just opened.
- Double click the serverjar and stubjar targets to build those jar files.
- Right click on the Seek project and go to Properties->Java Build Path->Libraries
- Add the two Jar files you just created:
- seek/projects/ecogrid/build/lib/RegistryServiceImpl.jar
- seek/projects/ecogrid/build/lib/RegistryService-stub.jar
- View the RegistryServiceClient usage
- In Eclipse, go to the registry service client at:
seek/projects/ecogrid/src/org/ecoinformatics/ecogrid/client/RegistryServiceClient.java
- Right click on RegistryServiceClient.java and go to Run As->Open Run Dialog
- Name it something like "RegistryServiceClient noargs" since you are running it without arguments.
- Hit the "Apply" button and then the "Run" button.
- Proceed past the project error warning dialog
- In the Eclipse console you should see usage instructions that look like:
- Usage: java RegistryServiceClient add session_id local_file GSH
- Usage: java RegistryServiceClient update session_id docid local_file GSH
- Usage: java RegistryServiceClient remove session_id docid GSH
- Usage: java RegistryServiceClient list session_id GSH
- Usage: java RegistryServiceClient query session_id query_field query_string GSH
- Note: now you can run the client using the green "run" button in the Eclipse
menu. We will use that button from now on, instead of going to the java file.
- List Registry Services on dev
- In Eclipse, go to the green run button dropdown and choose "Open Run Dialog"
- Right click on the "RegistryServiceClient noargs" configuration you created earlier and choose "duplicate".
- Name your new configuration "RegistryServiceClient list dev.nceas"
- Go to the Arguments tab and enter: list 12345 http://dev.nceas.ucsb.edu/registry/services/RegistryService
- This conforms to the list usage we saw earlier
- Note that the session ID is not needed for listing, so we include a random value.
- GSH always refers to the server where the registry database is held.
- Choose "Run"
- Proceed past the project error warning dialog
- You should see a listing of details for all services registered on the dev server.
- Register a new service on dev
- Look in your service list you just printed and find a service that has a
service type of: http://ecoinformatics.org/identifierservice-1.0.0
- Get the service ID and use it to get the xml description from dev metacat by going to:
http://kepler-dev.nceas.ucsb.edu/kepler/metacat/<service_id>
- Save the file to disk
- Edit the file and change the id to something unique and the description to be something
easily recognizable.
- In the browser, go to: http://kepler-dev.nceas.ucsb.edu/kepler/style/skins/dev/login.html
- Log in and make note of the sessionId that was returned
- In Eclipse, go to the green run button dropdown and choose "Open Run Dialog"
- Right click on the "RegistryServiceClient noargs" configuration you created earlier and choose "duplicate".
- Name your new configuration "RegistryServiceClient add-test dev.nceas"
- Go to the Arguments tab and enter: add <sessionId> <xml_file_path> http://dev.nceas.ucsb.edu/registry/services/RegistryService
- This conforms to the add usage we saw earlier
- The <sessionId> is the id you got after logging in via the dev skin.
- The <xml_file_path> is the full path to the descriptor file you downloaded and modified.
- GSH always refers to the server where the registry database is held.
- Choose "Run"
- Proceed past the project error warning dialog
- You should see a message saying "The new id is <id>", where <id> is the unique id
you added to the service descriptor file.
- Follow the instructions shown above to list services to make sure your new service shows up
This section discusses the available load testing code and its usage.
The code to do load testing is located in the metacat project in this directory:
<metacat_src>/test/stress-test
The test code files are written in Python for the following tests:
- read - read-load-test.py
- insert - insert-load-test.py
- squery - squery-load-test.py
While these can be run directly from the command line, there is also a driver
file written in bash for convenience named: load-test-driver.sh
The insert and squery tests rely on the following template files respectively:
insert.xml.tmpl and squery.xml.tmpl
The insert and squery tests rely on dictionary files to create unique document IDs.
These files are generated using a shell script named:
generate-dictionaries.sh
Each of these files is discussed in the following sections.
The insert and squery tests (see following sections) will need to create unique document IDs to avoid conflicts and to bypass caching mechanisms. The dictionary files are created by running:
./generate-dictionaries.sh
This will create a separate file for each letter of the alphabet that looks like:
dictionary-a.txt, dictionary-b.txt, etc.
Each file will contain all the five letter word combinations that start with the
letter associated with that file. You should run this script right away, as it takes a little
time to run.
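As a rough illustration of the dictionary format (this Python sketch is an assumption about what the shell script produces, not the script itself), the words for a single letter could be enumerated like this:

```python
import itertools
import string

def generate_dictionary(letter):
    """Yield every five-letter lowercase combination starting with `letter`,
    mimicking the dictionary-<letter>.txt files described above."""
    for tail in itertools.product(string.ascii_lowercase, repeat=4):
        yield letter + "".join(tail)

# Each dictionary holds 26^4 = 456,976 words, which is why the
# generator script takes a little time to run.
words = list(generate_dictionary("a"))
print(len(words))        # word count for dictionary-a.txt
print(words[0], words[-1])
```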
The insert load test is run via a python script with the following usage:
./insert-load-test.py <letter> <iterations> <interval> <host> 2>&1 &
Where:
- letter - the letter of the dictionary you want to use to generate doc IDs.
- iterations - the number of inserts you would like the test to perform.
- interval - the delay in seconds between each insert. You can enter a decimal for
less than one second.
- host - the server that is running the instance of metacat you are load testing.
You should not be running the test drivers on the same machine as metacat, since that
could affect the outcome of the load test.
The insert test will iterate through the dictionary for the letter you have specified.
For each word, it will create a doc ID that looks like:
<word><epoch_date>.<word_index>.1
For instance, if the test started at epoch date 123914076495 and the 67th word in the
dictionary file (for letter c) is caacp, your doc ID will look like:
caacp123914076495.67.1
This doc ID is substituted for each value of @!docid!@ in the insert template at:
insert.xml.tmpl
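The doc ID construction and template substitution described above can be sketched in Python roughly as follows (the helper names are illustrative assumptions, not functions from the actual test script):

```python
import time

def make_docid(word, index, epoch=None):
    """Build an insert-test doc ID of the form <word><epoch_date>.<word_index>.1"""
    if epoch is None:
        epoch = int(time.time())
    return "%s%d.%d.1" % (word, epoch, index)

def fill_template(template_text, docid):
    """Substitute the doc ID for every @!docid!@ marker in the template."""
    return template_text.replace("@!docid!@", docid)

# The 67th word of the letter-c dictionary, combined with the start time:
docid = make_docid("caacp", 67, epoch=123914076495)
print(docid)  # caacp123914076495.67.1
```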
Each doc will then be inserted into Metacat using the metacat.py interface file.
Output will be written to a file named:
insert-<letter>.out
Note that you can run several of the insert tests at the same time. You should run
each against a different letter to avoid doc ID naming conflicts and to be able to
view the output from each test in a different output file. See the load-test-driver.sh
for some examples.
The squery load test is run via a python script with the following usage:
./squery-load-test.py <letter> <iterations> <interval> <host> 2>&1 &
Where:
- letter - the letter of the dictionary you want to use to generate doc IDs.
- iterations - the number of squeries you would like the test to perform.
- interval - the delay in seconds between each squery. You can enter a decimal for
less than one second.
- host - the server that is running the instance of metacat you are load testing.
You should not be running the test drivers on the same machine as metacat, since
that could affect the outcome of the load test.
The squery test will iterate through the dictionary for the letter you have specified.
For each word, it will create a query by substituting the dictionary word for every
instance of @!search-word!@ in the squery template at:
squery.xml.tmpl
Each of these queries will be run against Metacat using the metacat.py interface file.
By changing the query for each word, we ensure that we do not get cached query results
back from Metacat, which would not generate significant load.
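The per-word query construction can be sketched the same way; because every dictionary word yields a distinct query document, none of them can be answered from the query cache. (The template string here is a placeholder, not the contents of the real squery.xml.tmpl.)

```python
def build_squery(template_text, word):
    """Substitute the dictionary word for every @!search-word!@ marker."""
    return template_text.replace("@!search-word!@", word)

# Placeholder template; the real squery.xml.tmpl is a full Metacat
# structured query document.
template = "<pathquery>@!search-word!@</pathquery>"

# Two different words yield two different query payloads,
# so neither can be served from the query cache.
q1 = build_squery(template, "caacp")
q2 = build_squery(template, "caacq")
print(q1 != q2)  # True
```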
Output will be written to a file named:
squery-<letter>.out
Note that you can run several of the squery tests at the same time. You should run
each against a different letter to avoid doc ID naming conflicts and to be able to
view the output from each test in a different output file. See the load-test-driver.sh
for some examples. If you are going to run a test against the same letter more than
once, you will need to restart the instance of Metacat being tested to clear the query
cache on that system.
The read load test is run via a python script with the following usage:
./read-load-test.py <letter> <iterations> <interval> <host> 2>&1 &
Where:
- letter - the read test does not use a dictionary. The letter simply ensures that
each test reads a different document and writes to its own output file.
- iterations - the number of reads you would like the test to perform.
- interval - the delay in seconds between each read. You can enter a decimal for
less than one second.
- host - the server that is running the instance of metacat you are load testing.
You should not be running the test drivers on the same machine as metacat, since
that could affect the outcome of the load test.
The read test will create a doc ID that looks like:
readtest-<letter><epoch_date>.1.1
It will do a single insert using that doc ID and the template at:
insert.xml.tmpl
It will then do a read of that document from Metacat using the metacat.py interface
file for the number of iterations you have specified.
Output will be written to a file named:
read-<letter>.out
Note that you can run several of the read tests at the same time. You should run
each against a different letter to avoid doc ID naming conflicts and to be able to
view the output from each test in a different output file. See the load-test-driver.sh
for some examples.
There is a very simple driver script that allows you to easily run multiple instances
and combinations of the different load tests called:
./load-test-driver.sh
Uncomment the tests you want to run.