Metacat uses JUnit tests to test its core functionality. These tests are
good for exercising the internal workings of an application, but they do not test
layout and appearance. JUnit tests are meant to be one tool in the developer's
test arsenal. If you are not familiar with JUnit, you should look for
tutorial documentation online; one such tutorial is the Clarkware JUnit Primer.
Metacat test cases will need to be run on the same server as the Metacat
instance that you want to test. Since Metacat and its test cases share the same
configuration files, there is no way to run the tests remotely.
Metacat test cases are located in the code at:
<workspace>/metacat/test/edu/ucsb/nceas/metacat*/
There you will find several java files that define JUnit tests.
Test cases are run via an ant task, and output from the tests appears in
a build directory. More on this to follow.
All you need to do to get your JUnit test included in the Metacat test
suite is to create it in one of the <workspace>/metacat/test/edu/ucsb/nceas/metacat*/
directories. The ant test tasks will pick it up automatically.
The following methods are required in a test case class:
- public <Constructor>(String name) - The constructor for the test class.
- public void setUp() - Set up any dependencies for the tests. This is run before each test case.
- public void tearDown() - Release any resources used by the tests. This is run after each test case.
- public static Test suite() - Define the test methods that need to be run.
- public void initialize() - Define any global initializations that need to be done.
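The methods above give a Metacat test case its basic shape. The following is a stand-alone sketch of that shape. In the real source tree the base class is MCTestCase (which extends JUnit 3's TestCase) and suite() returns a junit.framework.TestSuite; so that this sketch runs without the Metacat or JUnit jars, a minimal stand-in base class is defined inline. Every *Stub name and sample value below is hypothetical.

```java
public class MyFeatureTestSketch {

    // Minimal stand-in for MCTestCase / junit.framework.TestCase,
    // defined inline only so this sketch is self-contained.
    static abstract class MCTestCaseStub {
        protected String metacatUrl; // connection variables MCTestCase provides
        protected String username;
        protected String password;

        // MCTestCase routes debug output to standard error and lets the
        // test configuration switch it off.
        protected void debug(String message) {
            System.err.println(message);
        }

        // The assertion overload that carries a failure message.
        protected void assertTrue(String message, boolean condition) {
            if (!condition) {
                throw new AssertionError(message);
            }
        }

        public void setUp() {}    // run before each test method
        public void tearDown() {} // run after each test method
    }

    // The shape of a Metacat test case.
    static class MyFeatureTest extends MCTestCaseStub {
        @Override
        public void setUp() {
            // In a real test these values come from metacat.properties.
            metacatUrl = "http://localhost:8080/knb/metacat"; // hypothetical
        }

        public void testServletUrl() {
            debug("checking servlet URL");
            assertTrue("servlet URL should end with /metacat",
                    metacatUrl.endsWith("/metacat"));
        }
    }

    // Stand-in for suite(): run each listed test method between setUp()
    // and tearDown(), counting failures, as JUnit 3 would.
    public static int runSuite() {
        MyFeatureTest test = new MyFeatureTest();
        int failures = 0;
        test.setUp();
        try {
            test.testServletUrl();
        } catch (AssertionError e) {
            failures++;
        } finally {
            test.tearDown();
        }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println("failures: " + runSuite());
    }
}
```

In a real test class you would drop the stub, extend MCTestCase directly, and let JUnit drive the setUp()/test/tearDown() cycle for you.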
You will test for success or failure using the many assertion methods JUnit makes available.
Metacat test cases extend the MCTestCase base class, which holds common
methods and variables. Some of these include:
- SUCCESS/FAILURE - boolean variables holding the values for success and failure.
- metacatUrl, username, password - connection variables used for LDAP connectivity
- readDocumentIdWhichEqualsDoc() - method to read a document from Metacat server.
- debug() - method to display debug output to standard error.
These are just a few examples to give you an idea of what is in MCTestCase.
The following are a few best practices when writing test cases:
- Extend MCTestCase - Although, strictly speaking, it is possible to bypass MCTestCase
and extend the JUnit TestCase class directly, you should not do so. Always
extend the MCTestCase class.
- Write multiple test methods - Try to strike a balance between the number of test
methods and the size of each test. If a test method starts to get huge, see whether
you can break it down into multiple tests based on functionality. If the number of
tests in the test suite starts to get large, see whether it makes sense to
separate them into different test classes.
- Use assertion messages - Most assertion methods have an overload that
includes a message parameter. This message is shown when the assertion fails. You
should use this version of the assertion method.
- debug() - Use the debug() method available in the MCTestCase class to
display debug output rather than System.err.println(). The test configuration
allows you to turn off debug output when you use the debug() method.
As we discussed earlier, the test cases run from within ant tasks. There is a
task to run all tests and a task to run individual tests.
You will need to have ant installed on your system. For downloads and instructions,
visit the Apache Ant site.
The test cases read their configuration from the server's metacat.properties file,
so there are two places that need to be configured.
First, you need to edit the configuration file at:
<workspace>/metacat/test/test.properties
This file should hold only one property, metacat.contextDir, which should point to
the context directory of the metacat server you are testing. For example:
metacat.contextDir=/usr/share/tomcat5.5/webapps/knb
The test classes will use this to determine where to look for the server
metacat.properties file.
The remainder of the configuration happens in the actual server's
metacat.properties file, located at:
<workspace>/metacat/lib/metacat.properties
You will need to verify that all test.* properties are set correctly:
- test.printdebug - true if you want debug output, false otherwise
- test.metacatUrl - the url for the metacat servlet (e.g. http://localhost:8080/knb/metacat)
- test.contextUrl - the url for the metacat web service (e.g. http://localhost:8080/knb)
- test.metacatDeployDir - the directory where metacat is physically deployed (e.g. /usr/local/tomcat/webapps/knb)
- test.mcUser - the first metacat test user ("uid=kepler,o=unaffiliated,dc=ecoinformatics,dc=org" should be fine)
- test.mcPassword - the first metacat test password ("kepler" should be fine)
- test.mcAnotherUser - the second metacat test user. This user must be a member of the knb-usr
group in ldap. ("uid=test,o=NCEAS,dc=ecoinformatics,dc=org" should be fine)
- test.mcAnotherPassword - the second metacat test password ("test" should be fine)
- test.piscoUser - the pisco test user ("uid=piscotest,o=PISCO,dc=ecoinformatics,dc=org" should be fine)
- test.piscoPassword - the pisco test password ("testPW" should be fine)
- test.lterUser - the lter test user ("uid=tmonkey,o=LTER,dc=ecoinformatics,dc=org" should be fine)
- test.lterPassword - the lter test password ("T3$tusr" should be fine)
- test.testProperty - a property to verify that we can read properties (leave as "testing")
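Put together, the test section of metacat.properties looks something like the following. The values simply repeat the defaults listed above; adjust them to your environment:

```properties
test.printdebug=true
test.metacatUrl=http://localhost:8080/knb/metacat
test.contextUrl=http://localhost:8080/knb
test.metacatDeployDir=/usr/local/tomcat/webapps/knb
test.mcUser=uid=kepler,o=unaffiliated,dc=ecoinformatics,dc=org
test.mcPassword=kepler
test.mcAnotherUser=uid=test,o=NCEAS,dc=ecoinformatics,dc=org
test.mcAnotherPassword=test
test.piscoUser=uid=piscotest,o=PISCO,dc=ecoinformatics,dc=org
test.piscoPassword=testPW
test.lterUser=uid=tmonkey,o=LTER,dc=ecoinformatics,dc=org
test.lterPassword=T3$tusr
test.testProperty=testing
```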
Note that none of the test users should also be administrative users. This will mess up
the access tests since some document modifications will succeed when we expect them to fail.
Once this is done, you will need to rebuild and redeploy the Metacat server. Note that
changing these properties does nothing to change the way the Metacat server runs. Rebuilding
and redeploying merely makes the test properties available to the JUnit tests.
To run all tests, go to the <workspace>/metacat directory and type
ant clean test
You will see a line to standard output summarizing each test result.
To run one test, go to the <workspace>/metacat directory and type
ant clean runonetest -Dtesttorun=<test_name>
Where <test_name> is the name of the JUnit test class (without .java on
the end). You will see debug information print to standard error.
Regardless of whether you ran one test or all tests, you will see output in
the Metacat build directory in your code at:
<workspace>/metacat/build
There will be one output file for each test class. The files will look like
TEST-edu.ucsb.nceas.<test_dir>.<test_name>.txt
where <test_dir> is the metacat* directory where the test lives and
<test_name> is the name of the JUnit test class. These output files will have
all standard error and standard out output as well as information on assertion
failures in the event of a failed test.
Now and again it is necessary to restore your test database to an older schema version,
either because you need to test upgrade functionality or because you need to test backwards
compatibility of code. This section describes how to get your database schema to an older
version.
It is assumed that you have an empty metacat database up and running with a
metacat user.
There are two types of scripts that need to be run in order to create a Metacat
schema:
- xmltables-<dbtype>.sql - where <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary tables for Metacat.
- loaddtdschema-<dbtype>.sql - where <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary seed data for Metacat.
One way to get the scripts you need is to check out the release tag for the version
of metacat that you want to install. You can then run the two scripts shown above to
create your database.
For convenience, the scripts to create each version have been extracted and
checked into:
<metacat_code>/src/scripts/metacat-db-versions
The files look like:
- <version>_xmltables-<dbtype>.sql - where <version> is the version
of the schema that you want to create and <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary tables for Metacat.
- <version>_loaddtdschema-<dbtype>.sql - where <version> is the version
of the schema that you want to create and <dbtype> is either oracle or postgres
depending on what type of database you are running against. This script creates the
necessary seed data for Metacat.
- <version>_cleanall-<dbtype>.sql - where <version> is the version
of the schema that you want to create and <dbtype> is either oracle or postgres
depending on what type of database you are running against. This is a convenience script
to clean out the changes for that version.
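As a hedged sketch, restoring an older postgres schema from the versioned scripts might look like the following. The version number 1.9.5 and the database/user name "metacat" are examples only, not values taken from this guide; the psql commands are shown commented out so you can review them before running:

```shell
# Hypothetical example: restore an older postgres schema.
VERSION=1.9.5
DBTYPE=postgres
TABLES_SCRIPT="${VERSION}_xmltables-${DBTYPE}.sql"
SEED_SCRIPT="${VERSION}_loaddtdschema-${DBTYPE}.sql"
echo "would run ${TABLES_SCRIPT} then ${SEED_SCRIPT}"
# From <metacat_code>/src/scripts/metacat-db-versions:
# psql -U metacat -d metacat -f "${TABLES_SCRIPT}"
# psql -U metacat -d metacat -f "${SEED_SCRIPT}"
```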
For instructions on running database scripts manually, please refer to:
how to run database scripts
The following sections describe some basic end user testing to stress
code that might not get tested by unit testing.
For each Skin:
- View main skin page by going to:
http://dev.nceas.ucsb.edu/knb/style/skins/<skin_name>
for each skin, where <skin_name> is in:
default, nceas, esa, knb, kepler, lter, ltss, obfs, nrs, sanparks, saeon
- Test logging in. Where applicable (i.e., where the skin offers it), log in using an LDAP account.
- Test Basic searching
- Do a basic search with nothing in the search field. Should return all docs.
- Select a distinct word in the title of a doc. Go back to main page and search for
that word.
- Select the link to the doc and open the metadata. Choose a distinct word from a
field that is not Title, Abstract, Keywords or Personnel. Go back to the main page and
search all fields (if applicable)
- Test Advanced Searching
- On the main page, choose advanced search (if applicable)
- Test a variety of different search criteria
- Test Registry (if applicable)
- Create a new account
- use the "forgot your password" link
- change your password
- Test Viewing Document
- Download Metadata
- Choose the metadata download
- Save the file
- view contents for basic validity (contents exist, etc)
- Download Data
- Choose the data download
- view the data for basic validity (contents exist, etc)
- View Data Table
- Find a document with a data table
- Choose to view the data table
- view the data table for basic validity (contents exist, etc)