Testing Metacat

Table of Contents
About Metacat Testing
Overview
JUnit Implementation in Metacat
Writing a Test Case
Basics
MCTestCase Base Class
Best Practices
Running Test Cases
Ant task
Configure Metacat For Testing
Run All Tests
Run One Test
Viewing Test Output
Testing Different Database Schema Versions
Scripts to Run
Get Scripts Via Checkout
Script Repository
Manually Run Scripts
User Testing
Testing Skins
Testing LDAP Web Interface
Testing Metadata Registry
Testing the EcoGrid Registry Service
Load Testing
Load Test Code Files
Generating Dictionary Files
Insert Load Test
SQuery Load Test
Read Load Test
Test Driver Script
Profiling Metacat
About Metacat Testing
Overview

Metacat uses JUnit tests to test its core functionality. These tests are good for exercising the internal workings of an application, but they do not test layout and appearance. JUnit tests are meant to be one tool in the developer's test arsenal. If you are not familiar with JUnit, you should work through one of the tutorials available online, such as the Clarkware JUnit primer.

Metacat test cases will need to be run on the same server as the Metacat instance that you want to test. Since Metacat and its test cases share the same configuration files, there is no way to run the tests remotely.

JUnit Implementation in Metacat

Metacat test cases are located in the code at:

<workspace>/metacat/test/edu/ucsb/nceas/metacat*/
There you will find several Java files that define JUnit tests.

Test cases are run via an ant task, and output from the tests appears in a build directory. More on this to follow.

Writing a Test Case
Basics

All you need to do to get your JUnit test included in the Metacat test suite is to create it in one of the <workspace>/metacat/test/edu/ucsb/nceas/metacat*/ directories. The ant test tasks will pick it up automatically.

A JUnit test case class typically defines a constructor, setUp() and tearDown() methods, a suite() method that assembles the individual tests, and one or more testXxx() methods.

You test for failure using the many assertion methods available (assertTrue(), assertEquals(), fail(), and so on); a minimal sketch follows.
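
As an illustration, a minimal test case might look like the sketch below. The class and test names are hypothetical, and the sketch assumes the JUnit 3 style (junit.framework.TestCase) that Metacat's test suite is built on.

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class ExampleTest extends TestCase {

    public ExampleTest(String name) {
        super(name);
    }

    /** Set up any fixtures the tests need. */
    protected void setUp() throws Exception {
        super.setUp();
    }

    /** Release resources after each test completes. */
    protected void tearDown() throws Exception {
        super.tearDown();
    }

    /** Assemble the individual tests into a suite. */
    public static Test suite() {
        TestSuite suite = new TestSuite();
        suite.addTest(new ExampleTest("testDocIdFormat"));
        return suite;
    }

    /** An individual test method; use the JUnit assertions to check results. */
    public void testDocIdFormat() {
        String docid = "test" + "." + 100 + "." + 1;
        assertEquals("unexpected doc ID", "test.100.1", docid);
        assertTrue(docid.startsWith("test"));
    }
}

In practice, Metacat's own test classes extend MCTestCase rather than TestCase directly, as described in the next section.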

MCTestCase Base Class

Metacat test cases extend the MCTestCase base class, which holds common methods and variables shared by the test classes.

Browse MCTestCase itself to get a full picture of what it provides; a brief illustrative sketch follows.
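
For example, a test that uses the base class might look something like the following sketch. The debug() helper and the metacatUrl field shown here are illustrative assumptions about the kind of members MCTestCase provides; check the class itself for its actual methods and variables.

public class ExampleMetacatTest extends MCTestCase {

    public ExampleMetacatTest(String name) {
        super(name);
    }

    public void testServerIsConfigured() throws Exception {
        // debug() and metacatUrl are hypothetical examples of the shared
        // helpers and configured variables a base class like MCTestCase holds.
        debug("Testing against: " + metacatUrl);
        assertNotNull("The Metacat URL should be configured", metacatUrl);
    }
}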

Best Practices

The following are a few best practices when writing test cases:

Running Test Cases
Ant task

As we discussed earlier, the test cases run from within ant tasks. There is a task to run all tests and a task to run individual tests.

You will need to have Ant installed on your system. For downloads and instructions, visit the Apache Ant site.

Configure Metacat For Testing

The test cases read their configuration from the server's metacat.properties file, so there are two places that need to be configured: the test.properties file that tells the tests where that server lives, and the server's metacat.properties file itself.

First, you need to edit the configuration file at:

<workspace>/metacat/test/test.properties
This file should hold only one property, metacat.contextDir, which points to the context directory of the Metacat server you are testing. For example:
metacat.contextDir=/usr/share/tomcat5.5/webapps/knb
The test classes use this value to determine where to find the server's metacat.properties file.

The remainder of the configuration happens in the actual server's metacat.properties file, located at:

<workspace>/metacat/lib/metacat.properties
You will need to verify that all of the test.* properties in this file are set correctly; an illustrative sketch follows.
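
For illustration only, a block of test settings might look something like the sketch below. The property names and values here are assumptions and may differ between Metacat versions; treat the test.* section of your own metacat.properties file as the authoritative list.

# illustrative example only -- check metacat.properties for the real test.* property names
test.printdebug=true
test.metacatUrl=http://localhost:8080/knb/metacat
test.mcUser=uid=testuser,o=NCEAS,dc=ecoinformatics,dc=org
test.mcPassword=not-a-real-password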

Note that none of the test users should also be administrative users; otherwise the access control tests will break, because document modifications that are expected to fail will instead succeed.

Once this is done, you will need to rebuild and redeploy the Metacat server. Note that changing these properties does nothing to change the way the Metacat server runs. Rebuilding and redeploying merely makes the test properties available to the JUnit tests.

Run All Tests

To run all tests, go to the <workspace>/metacat directory and type

ant clean test
A summary line for each test result is written to standard output.

Run One Test

To run one test, go to the <workspace>/metacat directory and type

ant clean runonetest -Dtesttorun=<test_name>
Where <test_name> is the name of the JUnit test class (without the .java extension). Debug information is printed to standard error.

Viewing Test Output

Regardless of whether you ran one test or all tests, output appears in the Metacat build directory at:

<workspace>/metacat/build
There will be one output file for each test class. The files are named like
TEST-edu.ucsb.nceas.<test_dir>.<test_name>.txt
where <test_dir> is the metacat* directory where the test lives and <test_name> is the name of the JUnit test class. These output files contain everything written to standard output and standard error, as well as details of any assertion failures when a test fails.

Testing Different Database Schema Versions

Now and again it is necessary to restore your test database to an older schema version, either because you need to test upgrade functionality or because you need to test backwards compatibility of code. This section describes how to bring your database schema to an older version.

Scripts to Run

It is assumed that you have an empty metacat database up and running with a metacat user.

There are two types of scripts that need to be run in order to create a Metacat schema:

Get Scripts Via Checkout

One way to get the scripts you need is to check out the release tag for the version of metacat that you want to install. You can then run the two scripts shown above to create your database.

Script Repository

For convenience, the scripts to create each version have been extracted and checked into:

<metacat_code>/src/scripts/metacat-db-versions

The files look like:

Manually Run Scripts

For instructions on running database scripts manually, please refer to the documentation on how to run database scripts.

User Testing

The following sections describe some basic end user testing to stress code that might not get tested by unit testing.

Testing Skins

For each Skin:

Testing LDAP Web Interface

The following skins use a Perl-based LDAP web interface to create accounts, change passwords, and reset forgotten passwords:

default, nceas, esa, knb, lter, ltss, obfs, nrs, sanparks, saeon

Following the instructions in the Testing Skins section, go to each of these skins and test:

Testing Metadata Registry

The following skins use a Perl-based registry service to register metadata and data in Metacat via the web:

nceas, esa, ltss, obfs, nrs, sanparks, saeon

Following the instructions in the Testing Skins section, go to each of these skins and test:

Testing the EcoGrid Registry Service

The EcoGrid registry service maintains a database of systems that are available to EcoGrid. Primarily, these are Metacat instances which are built with the EcoGrid service automatically activated. Testing the registry service is somewhat complicated. The procedure described here uses Eclipse to test. These instructions assume that you have Eclipse installed and the Seek project set up as a Java project in Eclipse.

Load Testing

This section discusses the available load testing code and its usage.

Load Test Code Files

The code to do load testing is located in the metacat project in this directory:

<metacat_src>/test/stress-test

The test code files are written in Python for the following tests:

While these can be run directly from the command line, a bash driver script named load-test-driver.sh is also provided for convenience.

The insert and squery tests rely on the following template files respectively:

insert.xml.tmpl and squery.xml.tmpl

The insert and squery tests rely on dictionary files to create unique document IDs. These files are generated using a shell script named:

generate-dictionaries.sh

Each of these files is discussed in the sections that follow.

Generating Dictionary Files

The insert and squery tests (see following sections) will need to create unique document IDs to avoid conflicts and to bypass caching mechanisms. The dictionary files are created by running:

./generate-dictionaries.sh

This will create a separate file for each letter of the alphabet, with names like:

dictionary-a.txt, dictionary-b.txt, etc.

Each file will contain all of the five-letter combinations that start with the letter associated with that file. You should run this script well before you plan to run the tests, as it takes some time to complete.
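
To make that concrete, the Java sketch below generates the kind of content one dictionary file holds: every five-letter a-z combination beginning with the file's letter, which works out to 26^4 = 456,976 entries per file (hence the long run time). This is only an illustration; the actual files are produced by generate-dictionaries.sh.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

/** Illustration only: writes every five-letter a-z combination starting with 'a'. */
public class DictionarySketch {
    public static void main(String[] args) throws IOException {
        char first = 'a';
        try (PrintWriter out = new PrintWriter(new FileWriter("dictionary-" + first + ".txt"))) {
            for (char c2 = 'a'; c2 <= 'z'; c2++)
                for (char c3 = 'a'; c3 <= 'z'; c3++)
                    for (char c4 = 'a'; c4 <= 'z'; c4++)
                        for (char c5 = 'a'; c5 <= 'z'; c5++)
                            out.println("" + first + c2 + c3 + c4 + c5);
        }
    }
}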

Insert Load Test

The insert load test is run via a python script with the following usage:

./insert-load-test.py <letter> <iterations> <interval> <host> 2>&1 &

Where <letter> selects the dictionary file to iterate over, <iterations> is the number of iterations to run, <interval> is the wait time between requests, and <host> is the Metacat server to test against.

The insert test will iterate through the dictionary for the letter you have specified. For each word, it will create a doc ID of the form:

<word><epoch_date>.<word_index>.1

For instance, if the test started at epoch date 123914076495 and the 67th word in the dictionary file (for letter c) is caacp, your doc ID will look like:

caacp123914076495.67.1

This doc ID is substituted for each occurrence of @!docid!@ in the insert template at:

insert.xml.tmpl

Each doc will then be inserted into Metacat using the metacat.py interface file.
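
As a rough illustration of the naming and substitution scheme described above, the Java sketch below builds a doc ID for each dictionary word and drops it into the template. The real test is the Python script insert-load-test.py; only the file names and the @!docid!@ placeholder come from this section, and everything else (including how the epoch value is obtained) is an assumption.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

/** Illustration of how each insert document is derived from a dictionary word. */
public class InsertDocSketch {
    public static void main(String[] args) throws IOException {
        long epoch = System.currentTimeMillis();  // stand-in for the script's epoch value
        List<String> words = Files.readAllLines(Paths.get("dictionary-c.txt"), StandardCharsets.UTF_8);
        String template = new String(Files.readAllBytes(Paths.get("insert.xml.tmpl")), StandardCharsets.UTF_8);

        for (int i = 0; i < words.size(); i++) {
            // <word><epoch_date>.<word_index>.1, e.g. caacp123914076495.67.1
            String docid = words.get(i) + epoch + "." + (i + 1) + ".1";
            String doc = template.replace("@!docid!@", docid);
            // the real test would now insert 'doc' into Metacat via the metacat.py interface
            System.out.println("would insert " + docid + " (" + doc.length() + " chars)");
        }
    }
}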

Output will be written to a file named:

insert-<letter>.out

Note that you can run several of the insert tests at the same time. You should run each against a different letter to avoid doc ID naming conflicts and to be able to view the output from each test in a different output file. See the load-test-driver.sh for some examples.

SQuery Load Test

The squery load test is run via a python script with the following usage:

./squery-load-test.py <letter> <iterations> <interval> <host> 2>&1 &

Where the parameters have the same meanings as for the insert load test.

The squery test will iterate through the dictionary for the letter you have specified. For each word, it will create a query by substituting the dictionary word for every instance of @!search-word!@ in the squery template at:

squery.xml.tmpl

Each of these queries will be run against Metacat using the metacat.py interface file. Changing the query for each word ensures that we do not get cached query results back from Metacat; cached results would not generate a significant load.

Output will be written to a file named:

squery-<letter>.out

Note that you can run several of the squery tests at the same time. You should run each against a different letter to avoid doc ID naming conflicts and to be able to view the output from each test in a different output file. See the load-test-driver.sh for some examples. If you are going to run a test against the same letter more than once, you will need to restart the instance of Metacat being tested to clear the query cache on that system.

Read Load Test

The read load test is run via a python script with the following usage:

./read-load-test.py <letter> <iterations> <interval> <host> 2>&1 &

Where the parameters have the same meanings as for the insert load test.

The read test will create a doc ID that looks like:

readtest-<letter><epoch_date>.1.1

It will do a single insert using that doc ID and the template at:

insert.xml.tmpl

It will then read that document from Metacat, using the metacat.py interface file, for the number of iterations you have specified.

Output will be written to a file named:

read-<letter>.out

Note that you can run several of the read tests at the same time. You should run each against a different letter to avoid doc ID naming conflicts and to be able to view the output from each test in a different output file. See the load-test-driver.sh for some examples.

Test Driver Script

There is a very simple driver script that makes it easy to run multiple instances and combinations of the different load tests:

./load-test-driver.sh

Uncomment the tests you want to run.

