
A checker is an external script called by the main testsuite program to verify the correctness of a specific test. More than one checker can be applied to an individual test: for example, the user might want to check whether the run was successful, whether output was produced, and whether the results are within specified thresholds. Checkers can be written in any scripting language (bash, python, ...). Runtime variables are communicated between the testsuite and a checker via environment variables; a checker can access the following set of environment variables defined by the testsuite:
- Root directory of the testsuite.
- Verbosity level requested by the user for running the testsuite.
- Directory where the current test was run.
- File which contains the standard output and standard error of the current test.
- Directory containing the namelist of the current test.
- Directory containing the reference files for the current test.
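As a rough sketch of how a checker picks up these values and reports back, the script below reads hypothetical environment variables (the names TS_RUNDIR, TS_LOGFILE and TS_VERBOSE are illustrative placeholders, not necessarily the names exported by the testsuite) and communicates its result as an exit code:

```python
import os
import sys

def run_checker():
    # NOTE: the variable names below are placeholders -- substitute the
    # names actually exported by your testsuite.
    rundir = os.environ.get("TS_RUNDIR", ".")    # directory where the test ran
    logfile = os.environ.get("TS_LOGFILE", "")   # combined stdout/stderr of the test
    verbose = int(os.environ.get("TS_VERBOSE", "0"))

    if verbose:
        print(f"checking run in {rundir} (log: {logfile})")

    # ... perform the actual check here ...

    # The exit code communicates the result back to the testsuite.
    return 0

if __name__ == "__main__":
    sys.exit(run_checker())
```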

Each checker should return an exit code with the following definition:
- Results match bit-by-bit as compared to the reference.
- Results match the reference within defined thresholds.
- Test not applicable and thus skipped.
- Test failed, results are outside of thresholds.
- Test failed due to model crash.

The checkers to be called for a given test are defined in the testlist.xml file. All checkers are called irrespective of the exit codes of the preceding checkers. The overall status of a test is defined as the maximum of the exit codes of all checkers called.
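Since larger exit codes represent worse outcomes, this aggregation rule reduces to taking a maximum over the checker exit codes; a minimal sketch:

```python
def overall_status(exit_codes):
    """Aggregate checker exit codes: the worst (largest) code wins.

    For example, if one checker reports a bit-by-bit match (the lowest
    code) and another reports a match only within thresholds (a higher
    code), the overall test status is the higher of the two.
    """
    return max(exit_codes)
```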
The run success checker
The run success checker verifies that the test executed successfully: it searches the standard output of the simulation for the typical CLEAN UP message issued at the end of a successful model simulation.
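The core of such a checker can be sketched as below; the exit code values MATCH and FAIL are hypothetical placeholders for whichever codes the testsuite actually defines:

```python
# Hypothetical exit codes -- the actual values are defined by the
# testsuite, not by this sketch.
MATCH, FAIL = 0, 20

def check_run_success(logfile):
    """Return MATCH if the model log contains the CLEAN UP message, else FAIL."""
    try:
        with open(logfile, errors="replace") as f:
            return MATCH if "CLEAN UP" in f.read() else FAIL
    except OSError:
        # Missing or unreadable log counts as a failed run.
        return FAIL
```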
The netcdf output checker
This checker verifies that the simulation has correctly written at least one netCDF output file and that this file is of non-zero size.
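A minimal sketch of this check, assuming the output files carry the conventional .nc extension and live in the run directory of the test:

```python
import glob
import os

def check_netcdf_output(rundir):
    """True if at least one non-empty netCDF (.nc) file exists in rundir."""
    for path in glob.glob(os.path.join(rundir, "*.nc")):
        if os.path.getsize(path) > 0:
            return True
    return False
```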
The identical checker
When bit-identical results are expected, for example when comparing two different parallelizations on the same system, this checker should be used. It expects a YUPRTEST output file; the absolute differences between the output of the test and the reference file are compared against a zero tolerance. Note that bit-identical results can only be expected when the two simulations are run on the same system. A test using the identical checker should therefore always have its reference directory pointing to a previous test in the working folder, e.g. ../test_1.
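The zero-tolerance comparison at the heart of this checker can be sketched as follows; parsing of the YUPRTEST format is omitted, and the helper names below are illustrative, not part of the testsuite:

```python
def max_abs_difference(test_values, ref_values):
    """Maximum absolute difference between two series of model values."""
    return max(abs(t - r) for t, r in zip(test_values, ref_values))

def identical(test_values, ref_values, tolerance=0.0):
    """Bit-identical results must agree to within a zero tolerance."""
    return max_abs_difference(test_values, ref_values) <= tolerance
```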
The SAMOA checker
Runs SAMOA on the output files.
The restart checker
Performs a full 0–24 h run and a 0–12 h run followed by a restart and a 12–24 h continuation, and checks whether the results of both are identical.
The decomposition checker
Checks whether the results obtained with two different domain decompositions are identical. An additional simulation is performed and compared to the reference run.