================================
A Subversion Testing Framework
================================


The three goals of Subversion's automated test-suite:

   1. It must be easy to run.
   2. It must be easy to understand the results.
   3. It must be easy to add new tests.


Definition of an SVN "test program"
-----------------------------------

A Subversion test program is any executable that contains a number of
sub-tests it can run. It has a standard interface:

   1. If run with a numeric argument N, the program runs sub-test N.

   2. If run with the argument `list', it will list the names of all
      sub-tests.

   3. If run with no arguments, the program runs *all* sub-tests.

   4. The program returns either 0 (success) or 1 (if any sub-test
      failed).

   5. Upon finishing a test, the program reports the results in a format
      which is both machine-readable (for the benefit of automatic
      regression tracking scripts), and human-readable (for the sake of
      painstaking grovelling by hand in the dead of night):

         (PASS | FAIL): (argv[0]) (argv[1]): (description)

      For example:

         [sussman@newton:~] ./frobtest 2
         PASS: frobtest 2: frobnicating fragile data

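As a further illustration (a hypothetical transcript, not output from
any real test program), running the same program with no arguments
runs every sub-test, reports each one in the same style, and exits
non-zero if any of them failed:

   [sussman@newton:~] ./frobtest
   PASS: frobtest 1: checking frob initialization
   PASS: frobtest 2: frobnicating fragile data
   FAIL: frobtest 3: defrobnicating corrupted data
   [sussman@newton:~] echo $?
   1
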
Note that no particular programming language is required to write a
set of tests; they just need to export this user interface.


How to write new C tests
------------------------

The C test framework tests library APIs, both internal and external.

All test programs use a standard `main' function. You write .c files
that contain only test functions --- you should not define your own
`main' function.

Instead, your code should define an externally visible array
`test_funcs', like this:

   struct svn_test_descriptor_t test_funcs[] =
   {
     SVN_TEST_NULL,
     SVN_TEST_PASS(test_a),
     SVN_TEST_PASS(test_b),
     SVN_TEST_PASS(test_c),
     SVN_TEST_NULL
   };

In this example, `test_a', `test_b', and `test_c' are the names of
test functions. The first and last elements of the array must be
SVN_TEST_NULL. The first SVN_TEST_NULL is there to leave room for
Buddha. The standard `main' function searches for the final
SVN_TEST_NULL to determine the size of the array.

Instead of SVN_TEST_PASS, you can use SVN_TEST_XFAIL to declare that a
test is expected to fail. The status of such tests is then no longer
marked as PASS or FAIL, but rather as XFAIL (eXpected FAILure) or
XPASS (uneXpected PASS).

The purpose of XFAIL tests is to confirm that a known bug still
exists. When you see such a test uneXpectedly PASS, you've probably
fixed the bug it tests for, even if that wasn't your intention. :-)

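For instance (a hypothetical sketch, reusing the array above; `test_c'
is just a placeholder name), marking one entry as an expected failure
looks like this:

   struct svn_test_descriptor_t test_funcs[] =
   {
     SVN_TEST_NULL,
     SVN_TEST_PASS(test_a),
     SVN_TEST_PASS(test_b),
     SVN_TEST_XFAIL(test_c),   /* test_c exercises a known, unfixed bug */
     SVN_TEST_NULL
   };
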
Each test function conforms to the svn_test_driver_t prototype:

   svn_error_t *f (const char **MSG,
                   svn_boolean_t MSG_ONLY);

When called, a test function should first set *MSG to a brief (as in,
half-line) description of the test. Then, if MSG_ONLY is TRUE, the
test should immediately return SVN_NO_ERROR. Else it should perform a
test. If the test passes, the function should return SVN_NO_ERROR;
otherwise, it should return an error object, built using the functions
declared in svn_error.h (such as svn_error_create).

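For illustration, here is a minimal sketch of what such a test
function might look like, assuming the two-argument prototype shown
above (the function name, message text, and work performed are all
made up for this example):

   /* A hypothetical test function; the name, message, and the check
      it performs are examples only. */
   static svn_error_t *
   test_frobnicate (const char **msg, svn_boolean_t msg_only)
   {
     *msg = "frobnicating fragile data";

     if (msg_only)
       return SVN_NO_ERROR;

     /* ... exercise the API under test here; on failure, return an
        error object built with the functions in svn_error.h ... */

     return SVN_NO_ERROR;
   }
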
Once you've got a .c file with a bunch of tests and a `test_funcs'
array, you should link it against the `libsvn_tests_main.la' libtool
library, in this directory, `subversion/tests'. That library provides
a `main' function which will check the command-line arguments, pick
the appropriate tests to run from your `test_funcs' array, and print
the results in the standard way.


How to write new Python tests
-----------------------------

The python test framework exercises the command-line client as a
user would.

To write python tests, please look at the README file inside the
clients/cmdline/ subdirectory.


When to write new tests
-----------------------

In the world of CVS development, people have noticed that the same
bugs tend to recur over and over. Thus the CVS community has adopted
a hard-and-fast rule that whenever somebody fixes a bug, a *new* test
is added to the suite to specifically check for it. It's a common
case that in the process of fixing a bug, several old bugs are
accidentally resurrected... and then quickly revealed by the test
suite.

This same rule applies to Subversion development: ** If you fix a
bug, write a test for it. **

(However, we should note that this rule is somewhat relaxed until
Subversion hits 1.0. A majority of pre-1.0 bugs are due to the code
being in the "initial growth" stage.)


Regression tests are for testing interface promises. This might
include semi-private interfaces (such as the non-public .h files
inside module subdirs), but does not include implementation details
behind the interfaces. For example, this is a good way to test
svn_fs_txn_name:

   /* Test that svn_fs_txn_name fulfills its promise. */
   char *txn_name = NULL;
   SVN_ERR (svn_fs_txn_name (&txn_name, txn, pool));
   if (txn_name == NULL)
     /* ... return an error object describing the failure ... */

But this is not, because it tests an implementation detail rather than
the interface's promise:

   /* Test that the txn got id "0", since it's the first txn. */
   char *txn_name = NULL;
   SVN_ERR (svn_fs_txn_name (&txn_name, txn, pool));
   if (txn_name && (strcmp (txn_name, "0") != 0))
     /* ... return an error object describing the failure ... */

During development, it may sometimes be very convenient to
*temporarily* test implementation details via the regular test suite.
It's okay to do that, but please remove the test when you're done and
make sure it's clearly marked in the meantime. Since implementation
details are not interface promises, they might legitimately change --
and when they change, that test will break. At which point whoever
encountered the problem will look into the test suite and find the
temporary test you forgot to remove. As long as it's marked like
this:

   /* Temporary test for debugging only: Test that the txn got id
    * "0", since it's the first txn.
    * NOTE: If the test suite is failing because of this test, then
    * just remove the test. It was written to help me debug an
    * implementation detail that might have changed by now, so its
    * failure does not necessarily mean there's anything wrong with
    * the underlying code. */
   char *txn_name = NULL;
   SVN_ERR (svn_fs_txn_name (&txn_name, txn, pool));
   if (txn_name && (strcmp (txn_name, "0") != 0))
     /* ... return an error object describing the failure ... */

...then they won't have wasted much time.


What's in this directory
------------------------

[shared library "libsvn_tests_main"]
   A standardized main() function to drive tests. Link this into
   your automated test-programs.

[shared library "libsvn_tests_editor"]
   An editor for testing drivers of svn_delta_edit_fns_t. This
   editor's functions simply print information to stdout.

A subdirectory containing various <delta-pkg> XML files. If one
of these testing trees isn't what you need, create a new one and
put it with the others.

A collection of python scripts to test the command-line client.


The file `build.conf' (at the top level of the tree) defines a
[test-scripts] section. This is a list of scripts that will be run
whenever someone types `make check`.

Each script is expected to output sub-test information as described in
the first section of this document; the `make check` rule scans for
FAIL codes, and logs all the sub-test output into a top-level log
file.

If you write a new C executable that contains subtests, be sure to add
a build "target" under the TESTING TARGETS section of build.conf.

If you write a new python-script, be sure to add it to the
[test-scripts] section of build.conf.


Please see subversion/tests/clients/cmdline/README for how to run the
command-line client test suite against a remote repository.


In summary, here is how the framework meets the three goals stated at
the beginning of this document:

   1. ...must be easy to run.

      * just type `make check`, or
      * run any single test program by hand, with no arguments

   2. ...must be easy to understand the results.

      * test programs output standardized messages
      * all messages are logged
      * `make check` only displays errors (not successes!)

   3. ...must be easy to add new tests.

      * add your own sub-test to an existing test program, or
      * add a new test program using template C or python code.