subunit: A streaming protocol for test results
Copyright (C) 2005-2009 Robert Collins <robertc@robertcollins.net>

Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
licence, at the user's choice. A copy of both licences is available in the
project source as Apache-2.0 and BSD. You may not use this file except in
compliance with one of these two licences.

Unless required by applicable law or agreed to in writing, software
distributed under these licences is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
licence you chose for the specific language governing permissions and
limitations under that licence.

See the COPYING file for full details on the licensing of Subunit.

subunit reuses iso8601 by Michael Twomey, distributed under an MIT-style
licence - see python/iso8601/LICENSE for details.

Subunit
-------

Subunit is a streaming protocol for test results. The protocol is human
readable and easily generated and parsed. By design, all the components of
the protocol conceptually fit into the xUnit TestCase -> TestResult
interaction.

Subunit comes with command-line filters to process a subunit stream and
language bindings for Python, C, C++ and shell. Bindings are easy to write
for other languages.

A number of useful things can be done easily with subunit:

 * Test aggregation: Tests run separately can be combined and then
   reported/displayed together. For instance, tests from different languages
   can be shown as a seamless whole.
 * Test archiving: A test run may be recorded and replayed later.
 * Test isolation: Tests that may crash or otherwise interact badly with each
   other can be run separately and then aggregated, rather than interfering
   with each other.
 * Grid testing: subunit can act as the necessary serialisation and
   deserialisation to get test runs on distributed machines reported in
   real time.

Subunit supplies the following filters:

 * tap2subunit - convert Perl's Test Anything Protocol to subunit.
 * subunit2pyunit - convert a subunit stream to pyunit test results.
 * subunit2gtk - show a subunit stream in GTK.
 * subunit2junitxml - convert a subunit stream to JUnit's XML format.
 * subunit-diff - compare two subunit streams.
 * subunit-filter - filter out tests from a subunit stream.
 * subunit-ls - list info about tests present in a subunit stream.
 * subunit-stats - generate a summary of a subunit stream.
 * subunit-tags - add or remove tags from a stream.

Integration with other tools
----------------------------

Subunit's language bindings act as integration with various test runners like
'check', 'cppunit' and Python's 'unittest'. Beyond that, a small amount of
glue (typically a few lines) will allow Subunit to be used in more
sophisticated ways.

Python
======

Subunit has excellent Python support: most of the filters and tools are
written in Python, and there are facilities for using Subunit to increase
test isolation seamlessly within a test suite.

One simple way to run an existing Python test suite and have it output
subunit is the module ``subunit.run``::

  $ python -m subunit.run mypackage.tests.test_suite

For more information on the Python support Subunit offers, please see
``pydoc subunit``, or the source in ``python/subunit/__init__.py``.

C
=

Subunit has C bindings to emit the protocol, and comes with a patch for
'check' which has been nominally accepted by the 'check' developers. See
'c/README' for more details.

C++
===

The C library is includable and usable directly from C++. A TestListener for
CPPUnit is included in the Subunit distribution. See 'c++/README' for
details.

shell
=====

Similar to C, the shell bindings consist of simple functions to output
protocol elements, and a patch for adding subunit output to the 'ShUnit'
shell test runner. See 'shell/README' for details.

Filter recipes
--------------

To ignore some failing tests whose root cause is already known::

  subunit-filter --without 'AttributeError.*flavor'

The protocol
------------

Sample subunit wire contents
----------------------------

The following::

  test: test foo works
  success: test foo works.
  test: tar a file.
  failure: tar a file. [
  ..
   ].. space is eaten.
  foo.c:34 WARNING foo is not defined.
  ]
  a writeln to stdout

When run through subunit2pyunit::

  .F
  a writeln to stdout
  ========================
  FAILURE: tar a file.
  -------------------
  ..
  ].. space is eaten.
  foo.c:34 WARNING foo is not defined.

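The transformation shown above can be sketched in plain Python. The following
minimal parser is illustrative only (``parse_subunit`` is a hypothetical name,
not part of the subunit library); it handles the directives and bracketed
details used in the sample, including the leading-space protection for a
literal ']':

```python
import re

def parse_subunit(lines):
    """Parse a subunit v1 stream into (label, outcome, details) tuples,
    plus a list of lines passed through unrecognised."""
    results, passthrough = [], []
    details_for = None  # (label, outcome) while inside a [ ... ] block
    buf = []
    for line in lines:
        if details_for is not None:
            if line == ']':
                label, outcome = details_for
                results.append((label, outcome, '\n'.join(buf)))
                details_for, buf = None, []
            else:
                # A leading space protects a literal ']' and is eaten.
                buf.append(line[1:] if line.startswith(' ]') else line)
            continue
        m = re.match(
            r'(test|testing|success|successful|failure|error|skip|xfail)'
            r':? (.*)', line)
        if not m:
            passthrough.append(line)  # forward unexpected lines unaltered
            continue
        keyword, rest = m.groups()
        if keyword in ('test', 'testing'):
            continue  # test start; the outcome directive comes later
        if rest.endswith(' ['):
            details_for = (rest[:-2], keyword)  # BRACKETED details follow
        else:
            results.append((rest, keyword, ''))
    return results, passthrough

sample = """\
test: test foo works
success: test foo works.
test: tar a file.
failure: tar a file. [
..
 ].. space is eaten.
foo.c:34 WARNING foo is not defined.
]
a writeln to stdout"""
results, passthrough = parse_subunit(sample.splitlines())
```

Running this over the sample yields two results (one success, one failure
with its details block) and forwards the stray stdout line untouched.
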
Subunit protocol description
============================

This description is being ported to an EBNF style. Currently it is only
partly in that style, but it should be fairly clear all the same. When in
doubt, refer to the source (and ideally help fix up the description!).
Generally the protocol is line-orientated and consists of either directives
and their parameters, or, when outside a DETAILS region, unexpected lines
which are not interpreted by the parser - they should be forwarded unaltered.

144 |
test|testing|test:|testing: test label |
|
145 |
success|success:|successful|successful: test label |
|
146 |
success|success:|successful|successful: test label DETAILS |
|
147 |
failure: test label |
|
148 |
failure: test label DETAILS |
|
149 |
error: test label |
|
150 |
error: test label DETAILS |
|
151 |
skip[:] test label |
|
152 |
skip[:] test label DETAILS |
|
153 |
xfail[:] test label |
|
154 |
xfail[:] test label DETAILS |
|
155 |
progress: [+|-]X |
|
156 |
progress: push |
|
157 |
progress: pop |
|
158 |
tags: [-]TAG ... |
|
159 |
time: YYYY-MM-DD HH:MM:SSZ |
|
160 |
||
161 |
DETAILS ::= BRACKETED | MULTIPART |
|
162 |
BRACKETED ::= '[' CR UTF8-lines ']' CR |
|
163 |
MULTIPART ::= '[ multipart' CR PART* ']' CR |
|
164 |
PART ::= PART_TYPE CR NAME CR PART_BYTES CR |
|
165 |
PART_TYPE ::= Content-Type: type/sub-type(;parameter=value,parameter=value) |
|
166 |
PART_BYTES ::= (DIGITS CR LF BYTE{DIGITS})* '0' CR LF |
|
167 |
||
168 |
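The generator's side of BRACKETED details can be sketched in a few lines.
This is an illustrative stdlib-only emitter (the ``emit_result`` helper is
hypothetical, not the subunit Python API); note how a details line beginning
with ']' is protected with a leading space, which parsers eat back out:

```python
import io

def emit_result(out, outcome, label, details=None):
    """Write one result directive, with optional BRACKETED details."""
    if details is None:
        out.write('%s: %s\n' % (outcome, label))
        return
    out.write('%s: %s [\n' % (outcome, label))
    for line in details.splitlines():
        if line.startswith(']'):
            line = ' ' + line  # protect a literal ']' at line start
        out.write(line + '\n')
    out.write(']\n')

buf = io.StringIO()
buf.write('test: tar a file.\n')
emit_result(buf, 'failure', 'tar a file.',
            'foo.c:34 WARNING foo is not defined.')
```
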
unexpected output on stdout -> stdout.
exit w/o the last test completing -> error

Tags given outside a test are applied to all following tests.
Tags given after a test: line and before the result line for the same test
apply only to that test, and inherit the current global tags.
A '-' before a tag is used to remove tags - e.g. to prevent a global tag
applying to a single test, or to cancel a global tag.

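These rules amount to a small set computation. A sketch (the
``apply_tag_line`` helper is illustrative, not part of the subunit API):

```python
def apply_tag_line(current, tag_line):
    """Apply a 'tags:' directive to a set of tags: bare words add,
    '-'-prefixed words remove (e.g. cancelling a global tag)."""
    tags = set(current)
    for word in tag_line.split()[1:]:  # skip the 'tags:' keyword
        if word.startswith('-'):
            tags.discard(word[1:])
        else:
            tags.add(word)
    return tags

# Tags outside a test become the global set for following tests:
global_tags = apply_tag_line(set(), 'tags: slow requires-network')
# Tags inside a test inherit the globals, and '-' can cancel one:
test_tags = apply_tag_line(global_tags, 'tags: -slow windows')
```
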
The progress directive is used to provide progress information about a
stream, so that stream consumers can provide completion estimates, progress
bars and so on. Stream generators that know how many tests will be present
in the stream should output "progress: COUNT". Stream filters that add tests
should output "progress: +COUNT", and those that remove tests should output
"progress: -COUNT". An absolute count should reset the progress indicators
in use - it indicates that two separate streams from different generators
have been trivially concatenated together, and there is no knowledge of how
many more complete streams are incoming. Smart concatenation could scan each
stream for its count and sum them, or alternatively translate absolute
counts into relative counts inline. It is recommended that outputters avoid
absolute counts unless necessary. The push and pop directives are used to
provide local regions for progress reporting. This fits with hierarchically
operating test environments - such as those that organise tests into suites
- where the top-most runner can report on the number of suites, and each
suite surrounds its output with a (push, pop) pair. Interpreters should
interpret a pop as also advancing the progress of the restored level by one
step. Encountering progress directives between the start and end of a test
pair indicates that a previous test was interrupted and did not cleanly
terminate: it should be implicitly closed with an error (the same as when a
stream ends with no closing test directive for the most recently started
test).

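A consumer's bookkeeping for these directives might look like the following
sketch (``ProgressTracker`` is an assumed name; it tracks only the expected
count per push/pop level, while a real consumer would also track completions
and step the restored level on pop):

```python
class ProgressTracker:
    """Track expected-test counts per nesting level of progress: push/pop."""
    def __init__(self):
        self.stack = [0]  # one expected-count per region, outermost first

    def on_progress(self, arg):
        if arg == 'push':
            self.stack.append(0)        # enter a nested region
        elif arg == 'pop':
            self.stack.pop()            # leave it; outer count resumes
        elif arg.startswith(('+', '-')):
            self.stack[-1] += int(arg)  # relative adjustment from a filter
        else:
            self.stack[-1] = int(arg)   # absolute count resets this level

tracker = ProgressTracker()
for directive in ('10', '+2', 'push', '3', 'pop', '-1'):
    tracker.on_progress(directive)
# The nested region expected 3 tests; the outer region now expects 11.
```
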
The time directive acts as a clock event - it sets the time for all future
events. The value should be a valid ISO 8601 time.

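For the YYYY-MM-DD HH:MM:SSZ shape shown in the grammar above, parsing the
directive is a single strptime call. This is only a sketch: real streams may
use other valid ISO 8601 forms, which the subunit Python code handles via the
bundled iso8601 module.

```python
from datetime import datetime, timezone

def parse_time_directive(line):
    """Parse a 'time:' directive in the UTC 'Z'-suffixed form only."""
    stamp = line.split(':', 1)[1].strip()  # drop the 'time:' keyword
    return datetime.strptime(stamp, '%Y-%m-%d %H:%M:%SZ').replace(
        tzinfo=timezone.utc)

clock = parse_time_directive('time: 2009-10-05 14:02:01Z')
```
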
The skip result is used to indicate a test that was found by the runner but
not fully executed due to some policy or dependency issue. This is
represented in Python using the addSkip interface that testtools
(https://edge.launchpad.net/testtools) defines. When communicating with a
test result that is not skip-aware, the test is reported as an error.

The xfail result is used to indicate a test that was expected to fail,
failing in the expected manner. As this is a normal condition for such
tests, it is represented as a successful test in Python.

In future, skip and xfail results will be represented semantically in
Python, but some discussion is underway on the right way to do this.