Unit Testing for IRM
======================
Unit testing is the process of writing lots of little pieces of test code,
each of which tests some part of the functionality of a software system. By
having a comprehensive set of test cases which are satisfied by the
production code, we can have real confidence in our software, instead of
just "hoping" it works.
Prerequisites
---------------
The test suite makes a few assumptions about your environment. If these
conditions aren't met, the chances are that the test suite won't report
accurately.
* A MySQL server listening on localhost, with a database called 'test' and
  a user called 'test' (no password) with full rights to the 'test'
  database.
* An HTTP server listening on localhost. Obviously, the webserver must
  be configured to run PHP scripts...
If you're running the tests from the command line, the webserver must be
set up to serve requests for /_irmtest/ to the directory containing your
IRM codebase. There is also an experimental HTML-based test runner, which
operates through a web browser. It's much prettier, but I don't guarantee
that it'll run all the tests quite properly.
* To run the tests in their text mode, you must have a command-line PHP
interpreter.
If you're interested in enhancing the test suite by autodetecting the
presence of these resources and only running tests for which the necessary
resources are available, please do!
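As a starting point, a check along these lines could verify the database
prerequisite before the suite runs. This is a hypothetical sketch, not
part of the IRM distribution; it uses the old mysql_* API to match the
era of this codebase:

    <?php
    // Hypothetical prerequisite check: verify that the 'test' database
    // is reachable as described above before letting the suite run.
    $link = @mysql_connect('localhost', 'test', '');  // user 'test', no password
    if (!$link || !@mysql_select_db('test', $link)) {
        die("MySQL prerequisite not met: " . mysql_error() . "\n");
    }
    echo "MySQL 'test' database reachable.\n";
    ?>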
What is a unit test?
----------------------
The general structure of all unit tests is as follows:
* Set up the pre-conditions (the state of the system prior to the execution
  of the code under test)
* Run the code to be tested, giving it all applicable arguments
* Validate that the code ran correctly by examining the state of the system
  after the code under test has run, and ensuring that it has done
  what it should have done.
The pre-conditions are normally either set at the beginning of the test
method, or in a general method called setUp() (see below for the structure
of test code).
Running the code itself normally means calling the function to be tested,
constructing an object, or, for Web applications, making one or more
HTTP requests to the webpage of interest.
Validating the post-run state of the system is done by examining the
database, system files, and script output, and using various assert methods
to communicate the success or failure of various tests to the user.
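Put concretely, a test with this structure might look like the sketch
below. The WidgetCounter class and the 'widgets' table are invented for
the example, and the UnitTestCase/assertEqual names assume a
SimpleTest-style framework (which the addTestFile calls in alltests
suggest); check the existing tests in testing/ for the exact API.

    <?php
    class WidgetCounterTest extends UnitTestCase {
        function setUp() {
            // Pre-conditions: start each test from a known database state.
            $link = mysql_connect('localhost', 'test', '');
            mysql_select_db('test', $link);
            mysql_query('DELETE FROM widgets');
        }

        function testCountIsZeroOnEmptyTable() {
            // Run the code under test...
            $counter = new WidgetCounter();
            // ...then validate the post-run state of the system.
            $this->assertEqual($counter->count(), 0);
        }
    }
    ?>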
So how do I write a Unit Test?
-------------------------------
Put together a snippet of code which sets up the state of the application,
runs the code to be tested, and checks the result. This snippet of code
should be placed in a test case, in a method named test[Something](). Each
test method takes no arguments and returns nothing.
Have a look at the existing test code in the testing/ directory of the
IRM source distribution for examples of how to write test code.
When you create a new test class, you need to tell the test runner that it
should run the new test code. Simply add a new line to alltests like the
existing addTestFile calls, as shown below.
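Assuming the group test object in alltests is called $test (check the
existing addTestFile calls for the actual variable name), registering a
hypothetical new test file looks like this:

    $test->addTestFile('WidgetCounterTest.php');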
What should I test?
---------------------
The naive answer would be "everything". However, that is impractical.
There are just too many things that could be tested for a policy of "test
everything" to allow any actual code to be written.
It helps to think of tests as pulls of the handle on a poker machine. Each
pull costs you something (some time to write it). You "win" when a test
that you expected to pass fails, or when a test you expected to fail
actually passes. You lose when a test gives you no additional useful
feedback. You want to maximise your winnings.
So, write tests that demonstrate something new and different about the
operation of the system. Before you write any production code, write a test
which defines how you want the system to behave. Run the test suite and
verify that the new test fails. Now modify the production code just enough
so the test passes. If you want the system to do something that
can't be expressed in one test, write multiple tests, each one interspersed
with some production development to satisfy *just* *that* new test. This
is, in my experience, the best way to ensure that you have good test
coverage, whilst minimising the production of tests which add no value to
the system.
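As a hypothetical illustration of the cycle (the Widget class is invented
for the example): suppose you want widget codes to be normalised to upper
case. First, write the failing test:

    function testCodeIsNormalisedToUpperCase() {
        $widget = new Widget('abc');
        $this->assertEqual($widget->getCode(), 'ABC');
    }

Run the suite and watch the new test fail, then add just enough code to
Widget::getCode() (a strtoupper() call, say) to make it pass, and stop
there.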
How do I retro-fit unit testing onto an existing codebase?
------------------------------------------------------------
IRM already has a significant amount of code written, which would take
hundreds of tests to cover completely. A lot of the code is also
structured in a way that makes it hard to test. There is little point in
going back and writing tests for all of this functionality. It appears to
work well enough, so writing a complete test suite is really a waste of
resources.
However, from now on, every time you want to make some modification (whether
it be a refactoring, a bug fix, or a feature addition), write one or more
test cases which demonstrate your desired result:
Refactoring: Write tests surrounding the functionality you intend to
    refactor. Show that the test cases accurately represent the desired
    functionality of the system by ensuring they all run properly. Then
    perform the refactoring, ensuring you haven't broken anything by
    making sure the tests all still run properly.
Bug fix: Write one or more tests which show the bug in action -- in other
    words, tests that hit the bug and produce erroneous results (see the
    sketch after this list). Then modify the system so that the tests
    pass. You can be confident that you've fixed the bug, because you
    have concrete feedback from the tests that the bug no longer exists.
Feature addition: Put together some tests which invoke the feature you want
    to add, and test that the feature is working as it should.
    Naturally, these tests will fail at first, because the feature
    doesn't exist. But you then modify the production code to make the
    feature work, and you stop when your tests all pass.
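For the bug-fix case, a regression test might look like this sketch.
Suppose deleting a widget left orphaned rows behind (the Widget class and
the widget_parts table are invented for the example):

    function testDeleteRemovesAssociatedParts() {
        $widget = new Widget('abc');
        $widget->delete();
        $result = mysql_query(
            "SELECT COUNT(*) FROM widget_parts WHERE widget_code = 'abc'");
        $row = mysql_fetch_row($result);
        $this->assertEqual($row[0], 0);   // fails while the bug is present
    }

The test fails while the bug exists and passes once the fix is in, giving
you the concrete feedback described above.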
Over time, as old code gets modified, the percentage of code covered by the
tests will increase, and eventually we will have a comprehensively tested
piece of software.
During modifications, if you manage to break something accidentally, write a
test to show the breakage and fix it from there. If you broke it once,
there's a good chance it'll break again when someone else modifies it, and
there should be a test to immediately warn the programmer that they've
broken something.
How do I run the unit tests?
------------------------------
The primary script that executes all of the unit tests is the 'alltests'
script in the testing/ directory of the distribution. To make it easy to
run on all platforms, there are wrapper scripts ('run' for Unix, 'run.bat'
for Windows) which execute all of the tests.
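If you'd rather invoke the runner directly, something like the following
should work from the testing/ directory (this assumes the script file is
alltests.php, matching the web URL below; check the run script for the
exact invocation):

    php alltests.php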
To use the web test version, copy the ini files from testing/data to
config/ and edit the username and password to match your SQL installation.
You will also need to match the password in ConfigTest.php. Then point
your browser at http://yourhost/path-to-irm/testing/alltests.php