The regression tests are a comprehensive set of tests for the SQL
implementation in PostgreSQL. They test standard SQL operations as well as the
extended capabilities of PostgreSQL. From PostgreSQL 6.1 onward, the regression
tests are current for every official release.

-------------------------------------------------------------------------------

Running the Tests

The regression test can be run against an already installed and running server,
or using a temporary installation within the build tree. Furthermore, there is
a "parallel" and a "sequential" mode for running the tests. The sequential
method runs each test script in turn, whereas the parallel method starts up
multiple server processes to run groups of tests in parallel. Parallel testing
gives confidence that interprocess communication and locking are working
correctly. For historical reasons, the sequential test is usually run against
an existing installation and the parallel method against a temporary
installation, but there are no technical reasons for this.

To run the regression tests after building but before installation, type

gmake check

in the top-level directory. (Or you can change to "src/test/regress" and run
the command there.) This will first build several auxiliary files, such as some
sample user-defined trigger functions, and then run the test driver script. At
the end you should see something like

======================
 All tests passed.
======================

or otherwise a note about which tests failed. See the Section called Test
Evaluation below for more information.

Because this test method runs a temporary server, it will not work when you are
the root user (since the server will not start as root). If you already did the
build as root, you do not have to start all over. Instead, make the regression
test directory writable by some other user, log in as that user, and restart
the tests. For example:

root# chmod -R a+w src/test/regress
root# chmod -R a+w contrib/spi
root# su - joeuser
joeuser$ cd top-level build directory
joeuser$ gmake check

(The only possible "security risk" here is that other users might be able to
alter the regression test results behind your back. Use common sense when
managing user permissions.)

Alternatively, run the tests after installation.

The parallel regression test starts quite a few processes under your user ID.
Presently, the maximum concurrency is twenty parallel test scripts, which means
sixty processes: there's a server process, a psql, and usually a shell parent
process for the psql for each test script. So if your system enforces a per-
user limit on the number of processes, make sure this limit is at least
seventy-five or so, else you may get random-seeming failures in the parallel
test. If you are not in a position to raise the limit, you can cut down the
degree of parallelism by setting the MAX_CONNECTIONS parameter. For example,

gmake MAX_CONNECTIONS=10 check

runs no more than ten tests concurrently.
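
If you suspect that the per-user process limit is what is causing failures,
many Bourne-compatible shells let you inspect it, and raise it for the current
session up to the hard limit, with the ulimit command. The option letter and
the value below are only illustrative and vary between systems and shells:

ulimit -u          # show the current limit on user processes
ulimit -u 200      # raise it for this shell session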

On some systems, the default Bourne-compatible shell ("/bin/sh") gets confused
when it has to manage too many child processes in parallel. This may cause the
parallel test run to lock up or fail. In such cases, specify a different
Bourne-compatible shell on the command line, for example:

gmake SHELL=/bin/ksh check

If no non-broken shell is available, you may be able to work around the problem
by limiting the number of connections, as shown above.

To run the tests after installation, initialize a data area and start the
server, then type

gmake installcheck

The tests will expect to contact the server at the local host and the default
port number, unless directed otherwise by PGHOST and PGPORT environment
variables.
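
If the server you want to test is not listening at the default location, you
can point the tests at it through those same environment variables; the host
name and port number below are purely illustrative:

PGHOST=localhost PGPORT=5433 gmake installcheck
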
-------------------------------------------------------------------------------

Test Evaluation

Some properly installed and fully functional PostgreSQL installations can
"fail" some of these regression tests due to platform-specific artifacts such
as varying floating-point representation and time zone support. The tests are
currently evaluated using a simple "diff" comparison against the outputs
generated on a reference system, so the results are sensitive to small system
differences. When a test is reported as "failed", always examine the
differences between expected and actual results; you may well find that the
differences are not significant. Nonetheless, we still strive to maintain
accurate reference files across all supported platforms, so it can be expected
that all tests pass.

The actual outputs of the regression tests are in files in the "src/test/
regress/results" directory. The test script uses "diff" to compare each output
file against the reference outputs stored in the "src/test/regress/expected"
directory. Any differences are saved for your inspection in "src/test/regress/
regression.diffs". (Or you can run "diff" yourself, if you prefer.)

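If you prefer to inspect a single test by hand, you can compare its actual and
reference output directly; the test name here is only an example, and bear in
mind that the driver may have compared against a variant comparison file, as
described in the sections below:

cd src/test/regress
diff results/float8.out expected/float8.out
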
-------------------------------------------------------------------------------

Error message differences

Some of the regression tests involve intentionally invalid input values. Error
messages can come from either the PostgreSQL code or from the host platform
system routines. In the latter case, the messages may vary between platforms,
but should reflect similar information. These differences in messages will
result in a "failed" regression test that can be validated by inspection.

-------------------------------------------------------------------------------

Locale differences

If you run the tests against an already-installed server that was initialized
with a collation-order locale other than C, then there may be differences due
to sort order and follow-up failures. The regression test suite is set up to
handle this problem by providing alternative result files that together are
known to handle a large number of locales. For example, for the char test, the
expected file "char.out" handles the C and POSIX locales, and the file
"char_1.out" handles many other locales. The regression test driver will
automatically pick the best file to match against when checking for success and
for computing failure differences. (This means that the regression tests cannot
detect whether the results are appropriate for the configured locale. The tests
will simply pick the one result file that works best.)

If for some reason the existing expected files do not cover some locale, you
can add a new file. The naming scheme is testname_digit.out. The actual digit
is not significant. Remember that the regression test driver will consider all
such files to be equally valid test results. If the test results are platform-
specific, the technique described in the Section called Platform-specific
comparison files should be used instead.

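For instance, once you have checked by hand that your results really are
correct for your locale, you could register them as an additional accepted
variant along these lines; the test name and the digit chosen here are only
examples:

cd src/test/regress
cp results/char.out expected/char_2.out
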
-------------------------------------------------------------------------------

Date and time differences

A few of the queries in the "horology" test will fail if you run the test on
the day of a daylight-saving time changeover, or the day after one. These
queries expect that the intervals between midnight yesterday, midnight today
and midnight tomorrow are exactly twenty-four hours --- which is wrong if
daylight-saving time went into or out of effect meanwhile.

Note: Because USA daylight-saving time rules are used, this problem
always occurs on the first Sunday of April, the last Sunday of
October, and their following Mondays, regardless of when daylight-
saving time is in effect where you live. Also note that the problem
appears or disappears at midnight Pacific time (UTC-7 or UTC-8), not
midnight your local time. Thus the failure may appear late on
Saturday or persist through much of Tuesday, depending on where you
live.

Most of the date and time results are dependent on the time zone environment.
The reference files are generated for time zone PST8PDT (Berkeley, California),
and there will be apparent failures if the tests are not run with that time
zone setting. The regression test driver sets environment variable PGTZ to
PST8PDT, which normally ensures proper results. However, your operating system
must provide support for the PST8PDT time zone, or the time zone-dependent
tests will fail. To verify that your machine does have this support, type the
following:

env TZ=PST8PDT date

The command above should have returned the current system time in the PST8PDT
time zone. If the PST8PDT time zone is not available, then your system may have
returned the time in UTC. If the PST8PDT time zone is missing, you can set the
time zone rules explicitly:

PGTZ='PST8PDT7,M04.01.0,M10.05.03'; export PGTZ

There appear to be some systems that do not accept the recommended syntax for
explicitly setting the local time zone rules; you may need to use a different
PGTZ setting on such machines.

Some systems using older time-zone libraries fail to apply daylight-saving
corrections to dates before 1970, causing pre-1970 PDT times to be displayed in
PST instead. This will result in localized differences in the test results.

-------------------------------------------------------------------------------

Floating-point differences

Some of the tests involve computing 64-bit floating-point numbers (double
precision) from table columns. Differences in results involving mathematical
functions of double precision columns have been observed. The float8 and
geometry tests are particularly prone to small differences across platforms, or
even with different compiler optimization options. Human eyeball comparison is
needed to determine the real significance of these differences, which usually
appear about 10 places to the right of the decimal point.

Some systems display minus zero as -0, while others just show 0.

Some systems signal errors from pow() and exp() differently from the mechanism
expected by the current PostgreSQL code.

-------------------------------------------------------------------------------

Row ordering differences

You might see differences in which the same rows are output in a different
order than what appears in the expected file. In most cases this is not,
strictly speaking, a bug. Most of the regression test scripts are not so
pedantic as to use an ORDER BY for every single SELECT, and so their result row
orderings are not well-defined according to the letter of the SQL
specification. In practice, since we are looking at the same queries being
executed on the same data by the same software, we usually get the same result
ordering on all platforms, and so the lack of ORDER BY isn't a problem. Some
queries do exhibit cross-platform ordering differences, however. (Ordering
differences can also be triggered by non-C locale settings.)

Therefore, if you see an ordering difference, it's not something to worry
about, unless the query does have an ORDER BY that your result is violating.
But please report it anyway, so that we can add an ORDER BY to that particular
query and thereby eliminate the bogus "failure" in future releases.

You might wonder why we don't order all the regression test queries explicitly
to get rid of this issue once and for all. The reason is that this would make
the regression tests less useful, not more, since they'd tend to exercise query
plan types that produce ordered results to the exclusion of those that don't.

-------------------------------------------------------------------------------

The "random" test

There is at least one case in the random test script that is intended to
produce random results. This causes the random test to fail the regression
check once in a while (perhaps once in every five to ten trials). Typing

diff results/random.out expected/random.out

should produce only one or a few lines of differences. You need not worry
unless the random test always fails in repeated attempts. (On the other hand,
if the random test is *never* reported to fail even in many trials of the
regression tests, you probably *should* worry.)

-------------------------------------------------------------------------------

Platform-specific comparison files

Since some of the tests inherently produce platform-specific results, we have
provided a way to supply platform-specific result comparison files. Frequently,
the same variation applies to multiple platforms; rather than supplying a
separate comparison file for every platform, there is a mapping file that
defines which comparison file to use. So, to eliminate bogus test "failures"
for a particular platform, you must choose or make a variant result file, and
then add a line to the mapping file, which is "src/test/regress/resultmap".

Each line in the mapping file is of the form

testname/platformpattern=comparisonfilename

The test name is just the name of the particular regression test module. The
platform pattern is a pattern in the style of the Unix tool "expr" (that is, a
regular expression with an implicit ^ anchor at the start). It is matched
against the platform name as printed by "config.guess" followed by :gcc or :cc,
depending on whether you use the GNU compiler or the system's native compiler
(on systems where there is a difference). The comparison file name is the name
of the substitute result comparison file.

For example: some systems using older time zone libraries fail to apply
daylight-saving corrections to dates before 1970, causing pre-1970 PDT times to
be displayed in PST instead. This causes a few differences in the "horology"
regression test. Therefore, we provide a variant comparison file, "horology-no-
DST-before-1970.out", which includes the results to be expected on these
systems. To silence the bogus "failure" message on HPUX platforms, "resultmap"
includes

horology/.*-hpux=horology-no-DST-before-1970

which will trigger on any machine for which the output of "config.guess"
includes -hpux. Other lines in "resultmap" select the variant comparison file
for other platforms where it's appropriate.