What is this
============

The files here are used for testing the different Asterias
applications.


Requirements
=============

All:
----
You need:

- Python 2.4 (might work with 2.3 and/or 2.5, but we haven't tested it).
- Funkload (http://funkload.nuxeo.org).

(Note: I have no idea whether any of this works under Windows; we've only
tested under GNU/Linux. This is not a serious limitation: these are tests
we use to make sure our applications do what they should, so we will not
spend time making them run under OSs we don't use.)


Extra:
------

If you want to carry out the more exhaustive numerical tests (under
/PomeloII/NumericalTesting) you need (a quick check is sketched after this
list):

- rpy, the R/Python interface (http://rpy.sourceforge.net); there are
  Debian packages.

- R (http://cran.r-project.org), of course, and several of its recommended
  packages.
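
As a quick way to check that both are in place, something like this
should work from a Python shell (a rough sketch, assuming the classic rpy
interface; picking survival as the package to try is only a guess based
on the Cox comparisons described below):

  from rpy import r

  print r('R.version.string')   # the R that rpy is talking to
  r('library(survival)')        # fails if that recommended package is missing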


Running the tests
==================

For Pomelo, the file testPomeloSingleTest.sh runs a single
test. Otherwise, all the test*.py files in each subdirectory should be run
using "fl-run-test testwhatever.py".

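run_all_tests.sh (described just below) already automates this; purely as
an illustration, a loop along these lines would do much the same (a
sketch, not a file in the repository; it assumes fl-run-test is on your
PATH and that the test directories hang from the current directory):

  import glob
  import os
  import subprocess

  for directory in sorted(os.listdir('.')):
      if not os.path.isdir(directory):
          continue
      for test_file in sorted(glob.glob(os.path.join(directory, 'test*.py'))):
          # fl-run-test is the test runner that ships with Funkload
          print 'Running', test_file
          subprocess.call(['fl-run-test', os.path.basename(test_file)],
                          cwd=directory)
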
There is a utility shell script, under the main directory,
"run_all_tests.sh" that runs all tests, except the NumericalResults
48
tests. All the tests in this file are completed in about 90 minutes (but
49
can take longer, or give network-related exceptions, if there are network
50
problems, etc. )
The main exception is the set of tests under "NumericalResults". The file
there is pomelo-num.py, which should run as any stand-alone Python
file. However, if you run into network problems (that are unrelated to
Asterias), it is advisable to run this file interactively from a Python
shell. This also gives you more control, and allows you to rerun specific
tests if you are interested (see the sketch below). When run stand-alone,
the tests take between 20 and 40 minutes (depending on network speed and
on your machine speed; it is R that does the comparisons).
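
For example, from a Python shell started in the NumericalTesting
directory (only a sketch: the example function name below is made up, so
look inside pomelo-num.py for what it actually defines):

  # pomelo-num.py has a hyphen in its name, so execfile it rather than
  # importing it; this defines its functions in the current session
  # (and, depending on how the file is written, may start the full run).
  execfile('pomelo-num.py')

  # After a network hiccup, rerun just the piece you care about, e.g.:
  # run_permutation_tests()   # hypothetical name; see pomelo-num.py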


NumericalResults: details
=========================

These are related to pomelo-num.py. This is the logic of the tests:

- A bunch of reference results have been precomputed in R, and stored
  under "DataSets_R_Analyses".

- How these have been generated is explained in each of the .R files:
  * Permut.R
  * Limma.R
  * FishersExact.R
  * Cox.R

  Those files take a file/data set, run the R analyses, and store the
  results. There should be no need to recreate these analyses (unless you
  want to, of course).

- The file flstandalone.py: this is a modified version of funkload. I
  modified it so that I can use it for our type of testing here without
  using the unit-test approach of fl-run-test.

- The file pomelo-num.py defines a few functions, and does the testing
  itself. The logic is (see the sketch after this list):

  a) Using rpy, set up an R session that will be called from Python.

  b) Load a bunch of helper R functions.

  c) Use funkload to submit files to the application, with the appropriate
  parameters, etc., and get back the output (whatever that needs to be:
  the results file, the comparisons between sets of coefficients, etc.).
  This can be more or less involved, depending on what we are doing (e.g.,
  the most complex cases use covariates in linear models).

  d) Using rpy, get R to read the output, and carry out the comparisons
  between what Pomelo computed (what funkload just got back in step c
  above) and the precomputed reference results. Of course, some of these
  comparisons cannot be literal comparisons (recall these are real
  numbers), and some need to account for the standard error associated
  with an estimate (e.g., p-values returned from permutation tests). This
  explains why in many cases you see listings of differences that do not
  mean that things are broken, but just that you should expect to see some
  differences.
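
To make that flow concrete, here is a minimal sketch of steps a), b) and
d) using the classic rpy interface. The helper file, output file,
reference file and tolerance are made up for illustration; the real code,
including the funkload step c), lives in pomelo-num.py and
flstandalone.py.

  from rpy import r

  # a) importing rpy starts an embedded R session; r(...) evaluates R code

  # b) load the helper R functions (file name made up)
  r('source("helper-functions.R")')

  # c) submit the data set to Pomelo through funkload/flstandalone and
  #    save the returned results locally, e.g. as "pomelo-output.txt"
  #    (omitted here)

  # d) let R read both sets of results and compare them, allowing for a
  #    tolerance since these are real numbers rather than exact values
  r('computed  <- read.table("pomelo-output.txt", header = TRUE)')
  r('reference <- read.table("DataSets_R_Analyses/reference.txt", header = TRUE)')
  print r('all.equal(computed, reference, tolerance = 1e-6)')
  # prints TRUE, or a listing of the differences found; for estimates with
  # a standard error (e.g. permutation p-values) the real tests also
  # account for that error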