This benchmark framework consists of the files:

bench.erl - see bench module below
bench.hrl - defines some useful macros
all.erl   - see all module below
The module bench is a generic module that measures the execution time
of functions in callback modules and writes an HTML report on the outcome.

When you execute the function bench:run/0, it compiles and runs all
benchmark modules in the current directory.
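For example, starting an Erlang shell in the directory that contains
your benchmark modules, a run is a single call (the shell transcript
below is illustrative; the report output depends on the bench module):

```erlang
1> bench:run().
```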
21
In the all module there is a function called releases/0 that you can
22
edit to contain all your erlang installations and then you can
23
run your benchmarks on several erlang versions using only one command i.e.
Requirements on callback modules
--------------------------------
29
* A callback module must be named <callbackModuleName>_bm.erl
31
* The module must export the function benchmarks/0 that must return:
32
{Iterations, [Name1,Name2...]} where Iterations is the number of
33
times each benchmark should be run. Name1, Name2 and so one are the
34
name of exported functions in the module.
36
* The exported functions Name1 etc. must take one argument i.e. the number
37
of iterations and should return the atom ok.
39
* The functions in a benchmark module should represent different
40
ways/different sequential algorithms for doing something. And the
41
result will be how fast they are compared to each other.
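Taken together, these requirements mean a minimal callback module could
look like the following sketch (the module name, list contents, and
iteration count are illustrative choices, not taken from the framework):

```erlang
-module(sort_bm).
-export([benchmarks/0, sort/1, usort/1]).

%% Run each benchmark 100000 times; sort/1 and usort/1 are the
%% exported benchmark functions compared against each other.
benchmarks() ->
    {100000, [sort, usort]}.

sort(0) -> ok;
sort(Iter) ->
    lists:sort([5,3,1,4,2]),
    sort(Iter-1).

usort(0) -> ok;
usort(Iter) ->
    lists:usort([5,3,1,4,2]),
    usort(Iter-1).
```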
46
Files that are created in the current directory are *.bmres and
47
index.html. The file(s) with the extension "bmres" are an intermediate
48
representation of the benchmark results and is only meant to be read
49
by the reporting mechanism defined in bench.erl. The index.html file
50
is the report telling you how good the benchmarks are in comparison to
51
each other. If you run your test on several erlang releases the
52
html-file will include the result for all versions.
57
To get meaningful measurements, you should make sure that:
59
* The total execution time is at least several seconds.
61
* That any time spent in setup before entering the measurement loop is very
62
small compared to the total time.
64
* That time spent by the loop itself is small compared to the total execution
67
Consider the following example of a benchmark function that does
68
a local function call.
72
foo(), % Local function call
75
The problem is that both "foo()" and "local_call(Iter-1)" takes about
76
the same amount of time. To get meaningful figures you'll need to make
77
sure that the loop overhead will not be visible. In this case we can
78
take help of a macro in bench.hrl to repeat the local function call
79
many times, making sure that time spent calling the local function is
80
relatively much longer than the time spent iterating. Of course, all
81
benchmarks in the same module must be repeated the same number of
82
times; thus external_call will look like
84
external_call(0) -> ok;
85
external_call(Iter) ->
86
?rep20(?MODULE:foo()),
87
external_call(Iter-1).
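A repetition macro like ?rep20 can be built by expanding its argument
in sequence; the following is a sketch of the idea, not necessarily the
exact definition found in bench.hrl:

```erlang
%% Expand the expression X five, then twenty, times in sequence,
%% so the repeated work dominates the per-iteration loop overhead.
-define(rep5(X), X, X, X, X, X).
-define(rep20(X), ?rep5(X), ?rep5(X), ?rep5(X), ?rep5(X)).
```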
89
This technique is only necessary if the operation we are testing executes
92
If you for instance want to test a sort routine we can keep it simple:
sort(Iter) ->
    do_sort(Iter, lists:seq(0, 63)).

do_sort(0, _List) -> ok;
do_sort(Iter, List) ->
    lists:sort(List),
    do_sort(Iter-1, List).
The call to lists:seq/2 is only done once. The loop overhead in the
do_sort/2 function is small compared to the execution time of lists:sort/1.
Any error caused by a callback module will result in termination of the
benchmark program and an error message that should give a good idea of
what is wrong.