<!-- <?xml version="1.0" ?>
<!DOCTYPE chapter PUBLIC "-//KDE//DTD DocBook XML V4.1-Based Variant V1.0//EN" "dtd/kdex.dtd">
To validate or process this file as a standalone document, uncomment
this prolog. Be sure to comment it out again when you are done -->
<chapter id="arts-in-detail">
<title>&arts; in Detail</title>

<sect1 id="architecture">
<title>Architecture</title>

<imagedata fileref="arts-structure.png" format="PNG"/>
<textobject><phrase>The &arts; structure.</phrase></textobject>

<sect1 id="modules-ports">
<title>Modules &amp; Ports</title>
The idea of &arts; is that synthesis can be done using small modules,
which only do one thing each, and can then be recombined into complex
structures. The small modules normally have inputs, where they receive
signals or parameters, and outputs, where they produce a signal.

One module (Synth_ADD) for instance just takes the two signals at
its inputs and adds them together. The result is available as an output
signal. The places where modules provide their input/output signals are
called ports.
<sect1 id="structures">
<title>Structures</title>

A structure is a combination of connected modules, some of which may
have parameters coded directly to their input ports, others of which may
be connected, and others which are not connected at all.

What you can do with &artsbuilder; is describe structures. You
describe which modules you want to be connected with which other
modules. When you are done, you can save that structure description to a
file, or tell &arts; to create the structure you described (Execute).

Then you'll probably hear some sound, if you did everything the right
way.
<title>Streams</title>

<title>Latency</title>

<sect2 id="what-islatency">
<title>What Is Latency?</title>
Suppose you have an application called <quote>mousepling</quote> that
should make a <quote>pling</quote> sound when you click a button. The
latency is the time between your finger clicking the mouse button and
you hearing the pling. The latency in this setup is composed of several
partial latencies with different causes.
<sect2 id="latenbcy-simple">
<title>Latency in Simple Applications</title>

In this simple application, latency occurs at these places:

The time until the kernel has notified the X11 server that a mouse
button was pressed.

The time until the X11 server has notified your application that a mouse
button was pressed.

The time until the mousepling application has decided that this button
is worth playing a pling.

The time it takes the mousepling application to tell the soundserver
that it should play a pling.

The time it takes for the pling (which the soundserver starts mixing to
the other output at once) to go through the buffered data, until it
really reaches the position where the soundcard plays.

The time it takes the pling sound from the speakers to reach your ear.
The first three items are latencies external to &arts;. They are
interesting, but beyond the scope of this document. Nevertheless be
aware that they exist, so that even if you have optimized everything
else to really low values, you may not necessarily get exactly the
result you calculated.

Telling the server to play something usually involves one single &MCOP;
call. There are benchmarks which confirm that, on the same host with
unix domain sockets, telling the server to play something can be done
about 9000 times in one second with the current implementation. I expect
that most of this is kernel overhead, switching from one application to
another. Of course this value changes with the exact type of the
parameters. If you transfer a whole image with one call, it will be
slower than if you transfer only one long value. The same is true for
the return code. However, for ordinary strings (such as the filename of
the <literal role="extension">wav</literal> file to play) this shouldn't
be a problem.

That means we can approximate this time with 1/9000 sec, which is below
0.15 ms. We'll see that this is not relevant.
Next is the time between the server starting to play and the soundcard
getting something. The server needs to do buffering, so that no
dropouts are heard when other applications, such as your X11 server or
the <quote>mousepling</quote> application, are running. The way this is
done under &Linux; is that there are a number of fragments of a fixed
size. The server refills the fragments, and the soundcard plays them.

So suppose there are three fragments. The server refills the first, the
soundcard starts playing it. The server refills the second. The server
refills the third. The server is done; other applications can do
something now.

As the soundcard has played the first fragment, it starts playing the
second and the server starts refilling the first. And so on.

The maximum latency you get with all that is (number of fragments)*(size
of each fragment)/(samplingrate * (size of each sample)). Suppose we
assume 44kHz stereo, and 7 fragments of 1024 bytes each (the current aRts
defaults), we get 40 ms.
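
To make the arithmetic concrete, here is a minimal C++ sketch of the
formula above (the names are illustrative only, not part of any &arts;
<acronym>API</acronym>):

#include <stdio.h>

/* maximum buffering latency in milliseconds, from the formula above */
double latency_ms(int fragments, int fragment_size,
                  int sampling_rate, int bytes_per_sample)
{
	return 1000.0 * fragments * fragment_size
	              / (sampling_rate * bytes_per_sample);
}

int main()
{
	/* 44kHz stereo, 16 bit = 4 bytes per sample; 7 fragments of 1024 bytes */
	printf("%f ms\n", latency_ms(7, 1024, 44100, 4));	/* about 40 ms  */
	printf("%f ms\n", latency_ms(3, 256, 44100, 4));	/* about 4.4 ms */
	return 0;
}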
These values can be tuned according to your needs. However, the
<acronym>CPU</acronym> usage increases with smaller latencies, as the
sound server needs to refill the buffers more often, and in smaller
parts. It is also mostly impossible to reach better values without
giving the soundserver realtime priority, as otherwise you'll often get
drop-outs.

However, it is realistic to do something like 3 fragments with 256 bytes
each, which would make this value 4.4 ms. With 4.4 ms delay the idle
<acronym>CPU</acronym> usage of &arts; would be about 7.5%. With 40 ms delay, it would be
about 3% (on a PII-350; this value may depend on your soundcard,
kernel version and others).

Then there is the time it takes the pling sound to get from the speakers
to your ear. Suppose your distance from the speakers is 2 meters. Sound
travels at a speed of 330 meters per second. So we can approximate this
time with 6 ms.
<sect2 id="latency-streaming">
<title>Latency in Streaming Applications</title>

Streaming applications are those that produce their sound themselves.
Assume a game, which outputs a constant stream of samples, and should
now be adapted to replay things via &arts;. To have an example: when I
press a key, the figure which I am playing jumps, and a boing sound is
played.

First of all, you need to know how &arts; does streaming. It's very
similar to the I/O with the soundcard. The game sends some packets with
samples to the sound server. Let's say three packets. As soon as the
sound server is done with the first packet, it sends a confirmation back
to the game that this packet is done.

The game creates another packet of sound and sends it to the server.
Meanwhile the server starts consuming the second sound packet, and so
on. The latency here looks similar to the simple case:
The time until the kernel has notified the X11 server that a key was
pressed.

The time until the X11 server has notified the game that a key was
pressed.

The time until the game has decided that this key is worth playing a
boing.

The time until the packet of sound in which the game has started putting
the boing sound reaches the sound server.

The time it takes for the boing (which the soundserver starts mixing to
the other output at once) to go through the buffered data, until it
really reaches the position where the soundcard plays.

The time it takes the boing sound from the speakers to reach your ear.

The external latencies, as above, are beyond the scope of this document.
Obviously, the streaming latency depends on the time it takes all
packets that are used for streaming to be played once. So it is (number
of packets)*(size of each packet)/(samplingrate * (size of each sample)).

As you can see, that is the same formula as applies for the
fragments. However, for games it makes no sense to do such small delays
as above. I'd say a realistic configuration for games would be 2048
bytes per packet, and 3 packets. The resulting latency would be 35 ms.

This is based on the following: assume that the game renders 25 frames
per second (for the display). It is probably safe to assume that you
won't notice a difference in sound output of one frame. Thus 1/25 second
delay for streaming is acceptable, which in turn means 40 ms would be
okay.
Most people will also not run their games with realtime priority, and
the danger of drop-outs in the sound is not to be neglected. Streaming
with 3 packets of 256 bytes each is possible (I tried that), but causes
a lot of <acronym>CPU</acronym> usage for streaming.

For server side latencies, you can calculate these exactly as above.
<sect2 id="cpu-usage">
<title>Some <acronym>CPU</acronym> usage considerations</title>

There are a lot of factors which influence <acronym>CPU</acronym> usage
in a complex scenario, with some streaming applications and some others,
some plugins on the server etc. To name a few:

Raw <acronym>CPU</acronym> usage by the calculations necessary.

&arts; internal scheduling overhead: how &arts; decides when which
module should calculate what.

Integer to float conversion overhead.

&MCOP; protocol overhead.

Kernel: process/context switching.

Kernel: communication overhead.
For raw <acronym>CPU</acronym> usage for calculations: if you play two
streams simultaneously, you need to do additions. If you apply a filter,
some calculations are involved. To have a simplified example, adding two
streams involves maybe four <acronym>CPU</acronym> cycles per addition;
on a 350 MHz processor, this is 44100*2*4/350000000 = 0.1%
<acronym>CPU</acronym> usage.
&arts; internal scheduling: &arts; needs to decide which plugin
calculates what, and when. This takes time. Take a profiler if you are
interested in that. Generally what can be said is: the less realtime you
do (&ie; the larger the blocks that can be calculated at a time), the
less scheduling overhead you have. Above calculating blocks of 128
samples at a time (thus using fragment sizes of 512 bytes), the
scheduling overhead is probably not worth thinking about.
Integer to float conversion overhead: &arts; uses floats internally as
its data format. These are easy to handle, and on recent processors not
slower than integer operations. However, if there are clients which play
data which is not float (like a game that should do its sound output via
&arts;), it needs to be converted. The same applies if you want to
replay the sounds on your soundcard. The soundcard wants integers, so
conversion is needed.
Here are numbers for a Celeron, approx. ticks per sample, with -O2 and egcs
2.91.66 (taken by Eugene Smith <email>hamster@null.ru</email>). This is
of course highly processor dependent:

convert_mono_8_float:        14
convert_stereo_i8_2float:    28
convert_mono_16le_float:     40
interpolate_mono_16le_float: 200
convert_stereo_i16le_2float: 80
convert_mono_float_16le:     80

So that means 1% <acronym>CPU</acronym> usage for conversion and 5% for
interpolation on this 350 MHz processor.
&MCOP; protocol overhead: &MCOP; does, as a rule of thumb, 9000
invocations per second. Much of this is not &MCOP;'s fault, but relates
to the two kernel causes named below. However, this gives a basis for
calculating the cost of streaming.

Each data packet transferred through streaming can be considered one
&MCOP; invocation. Of course large packets are slower than 9000
packets/s, but it gives the idea.
Suppose you use packet sizes of 1024 bytes. Then, to transfer a stream
with 44kHz stereo, you need to transfer 44100*4/1024 = 172 packets per
second. Suppose you could, at 100% CPU usage, transfer 9000 packets;
then you get (172*100)/9000 = 2% <acronym>CPU</acronym> usage due to
streaming with 1024 byte packets.

These are approximations. However, they show that you would be much
better off (if you can afford it for the latency) using for instance
packets of 4096 bytes. We can make a compact formula here, by
calculating the packet size which causes 100% <acronym>CPU</acronym> usage as
44100*4/9000 = 19.6 bytes, and thus getting the quick formula:

streaming <acronym>CPU</acronym> usage in percent = 1960/(your packet size in bytes)

which gives us 0.5% <acronym>CPU</acronym> usage when streaming with 4096 byte packets.
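
As a plausibility check, the same estimate spelled out in a small C++
sketch (again with illustrative names, not part of &arts;):

#include <stdio.h>

/* rough streaming cost from the rule of thumb above:
 * 9000 MCOP invocations/s at 100% CPU, one invocation per packet */
double streaming_cpu_percent(int packet_size,
                             int sampling_rate, int bytes_per_sample)
{
	double packets_per_sec =
		(double)sampling_rate * bytes_per_sample / packet_size;
	return packets_per_sec * 100.0 / 9000.0;
}

int main()
{
	printf("%f%%\n", streaming_cpu_percent(1024, 44100, 4));	/* about 2%   */
	printf("%f%%\n", streaming_cpu_percent(4096, 44100, 4));	/* about 0.5% */
	return 0;
}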
Kernel process/context switching: this is part of the &MCOP; protocol
overhead. Switching between two processes takes time. There is new
memory mapping, the caches are invalidated, and whatever else (if there
is a kernel expert reading this, let me know what exactly the causes
are). This means: it takes time.
I am not sure how many context switches &Linux; can do per second, but
that number isn't infinite. Thus, I suppose quite a bit of the &MCOP;
protocol overhead is due to context switching. In the beginning of
&MCOP;, I did tests using the same communication inside one process,
and it was much faster (four times as fast or so).

Kernel communication overhead: this is part of the &MCOP; protocol
overhead. Transferring data between processes is currently done via
sockets. This is convenient, as the usual select() methods can be used
to determine when a message has arrived. It can also easily be combined
with other I/O sources such as audio I/O, the X11 server, or whatever
else.
However, those read and write calls certainly cost processor cycles. For
small invocations (such as transferring one midi event) this is probably
not so bad; for large invocations (such as transferring one video frame
of several megabytes) this is clearly a problem.

Adding the usage of shared memory to &MCOP; where appropriate is
probably the best solution. However, it should be done transparently to
the application programmer.

Take a profiler or do other tests to find out exactly how much current
audio streaming is impacted by not using shared memory. However, it's
not bad, as audio streaming (replaying mp3) can be done with 6%
total <acronym>CPU</acronym> usage for &artsd; and
<application>artscat</application> (and 5% for the mp3
decoder). However, this includes everything from the necessary
calculations up to the socket overhead, so I'd say in this setup you
could perhaps save 1% by using shared memory.
<sect2 id="hard-numbers">
<title>Some Hard Numbers</title>

These are done with the current development snapshot. I also wanted to
try out the real hard cases, so this is not what everyday applications
look like.

I wrote an application called streamsound which sends streaming data to
&arts;. Here it is running with realtime priority (without problems),
and one small serverside (volume-scaling and clipping) plugin:

4974 stefan 20 0 2360 2360 1784 S 0 17.7 1.8 0:21 artsd
5016 stefan 20 0 2208 2208 1684 S 0  7.2 1.7 0:02 streamsound
5002 stefan 20 0 2208 2208 1684 S 0  6.8 1.7 0:07 streamsound
4997 stefan 20 0 2208 2208 1684 S 0  6.6 1.7 0:07 streamsound
Each of them is streaming with 3 fragments of 1024 bytes each (18 ms).
There are three such clients running simultaneously. I know that this
looks like a bit much, but as I said: take a profiler and find out what
costs time, and if you like, improve it.

However, I don't think using streaming like that is realistic or makes
sense. To take it even more to the extreme, I tried what would be the
lowest latency possible. Result: you can do streaming without
interruptions with one client application, if you take 2 fragments of
128 bytes between aRts and the soundcard, and between the client
application and aRts. This means that you have a total maximum latency
of (4*128)/(44100*4) s = 3 ms, where 1.5 ms is generated due to soundcard
I/O and 1.5 ms is generated through communication with &arts;. Both
applications need to run with realtime priority.
But: this costs an enormous amount of
<acronym>CPU</acronym>. This example cost about 45% of my
P-II/350. It also starts to click if you start top, move windows on your
X11 display or do disk I/O. All these are kernel issues. The problem is
that scheduling two or more applications with realtime priority costs
you an enormous amount of effort, too, even more if they communicate or
notify each other.

Finally, a more real life example. This is &arts; with artsd and one
artscat (one streaming client) running 16 fragments of 4096 bytes each:

5548 stefan 12 0 2364 2364 1752 R 0 4.9 1.8 0:03 artsd
5554 stefan  3 0  752  752  572 R 0 0.7 0.5 0:00 top
5550 stefan  2 0 2280 2280 1696 S 0 0.5 1.7 0:00 artscat
<sect1 id="dynamic-instantiation">
<title>Dynamic Instantiation</title>

<title>Busses</title>

Busses are dynamically built connections that transfer audio. Basically,
there are some uplinks and some downlinks. All signals from the uplinks
are added and sent to the downlinks.

Busses as currently implemented operate in stereo, so you can only
transfer stereo data over busses. If you want mono data, well, transfer
it only over one channel and set the other to zero or whatever. What
you need to do is create one or more Synth_BUS_UPLINK
objects and tell them a bus name, to which they should talk (⪚
<quote>audio</quote> or <quote>drums</quote>). Simply throw the data in
there.

Then, you'll need to create one or more Synth_BUS_DOWNLINK
objects, and tell them the bus name (<quote>audio</quote> or
<quote>drums</quote> ... if it matches, the data will get through), and
the mixed data will come out again.
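
As a sketch of how this might look in C++ (assuming, as the description
suggests, that the uplink/downlink modules expose their bus name as a
busname attribute):

// assumption: Synth_BUS_UPLINK/DOWNLINK carry a "busname" attribute
Arts::Synth_BUS_UPLINK uplink;
uplink.busname("drums");	// send the input signals to the "drums" bus
uplink.start();

Arts::Synth_BUS_DOWNLINK downlink;
downlink.busname("drums");	// matching name: the mixed data comes out here
downlink.start();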
The uplinks and downlinks can reside in different structures; you can
even have different &artsbuilder;s running, and start an uplink in one
and receive the data from the other with a downlink.

What is nice about busses is that they are fully dynamic. Clients can
plug in and out on the fly. There should be no clicking or noise as this
happens.

Of course, you should not plug out a client playing a signal, since it
will probably not be at a zero level when plugged out of the bus, and
then it will click.

<sect1 id="network-ransparency">
<title>Network Transparency</title>

<sect1 id="security">
<title>Security</title>
<title>Effects and Effect Stacks</title>

<title>Trader</title>

&arts;/&MCOP; heavily relies on splitting things up into small
components. This makes things very flexible, as you can extend the
system easily by adding new components, which implement new effects,
fileformats, oscillators, gui elements, ... As almost everything is a
component, almost everything can be extended easily, without changing
existing sources. New components can simply be loaded dynamically to
enhance already existing applications.

However, to make this work, two things are required:
Components must advertise themselves: they must describe what great
things they offer, so that applications will be able to use them.

Applications must actively look for components that they could use,
instead of always using the same thing for some task.

The combination of these: components which say <quote>here I am, I am
cool, use me</quote>, and applications (or, if you like, other
components) which go out and look for a component they could use to get
a thing done, is called trading.
In &arts;, components describe themselves by specifying values that they
<quote>support</quote> for properties. A typical property for a
file-loading component could be the extension of the files that it can
process. Typical values could be <literal
role="extension">wav</literal>, <literal role="extension">aiff</literal>
or <literal role="extension">mp3</literal>.

In fact, every component may choose to offer many different values for
one property. So one single component could offer reading both <literal
role="extension">wav</literal> and <literal
role="extension">aiff</literal> files, by specifying that it supports
these values for the property <quote>Extension</quote>.
To do so, a component has to place a <literal
role="extension">.mcopclass</literal> file at an appropriate place,
containing the properties it supports. For our example, this could look
like this (and would be installed in
<filename><replaceable>componentdir</replaceable>/Arts/WavPlayObject.mcopclass</filename>):

Interface=Arts::WavPlayObject,Arts::PlayObject,Arts::SynthModule,Arts::Object
Author="Stefan Westerfeld <stefan@space.twc.de>"
URL="http://www.arts-project.org"
MimeType=audio/x-wav,audio/x-aiff
It is important that the filename of the <literal
role="extension">.mcopclass</literal> file also says what the interface
of the component is called. The trader doesn't derive this from the
contents at all: if the file (as here) is called
<filename>Arts/WavPlayObject.mcopclass</filename>, the component
interface is called <interfacename>Arts::WavPlayObject</interfacename>
(modules map to directories).

To look for components, there are two interfaces (which are defined in
<filename>core.idl</filename>, so you have them in every application),
called <interfacename>Arts::TraderQuery</interfacename> and
<interfacename>Arts::TraderOffer</interfacename>. You go on a
<quote>shopping tour</quote> for components like this:
Create a query object:

Arts::TraderQuery query;

Specify what you want. As you saw above, components describe themselves
using properties, for which they offer certain values. So specifying
what you want is done by selecting components that support a certain
value for a property. This is done using the supports method of a
query:

query.supports("Interface","Arts::PlayObject");
query.supports("Extension","wav");
Finally, perform the query using the query method. Then, you'll
(hopefully) get some offers:

vector<Arts::TraderOffer> *offers = query.query();

Now you can examine what you found. Important is the interfaceName
method of TraderOffer, which will tell you the name of the component
that matched the query. You can also find out further properties with
getProperty. The following code will simply iterate through all
components, print their interface names (which could be used for
creation), and delete the results of the query again:

vector<Arts::TraderOffer>::iterator i;
for(i = offers->begin(); i != offers->end(); i++)
	cout << i->interfaceName() << endl;
delete offers;
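
Once you have picked an offer, you would typically create the component
from its interface name. A hedged sketch, assuming the &MCOP;
dynamic-creation helper <classname>Arts::SubClass</classname> and the
Arts::WavPlayObject component found above:

// assumption: Arts::SubClass creates a component by interface name;
// "Arts::WavPlayObject" is the offer we picked from the query results
Arts::PlayObject playObject = Arts::SubClass("Arts::WavPlayObject");
playObject.play();	// hypothetical use of the freshly created component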
For this kind of trading service to be useful, it is important to
somehow agree on what kinds of properties components should usually
define. It is essential that more or less all components in a certain
area use the same set of properties to describe themselves (and the same
set of values where applicable), so that applications (or other
components) will be able to find them.

Author (type string, optional): This can be used to ultimately let the
world know that you wrote something. You can write anything you like in
here; an e-mail address is of course helpful.
Buildable (type boolean, recommended): This indicates whether the
component is usable with <acronym>RAD</acronym> tools (such as
&artsbuilder;) which use components by assigning properties and
connecting ports. It is recommended to set this value to true for
almost any signal processing component (such as filters, effects,
oscillators, ...), and for all other things which can be used in
<acronym>RAD</acronym>-like fashion, but not for internal stuff like,
for instance, <interfacename>Arts::InterfaceRepo</interfacename>.

Extension (type string, used where relevant): Everything dealing with
files should consider using this. You should put the lowercase version
of the file extension without the <quote>.</quote> here, so something
like <userinput>wav</userinput> should be fine.

Interface (type string, required): This should include the full list of
(useful) interfaces your component supports, probably including
<interfacename>Arts::Object</interfacename> and, if applicable,
<interfacename>Arts::SynthModule</interfacename>.
Language (type string, recommended): If you want your component to be
dynamically loaded, you need to specify the language here. Currently,
the only allowed value is <userinput>C++</userinput>, which means the
component was written using the normal C++ <acronym>API</acronym>. If
you do so, you'll also need to set the <quote>Library</quote> property.

Library (type string, used where relevant): Components written in C++
can be dynamically loaded. To do so, you have to compile them into a
dynamically loadable libtool (<literal role="extension">.la</literal>)
module. Here, you can specify the name of the <literal
role="extension">.la</literal> file that contains your component.
Remember to use REGISTER_IMPLEMENTATION (as always).

MimeType (type string, used where relevant): Everything dealing with
files should consider using this. You should put the lowercase version
of the standard mimetype here, for instance
<userinput>audio/x-wav</userinput>.

&URL; (type string, optional): If you want to let people know where they
can find a new version of the component (or a homepage or anything), you
can do it here. This should be a standard &HTTP; or &FTP; &URL;.
<sect1 id="midi-synthesis">
<title><acronym>MIDI</acronym> Synthesis</title>

<sect1 id="instruments">
<title>Instruments</title>

<sect1 id="session-management">
<title>Session Management</title>

<sect1 id="full-duplex">
<title>Full duplex Audio</title>

<sect1 id="namespaces">
<title>Namespaces in &arts;</title>

<sect2 id="namespaces-intro">
<title>Introduction</title>
Each namespace declaration corresponds to a <quote>module</quote>
declaration in the &MCOP; &IDL;.

For an &IDL; module M containing an interface A, plus an interface B
declared at global scope, the generated C++ code for the &IDL; snippet
would look like this:

namespace M {
	/* declaration of A_base/A_skel/A_stub and similar */
	class A {	// smartwrapped reference class
		/* ... */
	};
}

/* declaration of B_base/B_skel/B_stub and similar */
class B {	// smartwrapped reference class
	/* ... */
};

So when referring to the classes from the above example in your C++
code, you would have to write <classname>M::A</classname>, but only
<classname>B</classname>. However, you can of course use <quote>using
namespace M</quote> somewhere, like with any namespace in C++.
<sect2 id="namespaces-how">
<title>How &arts; uses namespaces</title>

There is one global namespace called <quote>Arts</quote>, which all
programs and libraries that belong to &arts; itself use to put their
declarations in. This means that when writing C++ code that depends on
&arts;, you normally have to prefix every class you use with
<classname>Arts::</classname>, like this:

int main(int argc, char **argv)
{
	Arts::Dispatcher dispatcher;
	Arts::SimpleSoundServer server(Arts::Reference("global:Arts_SimpleSoundServer"));

	server.play("/var/foo/somefile.wav");
}

The other alternative is to write a using declaration once, like this:

using namespace Arts;

int main(int argc, char **argv)
{
	Dispatcher dispatcher;
	SimpleSoundServer server(Reference("global:Arts_SimpleSoundServer"));

	server.play("/var/foo/somefile.wav");
}
In &IDL; files, you don't exactly have a choice. If you are writing code
that belongs to &arts; itself, you'll have to put it into the Arts
module.

// IDL File for aRts code:
#include <artsflow.idl>
module Arts {	// put it into the Arts namespace
	interface Synth_TWEAK : SynthModule
	{
		in audio stream invalue;
		out audio stream outvalue;
		attribute float tweakFactor;
	};
};
If you write code that doesn't belong to &arts; itself, you should not
put it into the <quote>Arts</quote> namespace. However, you can make a
namespace of your own if you like. In any case, you'll have to prefix
classes you use from &arts;.

// IDL File for code which doesn't belong to aRts:
#include <artsflow.idl>

// either write without module declaration, then the generated classes will
// not use a namespace:
interface Synth_TWEAK2 : Arts::SynthModule
{
	in audio stream invalue;
	out audio stream outvalue;
	attribute float tweakFactor;
};

// however, you can also choose your own namespace, if you like, so if you
// write an application "PowerRadio", you could for instance do it like this:
module PowerRadio {
	struct Station {
		// ... fields describing a station ...
	};

	interface Tuner : Arts::SynthModule {
		attribute Station station;	// no need to prefix Station, same module
		out audio stream left, right;
	};
};
<sect2 id="namespaces-implementation">
<title>Internals: How the Implementation Works</title>

Often, in interfaces, casts, method signatures and similar, &MCOP; needs
to refer to names of types or interfaces. These are represented as
strings in the common &MCOP; data structures, while the namespace is
always fully represented in the C++ style. This means the strings would
contain <quote>M::A</quote> and <quote>B</quote>, following the example
above.

Note this even applies if inside the &IDL; text the namespace qualifiers
were not given, since the context made clear which namespace the
interface <interfacename>A</interfacename> was meant to be used in.
<sect1 id="threads">
<title>Threads in &arts;</title>

<sect2 id="threads-basics">
<title>Basics</title>

Using threads isn't possible on all platforms. This is why &arts; was
originally written without using threading at all. For almost all
problems, for each threaded solution to the problem, there is a
non-threaded solution that does the same.

For instance, instead of putting audio output in a separate thread and
making it blocking, &arts; uses non-blocking audio output, and figures
out when to write the next chunk of data using
<function>select()</function>.

However, &arts; (in very recent versions) at least provides support for
people who do want to implement their objects using threads. For
instance, if you already have code for an <literal
role="extension">mp3</literal> player, and the code expects the <literal
role="extension">mp3</literal> decoder to run in a separate thread, it's
usually easiest to keep this design.
The &arts;/&MCOP; implementation is built around sharing state between
separate objects in obvious and non-obvious ways. A small list of
shared state includes:

The Dispatcher object which does &MCOP; communication.

The Reference counting (Smartwrappers).

The IOManager which does timer and fd watches.

The ObjectManager which creates objects and dynamically loads plugins.

The FlowSystem which calls calculateBlock in the appropriate situations.
None of the above objects expect to be used concurrently (&ie;
called from separate threads at the same time). Generally there are two
ways of solving this:

Requiring the caller of any functions on these objects to
acquire a lock before using them.

Making these objects really threadsafe and/or creating
per-thread instances of them.

&arts; follows the first approach: you will need a lock whenever you talk to
any of these objects. The second approach is harder to do. A hack which
tries to achieve this is available at
<ulink url="http://space.twc.de/~stefan/kde/download/arts-mt.tar.gz">
http://space.twc.de/~stefan/kde/download/arts-mt.tar.gz</ulink>, but at
the current point in time, a minimalistic approach will probably work
better, and cause fewer problems with existing applications.
<sect2 id="threads-locking">
<title>When/how to acquire the lock?</title>

You can get/release the lock with the two functions:

<ulink
url="http://space.twc.de/~stefan/kde/arts-mcop-doc/arts-reference/headers/Arts__Dispatcher.html#lock"><function>Arts::Dispatcher::lock()</function></ulink>

<ulink
url="http://space.twc.de/~stefan/kde/arts-mcop-doc/arts-reference/headers/Arts__Dispatcher.html#unlock"><function>Arts::Dispatcher::unlock()</function></ulink>

Generally, you don't need to acquire the lock (and you shouldn't try to
do so) if it is already held. A list of conditions when this is the
case is:

You receive a callback from the IOManager (timer or fd).

You get called due to some &MCOP; request.

You are called from the NotificationManager.

You are called from the FlowSystem (calculateBlock).

There are also some exceptions: functions which you can only call in
the main thread, and for that reason you will never need a lock to call
them:

Constructor/destructor of Dispatcher/IOManager.

<methodname>Dispatcher::run()</methodname> /
<methodname>IOManager::run()</methodname>

<para><methodname>IOManager::processOneEvent()</methodname></para>
But that is it. For everything else that is somehow related to &arts;,
you will need to get the lock, and release it again when
done. Always. Here is a simple example:

class SuspendTimeThread : Arts::Thread {
public:
	void run() {
		/*
		 * you need this lock because:
		 *  - constructing a reference needs a lock (as global: will go to
		 *    the object manager, which might in turn need the GlobalComm
		 *    object to look up where to connect to)
		 *  - assigning a smartwrapper needs a lock
		 *  - constructing an object from reference needs a lock (because it
		 *    might need to connect a server)
		 */
		Arts::Dispatcher::lock();
		Arts::SoundServer server = Arts::Reference("global:Arts_SoundServer");
		Arts::Dispatcher::unlock();

		/*
		 * you need a lock here, because
		 *  - dereferencing a smartwrapper needs a lock (because it might
		 *    do lazy creation)
		 *  - doing an MCOP invocation needs a lock
		 */
		Arts::Dispatcher::lock();
		long seconds = server.secondsUntilSuspend();
		Arts::Dispatcher::unlock();

		printf("seconds until suspend = %ld\n", seconds);
	}
};
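
Since every forgotten unlock() risks a deadlock, it can be convenient to
wrap the lock in a small RAII helper. This is not part of the &arts;
<acronym>API</acronym>, just a hypothetical sketch:

// hypothetical helper, not part of the aRts API
class DispatcherGuard {
public:
	DispatcherGuard()  { Arts::Dispatcher::lock(); }
	~DispatcherGuard() { Arts::Dispatcher::unlock(); }
};

long querySuspendTime(Arts::SoundServer& server)
{
	DispatcherGuard guard;	// locked for the whole scope, even on early return
	return server.secondsUntilSuspend();
}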
<sect2 id="threads-classes">
<title>Threading related classes</title>

The following threading related classes are currently available:

<ulink
url="http://www.arts-project.org/doc/headers/Arts__Thread.html"><classname>
Arts::Thread</classname></ulink> - which encapsulates a thread.

<ulink url="http://www.arts-project.org/doc/headers/Arts__Mutex.html">
<classname>Arts::Mutex</classname></ulink> - which encapsulates a mutex.

<ulink
url="http://www.arts-project.org/doc/headers/Arts__ThreadCondition.html">
<classname>Arts::ThreadCondition</classname></ulink> - which provides
support for waking up threads which are waiting for a certain condition
to become true.

<ulink
url="http://www.arts-project.org/doc/headers/Arts__SystemThreads.html"><classname>Arts::SystemThreads</classname></ulink>
- which encapsulates the operating system threading layer (which offers
a few helpful functions to application programmers).

See the links for documentation.
<sect1 id="references-errors">
<title>References and Error Handling</title>

&MCOP; references are one of the most central concepts in &MCOP;
programming. This section will try to describe how exactly references
are used, and will especially also try to cover cases of failure (server
crashes).

<sect2 id="references-properties">
<title>Basic properties of references</title>

An &MCOP; reference is not an object, but a reference to an object: even
though the following declaration

Arts::Synth_PLAY p;

looks like the definition of an object, it only declares a reference to
an object. As a C++ programmer, you might also think of it as
Synth_PLAY *, a kind of pointer to a Synth_PLAY object. This especially
means that p can be the same thing as a NULL pointer.
You can create a NULL reference by assigning it explicitly, like this:

Arts::Synth_PLAY p = Arts::Synth_PLAY::null();

Invoking things on a NULL reference leads to a core dump. For example,

Arts::Synth_PLAY p = Arts::Synth_PLAY::null();
string s = p.toString();

will lead to a core dump. Comparing this to a pointer, it is essentially
the same as dereferencing a NULL pointer, which every C++ programmer
would know to avoid.
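
A defensive caller can therefore test the reference first; isNull() is
the same check used in the failure discussion below:

Arts::Synth_PLAY p = Arts::Synth_PLAY::null();
if(!p.isNull())
{
	string s = p.toString();	// only invoke methods on non-null references
}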
Uninitialized objects try to lazy-create themselves upon first use. So,
after a plain declaration such as Arts::Synth_PLAY p;, the invocation

string s = p.toString();

is something different than dereferencing a NULL pointer. You didn't tell
the object at all what it is, and now you try to use it. The guess here
is that you want to have a new local instance of an Arts::Synth_PLAY
object. Of course you might have wanted something else (like creating the
object somewhere else, or using an existing remote object). However, it
is a convenient short cut to creating objects. Lazy creation will not work
once you have assigned something else (like a null reference).

The equivalent C++ construct, calling a method through an uninitialized
pointer, would obviously just plain segfault, so this is different here.
This lazy creation is tricky especially as it is not guaranteed that an
implementation exists for your interface.
For instance, consider an abstract thing like an
Arts::PlayObject. There are certainly concrete PlayObjects like those for
playing mp3s or wavs, but

Arts::PlayObject po;

followed by any method invocation will certainly fail. The problem is
that although lazy creation kicks in, and tries to create a PlayObject,
it fails, because there are only things like Arts::WavPlayObject and
similar. Thus, use lazy creation only when you are sure that an
implementation exists.
References may point to the same object:

Arts::SimpleSoundServer s = Arts::Reference("global:Arts_SimpleSoundServer");
Arts::SimpleSoundServer s2 = s;

creates two references referring to the same object. It doesn't copy any
value, and doesn't create two objects.

All objects are reference counted. So once an object isn't referred to
any longer by any reference, it gets deleted. There is no way to
explicitly delete an object; however, you can use something like

p = Arts::Synth_PLAY::null();

to make the Synth_PLAY object go away in the end. In particular, it
should never be necessary to use new and delete in conjunction with
references.
<sect2 id="references-failure">
<title>The case of failure</title>

As references can point to remote objects, the servers containing these
objects can crash. What happens then?

A crash doesn't change whether a reference is a null reference. This
means that if <function>foo.isNull()</function> was
<returnvalue>true</returnvalue> before a server crash, then it is also
<returnvalue>true</returnvalue> after a server crash (which is
clear). It also means that if <function>foo.isNull()</function> was
<returnvalue>false</returnvalue> before a server crash (foo referred to
an object), then it is also <returnvalue>false</returnvalue> after the
crash.
Invoking methods on a valid reference stays safe.
Suppose the server containing the object calc crashed. Still, calls like

int k = calc.subtract(i,j);

are safe. Obviously subtract has to return something here, which it
can't because the remote object no longer exists. In this case (k == 0)
would be true. Generally, operations try to return something
<quote>neutral</quote> as result, such as 0.0, a null reference for
objects or empty strings, when the object no longer exists.

Checking <function>error()</function> reveals whether something worked:

int k = calc.subtract(i,j);
if(calc.error())
	printf("k is not i-j!\n");
would print out <computeroutput>k is not i-j</computeroutput> whenever
the remote invocation didn't work. Otherwise <varname>k</varname> is
really the result of the subtract operation as performed by the remote
object (no server crash). However, for methods doing things like
deleting a file, you can't know for sure whether it really happened. Of
course it happened if <function>.error()</function> is
<returnvalue>false</returnvalue>. However, if
<function>.error()</function> is <returnvalue>true</returnvalue>, there
are two possibilities:

The file got deleted, and the server crashed just after deleting it, but
before transferring the result.

The server crashed before being able to delete the file.
Using nested invocations is dangerous in crash-resistant programs.
Using something like

window.titlebar().setTitle("foo");

is not a good idea. Suppose you know that window contains a valid Window
reference. Suppose you know that <function>window.titlebar()</function>
will return a Titlebar reference because the Window object is
implemented properly. However, the above statement still isn't safe.

What could happen is that the server containing the Window object has
crashed. Then, regardless of how good the Window implementation is, you
will get a null reference as the result of the window.titlebar()
operation. And then of course invoking setTitle on that null reference
will lead to a crash as well.

So a safe variant of this would be

Titlebar titlebar = window.titlebar();
titlebar.setTitle("foo");

with the appropriate error handling added if you like. If you don't
trust the Window implementation, you might as well use

Titlebar titlebar = window.titlebar();
if(!titlebar.isNull())
	titlebar.setTitle("foo");

both of which are safe.
There are other conditions of failure, such as network disconnection
(suppose you remove the cable between your server and client while your
application runs). However, their effect is the same as a server crash.

Overall, it is of course a matter of policy how strictly you try
to trap communication errors throughout your application. You might
follow the <quote>if the server crashes, we need to debug the server
until it never crashes again</quote> method, which would mean you need
not bother about all these problems.
<sect2 id="references-internals">
<title>Internals: Distributed Reference Counting</title>

An object, to exist, must be owned by someone. If it isn't, it will
cease to exist (more or less) immediately. Internally, ownership is
indicated by calling <function>_copy()</function>, which increments a
reference count, and given back by calling
<function>_release()</function>. As soon as the reference count drops to
zero, a delete will be done.

As a variation of the theme, remote usage is indicated by
<function>_useRemote()</function>, and dissolved by
<function>_releaseRemote()</function>. These functions keep a list of
which servers have invoked them (and thus own the object). This is used
in case one of these servers disconnects (&ie; crash, network failure),
to remove the references that are still on the objects. This is done in
<function>_disconnectRemote()</function>.
Now there is one problem. Consider a return value. Usually, the return
value object will not be owned by the calling function any longer. It
will however also not be owned by the caller, until the message holding
the object is received. So there is a time of
<quote>ownershipless</quote> objects.

Now, when sending an object, one can be reasonably sure that as soon as
it is received, it will be owned by somebody again, unless, again, the
receiver dies. However, this means that special care needs to be taken
with the object at least while sending, probably also while receiving,
so that it doesn't die at once.

The way &MCOP; does this is by <quote>tagging</quote> objects that are
in the process of being copied across the wire. Before such a copy is
started, <function>_copyRemote</function> is called. This prevents the
object from being freed for a while (5 seconds). Once the receiver calls
<function>_useRemote()</function>, the tag is removed again. So all
objects that are sent over the wire are tagged before transfer.

If the receiver receives an object which is on his server, of course he
will not <function>_useRemote()</function> it. For this special case,
<function>_cancelCopyRemote()</function> exists to remove the tag
manually. Other than that, there is also timer based tag removal, if
tagging was done but the receiver didn't really get the object (due to
crash, network failure). This is done by the
<classname>ReferenceClean</classname> class.
<sect1 id="detail-gui-elements">
<title>&GUI; Elements</title>

&GUI; elements are currently in an experimental state. However, this
section will describe what is supposed to happen here, so if you are a
developer, you will be able to understand how &arts; will deal with
&GUI;s in the future. There is some code there already, too.

&GUI; elements should be used to allow synthesis structures to interact
with the user. In the simplest case, the user should be able to modify
some parameters of a structure directly (such as a gain factor which is
used before the final play module).

In more complex settings, one could imagine the user modifying
parameters of groups of structures and/or not yet running structures,
such as modifying the <acronym>ADSR</acronym> envelope of the currently
active &MIDI; instrument. Another thing would be setting the filename of
some sample based instrument.

On the other hand, the user might like to monitor what the synthesizer
is doing. There could be oscilloscopes, spectrum analyzers, volume
meters and <quote>experiments</quote> that figure out the frequency
transfer curve of some given filter module.

Finally, the &GUI; elements should be able to control the whole
structure of what is running inside &arts; and how. The user should be
able to assign instruments to midi channels, start new effect
processors, and configure his main mixer desk (which is built of &arts;
structures itself) to have one channel more and use another strategy for
its equalizers.

You see: the <acronym>GUI</acronym> elements should bring all the
possibilities of the virtual studio that &arts; should simulate to the
user. Of course, they should also gracefully interact with midi inputs
(for instance, sliders should move if they get &MIDI; input that changes
just that parameter), and probably even generate events themselves, to
allow the user interaction to be recorded via a sequencer.
Technically, the idea is to have an &IDL; base class for all widgets
(<classname>Arts::Widget</classname>), and derive a number of commonly
used widgets from there (like <classname>Arts::Poti</classname>,
<classname>Arts::Panel</classname>, <classname>Arts::Window</classname>,
and so on).

Then, one can implement these widgets using a toolkit, for instance &Qt;
or Gtk. Finally, effects should build their &GUI;s out of existing
widgets. For instance, a freeverb effect could build its &GUI; out of
five <classname>Arts::Poti</classname> thingies and an
<classname>Arts::Window</classname>. So if there is a &Qt;
implementation for these base widgets, the effect will be able to
display itself using &Qt;. If there is a Gtk implementation, it will
also work for Gtk (and more or less look/work the same).

Finally, as we're using &IDL; here, &artsbuilder; (or other tools) will
be able to plug &GUI;s together visually, or autogenerate &GUI;s given
hints for parameters, based only on the interfaces. It should be
relatively straightforward to write a <quote>create &GUI; from
description</quote> class, which takes a &GUI; description (containing
the various parameters and widgets), and creates a living &GUI; object
out of it.

Based on &IDL; and the &arts;/&MCOP; component model, it should be as
easy to extend the possible objects which can be used for the &GUI; as
it is to add a plugin implementing a new filter to &arts;.