<?xml version="1.0" encoding="latin1" ?>
<!DOCTYPE chapter SYSTEM "chapter.dtd">
<year>2000</year><year>2009</year>
<holder>Ericsson AB. All Rights Reserved.</holder>
The contents of this file are subject to the Erlang Public License,
Version 1.1, (the "License"); you may not use this file except in
compliance with the License. You should have received a copy of the
Erlang Public License along with this software. If not, it can be
retrieved online at http://www.erlang.org/.
Software distributed under the License is distributed on an "AS IS"
basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
the License for the specific language governing rights and limitations
<title>Architecture</title>
<prepared>Håkan Mattsson</prepared>
<responsible>Håkan Mattsson</responsible>
<approved>Håkan Mattsson</approved>
<date>2007-06-15</date>
<file>megaco_architecture.xml</file>
<title>Network view</title>
<p>Megaco is a (master/slave) protocol for control of gateway functions at
the edge of the packet network. Examples of these are IP-PSTN trunking
gateways and analog line gateways. The main function of Megaco is to
allow gateway decomposition into a call agent (call control) part (known
as Media Gateway Controller, MGC) - the master, and a gateway interface
part (known as Media Gateway, MG) - the slave. The MG has no call control
knowledge and only handles making the connections and simple
<p>SIP and H.323 are peer-to-peer protocols for call control (valid only
for some of the protocols within H.323), or more generally multi-media
session protocols. They both operate at a different level (call control)
from Megaco in a decomposed network, and are therefore not aware of
whether or not Megaco is being used underneath.</p>
<image file="megaco_sys_arch">
<icaption>Network architecture</icaption>
<p>Megaco and peer protocols are complementary in nature and entirely
compatible within the same system. At a system level, Megaco allows
<list type="bulleted">
<p>overall network cost and performance optimization</p>
<p>protection of investment by isolation of changes at the call
<p>freedom to geographically distribute both call function and
<p>adaptation of legacy equipment</p>
<title>General</title>
<p>This Erlang/OTP application supplies a framework for building
applications that need to utilize the Megaco/H.248 protocol.</p>
<p>We have introduced the term "user" as a generic term for either
an MG or an MGC, since most of the functionality we support is
common for both MG's and MGC's. A (local) user may be configured
in various ways and it may establish any number of connections
to its counterpart, the remote user. Once a connection has been
established, the connection is supervised and it may be used for
the purpose of sending messages. N.B. according to the standard,
an MG is connected to at most one MGC, while an MGC may be
connected to any number of MG's.</p>
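<p>For illustration, starting a local user and connecting it to its
counterpart could look as follows. This is only a sketch: the MID
values and the choice of encoding and transport modules are example
assumptions, and the send handle must first be obtained from the
transport module (e.g. via <c>megaco_tcp</c>):</p>
<code type="none"><![CDATA[
%% Start the Megaco application and register a local user (here an MG).
ok = megaco:start(),
MgMid = {deviceName, "mg"},        %% example MID
ok = megaco:start_user(MgMid,
                       [{encoding_mod,    megaco_pretty_text_encoder},
                        {encoding_config, []},
                        {send_mod,        megaco_tcp}]),

%% Establish a supervised connection to the remote user (the MGC).
%% SendHandle is assumed to come from the transport module.
RecvHandle = megaco:user_info(MgMid, receive_handle),
MgcMid     = {deviceName, "mgc"},  %% example MID
{ok, ConnHandle} = megaco:connect(RecvHandle, MgcMid, SendHandle, self()).
]]></code>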
<p>For the purpose of managing "virtual MG's", one Erlang node may
host any number of MG's. In fact, it may host a mix of MG's and
MGC's. You may say that an Erlang node may host any number of
<p>The protocol engine uses callback modules to handle various
<list type="bulleted">
<p>encoding callback modules - handle the encoding and
decoding of messages. Several modules for handling different
encodings are included, such as ASN.1 BER, pretty
(well-indented) text, compact text and some others. Others may be
<p>transport callback modules - handle sending and receiving
of messages. Transport modules for TCP/IP and UDP/IP are
included, and others may be written by you.</p>
<p>user callback modules - the actual implementation of an MG
or MGC. Most of the functions are intended for handling of a
decoded transaction (request, reply, acknowledgement), but
there are others that handle connect, disconnect and
<p>Each connection may have its own configuration of callback
modules, re-send timers, transaction id ranges etc., and they may
be re-configured on-the-fly.</p>
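<p>Re-configuration on-the-fly is done per configuration item. As a
sketch (the timer values below are arbitrary examples):</p>
<code type="none"><![CDATA[
%% Update a single configuration item for an existing connection.
ok = megaco:update_conn_info(ConnHandle, request_timer, 5000),

%% User-level (per MG/MGC) items can be changed in the same way.
ok = megaco:update_user_info(MgMid, pending_timer, 10000).
]]></code>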
<p>In the API of Megaco, a user may explicitly send action
requests, but the generation of transaction identifiers, the
encoding and the actual transport of the message to the remote user
are handled automatically by the protocol engine according to the
actual connection configuration. Megaco messages are not exposed
<p>On the receiving side, the transport module receives the message
and forwards it to the protocol engine, which decodes it and
invokes user callback functions for each transaction. When a
user has handled its action requests, it simply returns a list
of action replies (or a message error), and the protocol engine
uses the encoding module and transport module to compose and
forward the message to the originating user.</p>
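<p>A minimal sketch of the receiving side's user callback module is
shown below. The module name is illustrative, only one of the
<c>megaco_user</c> callbacks is shown, and the empty reply list is a
placeholder for real action replies:</p>
<code type="none"><![CDATA[
-module(my_mgc_user).  %% example module name
-behaviour(megaco_user).

-export([handle_trans_request/3]).

%% Invoked by the protocol engine for each decoded transaction
%% request. Returning {discard_ack, ActionReplies} lets the engine
%% encode and send the reply without requesting an acknowledgement.
handle_trans_request(_ConnHandle, _ProtocolVersion, _ActionRequests) ->
    ActionReplies = [],  %% build the real action replies here
    {discard_ack, ActionReplies}.
]]></code>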
<p>The protocol stack also handles things like automatic
sending of acknowledgements, pending transactions, re-send of
messages, supervision of connections etc.</p>
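<p>From the sending user's point of view all of this is hidden behind
a single call; the engine takes care of transaction identifiers,
encoding, transport and re-sends. A sketch (the empty request list is
a placeholder):</p>
<code type="none"><![CDATA[
%% Send action requests over an established connection and wait for
%% the reply. megaco:cast/3 is the asynchronous alternative.
ActionRequests = [],  %% fill in real action requests here
{_ProtocolVersion, Result} = megaco:call(ConnHandle, ActionRequests, []).
]]></code>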
<p>In order to provide a solution for scalable implementations of
MG's and MGC's, a user may be distributed over several Erlang
nodes. One of the Erlang nodes is connected to the physical
network interface, but messages may be sent from other nodes and
the replies are automatically forwarded back to the originating
<title>Single node config</title>
<p>Here a system configuration with an MG and an MGC, each residing
in its own Erlang node, is outlined:</p>
<image file="single_node_config">
<icaption>Single node config</icaption>
<title>Distributed config</title>
<p>In a larger system with a user (in this case an MGC)
distributed over several Erlang nodes, it looks a little bit
different. Here the encoding is performed on the originating
Erlang node (1) and the binary is forwarded to the node (2) with
the physical network interface. When the potential message reply
is received on the interface of node (2), it is decoded there,
and then different actions will be taken for each transaction in
the message. The transaction reply will be forwarded in its
decoded form to the originating node (1), while the other types
of transactions will be handled locally on node (2).</p>
<p>Timers and re-send of messages will be handled locally on
one node, that is node (1), in order to avoid unnecessary
transfer of data between the Erlang nodes.
<image file="distr_node_config">
<icaption>Distributed node config</icaption>
<title>Message round-trip call flow</title>
<p>The typical round-trip of a message can be viewed as
follows. First, we view the call flow on the originating
<image file="call_flow">
<icaption>Message Call Flow (originating side)</icaption>
<p>Then we continue with the call flow on the destination
<image file="call_flow_cont">
<icaption>Message Call Flow (destination side)</icaption>