Multi Tenancy Networking Protections in XenServer
=================================================

The purpose of the vif_rules script is to allow multi-tenancy on a XenServer
host. In a multi-tenant cloud environment a host machine needs to be able to
enforce network isolation amongst guest instances, at both layer two and layer
three. The rules prevent guests from taking and using unauthorized IP
addresses, sniffing other guests' traffic, and carrying out ARP poisoning
attacks. This current revision only supports IPv4; IPv6 support will be added
in the future.

If the kernel doesn't support these, you will need to obtain the source RPMs
for the proper version of XenServer to recompile the dom0 kernel.

XenServer Requirements (32-bit dom0)
====================================

- arptables 32-bit rpm

XenServer Environment Specific Notes
====================================

- XenServer 5.5 U1 based on the 2.6.18 kernel didn't include physdev module
  support. Support for this had to be recompiled into the kernel.
- XenServer 5.6 based on the 2.6.27 kernel didn't include physdev, ebtables,
  or arptables support.
- XenServer 5.6 FP1 didn't include physdev, ebtables, or arptables, but they
  do have a Cloud Supplemental pack available to partners which swaps out the
  kernels for kernels that support the networking rules.

iptables, ebtables, and arptables drop rules are applied to all forward chains
on the host. These are applied at boot time with an init script. They ensure
all forwarded packets are dropped by default. Allow rules are then applied to
the instances to ensure they have permission to talk on the internet.

Any time an unprivileged domain, or domU, is started or stopped, it gets a
unique domain id (dom_id). This dom_id is utilized in a number of places, one
of which is that it is assigned to the virtual interface (vif). The vifs are
attached to the bridge that is attached to the physical network. For instance,
if you had a public bridge attached to eth0 and your domain id was 5, your vif
would be vif5.0.

The networking rules are applied to the VIF directly, so they apply at the
lowest level of the networking stack. Because the VIF changes along with the
domain id on any start, stop, or reboot of the instance, the rules need to be
removed and re-added any time that occurs.

Because the dom_id can change often, the vif_rules script is hooked into the
/etc/xensource/scripts/vif script that gets called any time an instance is
started or stopped, which includes pauses and resumes.

Examples of the rules run for the host on boot:

iptables -P FORWARD DROP
iptables -A FORWARD -m physdev --physdev-in eth0 -j ACCEPT
ebtables -P FORWARD DROP
ebtables -A FORWARD -o eth0 -j ACCEPT
arptables -P FORWARD DROP
arptables -A FORWARD --opcode Request --in-interface eth0 -j ACCEPT
arptables -A FORWARD --opcode Reply --in-interface eth0 -j ACCEPT
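
As a rough illustration, these host defaults could be applied by a small
helper like the sketch below. This is not the actual host-rules init script;
the function name and structure are assumptions, and only the commands shown
above are used:

import subprocess

HOST_BOOT_RULES = [
    # Default-deny all forwarded traffic at each layer.
    ["iptables", "-P", "FORWARD", "DROP"],
    ["ebtables", "-P", "FORWARD", "DROP"],
    ["arptables", "-P", "FORWARD", "DROP"],
    # Allow traffic arriving on the physical interface.
    ["iptables", "-A", "FORWARD", "-m", "physdev", "--physdev-in", "eth0",
     "-j", "ACCEPT"],
    ["ebtables", "-A", "FORWARD", "-o", "eth0", "-j", "ACCEPT"],
    ["arptables", "-A", "FORWARD", "--opcode", "Request",
     "--in-interface", "eth0", "-j", "ACCEPT"],
    ["arptables", "-A", "FORWARD", "--opcode", "Reply",
     "--in-interface", "eth0", "-j", "ACCEPT"],
]

def apply_host_rules():
    """Apply the default drop policies and the eth0 accept rules."""
    for rule in HOST_BOOT_RULES:
        subprocess.check_call(rule)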

Examples of the rules that are run per instance state change:

iptables -A FORWARD -m physdev --physdev-in vif1.0 -s 10.1.135.22/32 -j ACCEPT
arptables -A FORWARD --opcode Request --in-interface "vif1.0" \
    --source-ip 10.1.135.22 -j ACCEPT
arptables -A FORWARD --opcode Reply --in-interface "vif1.0" \
    --source-ip 10.1.135.22 --source-mac 9e:6e:cc:19:7f:fe -j ACCEPT
ebtables -A FORWARD -p 0806 -o vif1.0 --arp-ip-dst 10.1.135.22 -j ACCEPT
ebtables -A FORWARD -p 0800 -o vif1.0 --ip-dst 10.1.135.22 -j ACCEPT
ebtables -I FORWARD 1 -s ! 9e:6e:cc:19:7f:fe -i vif1.0 -j DROP
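
A sketch of how these per-instance rules could be generated for a given VIF is
shown below. This is a simplified illustration, not the actual vif_rules.py
code; the function name and its arguments are assumptions, and the commands
mirror the examples above:

import subprocess

def apply_instance_rules(vif, ip, mac):
    """Allow traffic only for the instance's own IP and MAC on its VIF."""
    rules = [
        # Permit IPv4 traffic sourced from the instance's assigned address.
        ["iptables", "-A", "FORWARD", "-m", "physdev", "--physdev-in", vif,
         "-s", "%s/32" % ip, "-j", "ACCEPT"],
        # Permit ARP requests and replies only for the assigned IP and MAC.
        ["arptables", "-A", "FORWARD", "--opcode", "Request",
         "--in-interface", vif, "--source-ip", ip, "-j", "ACCEPT"],
        ["arptables", "-A", "FORWARD", "--opcode", "Reply",
         "--in-interface", vif, "--source-ip", ip, "--source-mac", mac,
         "-j", "ACCEPT"],
        # Permit ARP and IPv4 frames destined for the instance's IP.
        ["ebtables", "-A", "FORWARD", "-p", "0806", "-o", vif,
         "--arp-ip-dst", ip, "-j", "ACCEPT"],
        ["ebtables", "-A", "FORWARD", "-p", "0800", "-o", vif,
         "--ip-dst", ip, "-j", "ACCEPT"],
        # Drop anything from this VIF that doesn't use the assigned MAC.
        ["ebtables", "-I", "FORWARD", "1", "-s", "!", mac, "-i", vif,
         "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.check_call(rule)

apply_instance_rules("vif1.0", "10.1.135.22", "9e:6e:cc:19:7f:fe")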

Typically when you see a vif, it'll look like vif<domain id>.<network bridge>.
vif2.1, for example, would be domain 2 on the second interface.

The vif_rules.py script needs to pull information about the IPs and MAC
addresses assigned to the instance. The current implementation assumes that
information is put into the xenstore-data key of the VM record as a JSON
string. The vif_rules.py script reads the JSON string to determine the IPs
and MAC addresses to protect.

An example format is given below:

# xe vm-param-get uuid=<uuid> param-name=xenstore-data

vm-data/networking/4040fa7292e4:
    {"ips": [{"netmask":"255.255.255.0",
              "ip":"173.200.100.10"}],
     "mac":"40:40:fa:72:92:e4",
     "gateway":"173.200.100.1",
     "dns":["72.3.128.240","72.3.128.241"]};

vm-data/networking/40402321c9b8:
    {"ips":[{"netmask":"255.255.224.0",
             "ip":"10.177.10.10"}],
     "routes":[{"route":"10.176.0.0",
                "netmask":"255.248.0.0",
                "gateway":"10.177.10.1"},
               {"route":"10.191.192.0",
                "netmask":"255.255.192.0",
                "gateway":"10.177.10.1"}],
     "mac":"40:40:23:21:c9:b8"}

The key is used for two purposes. First, the vif_rules.py script reads from it
to apply the needed rules after parsing the JSON. Second, because it is put
into the xenstore-data field, the xenstore will be populated with this data on
boot. This allows a guest agent to read data about the instance and apply
configuration as needed.
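
For example, once the instance boots, a guest agent could read its own
networking entry out of the xenstore. The sketch below assumes the guest has
the xenstore-read utility from the Xen guest tools installed and uses the key
from the example above; the helper name is made up:

import json
import subprocess

def read_guest_networking(mac_key):
    """Read and parse vm-data/networking/<mac_key> from inside the guest."""
    raw = subprocess.check_output(
        ["xenstore-read", "vm-data/networking/%s" % mac_key])
    return json.loads(raw)

config = read_guest_networking("4040fa7292e4")
print(config["ips"][0]["ip"], config["gateway"])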

- Copy host-rules into /etc/init.d/ and make sure to chmod +x host-rules.
- Run 'chkconfig host-rules on' to add the init script to start up.
- Copy vif_rules.py into /etc/xensource/scripts.
- Patch /etc/xensource/scripts/vif using the supplied patch file. It may vary
  for different versions of XenServer but it should be pretty self explanatory.
  It calls the vif_rules.py script on domain creation and tear down.
- Run '/etc/init.d/host-rules start' to start up the host based rules.
- The instance rules will then fire on creation of the VM as long as the
  correct data is present in its xenstore-data record.
- You can check to see if the rules are in place with: iptables --list,
  arptables --list, or ebtables --list.