</itemizedlist>
<sect1 id="eucalyptus" status="review">
<title>Eucalyptus</title>
<sect2 id="eucalyptus-overview" status="review">
<title>Overview</title>
<emphasis>Eucalyptus</emphasis> is an open-source software infrastructure for implementing "cloud computing" on your own clusters.
<emphasis>Eucalyptus</emphasis> allows you to create your own cloud computing environment, making the most of your computing resources
and offering them to your users as a cloud.
This section will cover setting up a Cloud Computing environment using <application>Eucalyptus</application> with
<application>KVM</application>. For more information on KVM see <xref linkend="libvirt"/>.
The Cloud Computing environment will consist of three components, typically installed on at least two separate machines
(termed the 'front-end' and 'node(s)' for the rest of this document):
<emphasis>One Front-End:</emphasis> hosts the Cloud Controller, a Java based Web configuration interface, and a Cluster Controller,
which determines where virtual machines (VMs) will be housed and manages cluster level VM networking.
<emphasis>One or more Compute Nodes:</emphasis> run the Node Controller component of Eucalyptus, which allows the machine to be part
of the cloud as a host for VMs.
The simple <emphasis>System</emphasis> networking option will be used by default. This networking mode allows virtual machine instances to
obtain IP addresses from the local LAN, assuming a DHCP server on the LAN is properly configured to hand out IPs dynamically to VMs
that request them. Each node will be configured for bridge networking. For more details see <xref linkend="bridging"/>.
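As a point of reference, a node's bridge stanza in <filename>/etc/network/interfaces</filename> might look something like the following. The interface names and the choice of DHCP are assumptions; adapt them to your LAN:

```
auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_fd 9
        bridge_stp off
```

After editing the file, bring the bridge up with <command>sudo ifup br0</command> or restart networking.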
</sect2>
<sect2 id="eucalyptus-configuration" status="review">
<title>Configuration</title>
First, on the <emphasis>Front-End</emphasis> install the appropriate packages. In a terminal prompt on the Front-End enter:
<command>sudo apt-get install eucalyptus-cloud eucalyptus-cc</command>
Next, on each <emphasis>Compute Node</emphasis> install the node controller package. In a terminal prompt on each Compute Node enter:
<command>sudo apt-get install eucalyptus-nc</command>
Once the installation is complete (it may take a while), point a browser at <emphasis>https://front-end:8443</emphasis> and log in to the
administration interface using the default username and password of <emphasis>admin</emphasis>. You will then be prompted to change the
password, configure an email address for the admin user, and set the storage URL.
In the web interface's <emphasis>"Configuration"</emphasis> tab, add a cluster under the <emphasis>"Clusters"</emphasis> heading
(in this configuration, the cluster controller is on the same system as the cloud controller, so entering 'localhost' as the cluster hostname is correct).
Once the form is filled out click the <emphasis>"Add Cluster"</emphasis> button.
Now, back on the <emphasis>Front-End</emphasis>, add the nodes to the cluster:
<command>sudo euca_conf -addnode hostname_of_node</command>
You will then be prompted to log into your Node, install the <application>eucalyptus-nc</application> package, add the <emphasis>eucalyptus</emphasis>
user's ssh key to the node's <filename>authorized_keys</filename> file, and confirm the authenticity of the host's OpenSSH RSA key fingerprint.
The command then completes by synchronizing the Eucalyptus component keys, and node registration is done.
On the Node, the <filename>/etc/eucalyptus/eucalyptus.conf</filename> configuration file will need editing to use your node's bridge interface
(assuming here that the interface is named <emphasis>'br0'</emphasis>):
<programlisting>
VNET_INTERFACE="br0"
</programlisting>
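If you prefer to make this change non-interactively (for example when scripting node setup), a <command>sed</command> one-liner along these lines can rewrite the setting. The sketch below works on a temporary stand-in for <filename>/etc/eucalyptus/eucalyptus.conf</filename>; on a real node you would run the <command>sed</command> command with <command>sudo</command> against the real file, after backing it up:

```shell
# Demo on a temporary stand-in for /etc/eucalyptus/eucalyptus.conf.
conf=$(mktemp)
printf 'VNET_INTERFACE="eth0"\nVNET_MODE="SYSTEM"\n' > "$conf"

# Rewrite whatever VNET_INTERFACE is currently set to so it points at the bridge.
sed -i 's/^VNET_INTERFACE=.*/VNET_INTERFACE="br0"/' "$conf"

grep '^VNET_INTERFACE' "$conf"   # VNET_INTERFACE="br0"
```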
Finally, restart <application>eucalyptus-nc</application>:
<command>sudo /etc/init.d/eucalyptus-nc restart</command>
Be sure to replace <emphasis>hostname_of_node</emphasis> with the actual hostname of each node.
<application>Eucalyptus</application> is now ready to host images on the cloud.
</sect2>
<sect2 id="eucalyptus-references" status="review">
<title>References</title>
See the <ulink url="http://eucalyptus.cs.ucsb.edu/">Eucalyptus website</ulink> for more information.
For information on loading instances see the <ulink url="https://help.ubuntu.com/community/Eucalyptus">Eucalyptus Wiki</ulink> page.
You can also find help in the <emphasis>#ubuntu-virt</emphasis>, <emphasis>#eucalyptus</emphasis>, and
<emphasis>#ubuntu-server</emphasis> IRC channels on <ulink url="http://freenode.net">Freenode</ulink>.
</sect2>
</sect1>
<sect1 id="opennebula" status="review">
<title>OpenNebula</title>
<application>OpenNebula</application> allows virtual machines to be placed and re-placed dynamically on a pool of physical resources.
This allows a virtual machine to be hosted on whichever machine in the pool is available.
This section will detail configuring an OpenNebula cluster using three machines: one <emphasis>Front-End</emphasis> host, and two
<emphasis>Compute Nodes</emphasis> used to run the virtual machines. The Compute Nodes will also need a bridge configured to allow the
virtual machines access to the local network. For details see <xref linkend="bridging"/>.
<sect2 id="opennebula-installation" status="review">
<title>Installation</title>
First, from a terminal on the Front-End enter:
<command>sudo apt-get install opennebula</command>
On each Compute Node install:
<command>sudo apt-get install opennebula-node</command>
In order to copy SSH keys, the <emphasis>oneadmin</emphasis> user will need to have a password. On each machine execute:
<command>sudo passwd oneadmin</command>
Next, copy the <emphasis>oneadmin</emphasis> user's SSH key to the Compute Nodes, and to the Front-End's <filename>authorized_keys</filename> file:
<command>sudo scp /var/lib/one/.ssh/id_rsa.pub oneadmin@node01:/var/lib/one/.ssh/authorized_keys</command>
<command>sudo scp /var/lib/one/.ssh/id_rsa.pub oneadmin@node02:/var/lib/one/.ssh/authorized_keys</command>
<command>sudo sh -c "cat /var/lib/one/.ssh/id_rsa.pub >> /var/lib/one/.ssh/authorized_keys"</command>
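Note that re-running the last command would append the key a second time. A slightly more defensive variant, shown here against temporary stand-in files rather than the real key and <filename>authorized_keys</filename>, only appends the key when it is not already present:

```shell
# Stand-ins for the real public key and authorized_keys files.
keyfile=$(mktemp); authfile=$(mktemp)
echo "ssh-rsa AAAAEXAMPLEKEY oneadmin@front-end" > "$keyfile"

# Append only if the exact line is missing; safe to run repeatedly.
grep -qxFf "$keyfile" "$authfile" || cat "$keyfile" >> "$authfile"
grep -qxFf "$keyfile" "$authfile" || cat "$keyfile" >> "$authfile"

wc -l < "$authfile"   # 1 -- the key was appended exactly once
```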
The SSH key for each Compute Node needs to be added to the <filename>/etc/ssh/ssh_known_hosts</filename> file on the Front-End host. To accomplish
this, <application>ssh</application> to each Compute Node as a user other than <emphasis>oneadmin</emphasis>. Then exit the SSH session, and
execute the following to copy the SSH key from <filename>~/.ssh/known_hosts</filename> to <filename>/etc/ssh/ssh_known_hosts</filename>:
<command>sudo sh -c "ssh-keygen -f .ssh/known_hosts -F node01 1>> /etc/ssh/ssh_known_hosts"</command>
<command>sudo sh -c "ssh-keygen -f .ssh/known_hosts -F node02 1>> /etc/ssh/ssh_known_hosts"</command>
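What these commands do: <command>ssh-keygen -F</command> looks up a single host's entry in a known_hosts file and prints it, and that output is what gets appended to the system-wide file. A self-contained illustration, using a throwaway generated key in place of a real node's host key:

```shell
# Build a fake known_hosts entry for 'node01' from a freshly generated key.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$tmp/key"
printf 'node01 %s\n' "$(cut -d' ' -f1,2 "$tmp/key.pub")" > "$tmp/known_hosts"

# -F extracts only node01's entry (preceded by a '# Host node01 found' comment).
ssh-keygen -f "$tmp/known_hosts" -F node01
```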
Replace <emphasis>node01</emphasis> and <emphasis>node02</emphasis> with the appropriate host names.
This allows the <emphasis>oneadmin</emphasis> user to use <application>scp</application>, without a password or manual intervention, to deploy an
image to the Compute Nodes.
On the Front-End create a directory to store the VM images, giving the <emphasis>oneadmin</emphasis> user access to the directory:
<command>sudo mkdir /var/lib/one/images</command>
<command>sudo chown oneadmin /var/lib/one/images/</command>
Finally, copy a virtual machine disk file into <filename>/var/lib/one/images</filename>. You can create an Ubuntu virtual machine
using <application>vmbuilder</application>; see <xref linkend="jeos-and-vmbuilder"/> for details.
</sect2>
<sect2 id="opennebula-configuration" status="review">
<title>Configuration</title>
The <emphasis>OpenNebula Cluster</emphasis> is now ready to be configured, and virtual machines added to the cluster.
From a terminal prompt enter:
<command>onehost create node01 im_kvm vmm_kvm tm_ssh</command>
<command>onehost create node02 im_kvm vmm_kvm tm_ssh</command>
Next, create a <emphasis>Virtual Network</emphasis> template file named <filename>vnet01.template</filename>. The network name <emphasis>LAN</emphasis> is what the VM template refers to; the bridge name is an assumption to adapt to your nodes:
<programlisting>
NAME            = "LAN"
TYPE            = RANGED
BRIDGE          = br0
NETWORK_SIZE    = C
NETWORK_ADDRESS = 192.168.0.0
</programlisting>
Be sure to change <emphasis>192.168.0.0</emphasis> to your local network.
Using the <application>onevnet</application> utility, add the virtual network to OpenNebula:
<command>onevnet create vnet01.template</command>
Now create a <emphasis>VM Template</emphasis> file named <filename>vm01.template</filename>. The fragment below is a minimal sketch; the VM name, memory size, and disk target are illustrative values to adapt to your image:
<programlisting>
NAME   = vm01
MEMORY = 512

DISK = [
  source   = "/var/lib/one/images/vm01.qcow2",
  target   = "hda",
  readonly = "no" ]

NIC = [ NETWORK="LAN" ]

GRAPHICS = [ type="vnc", listen="127.0.0.1", port="-1" ]
</programlisting>
Start the virtual machine using <application>onevm</application>:
<command>onevm submit vm01.template</command>
Use the <command>onevm list</command> command to view information about the virtual machines. The <command>onevm show vm01</command>
command will display more details about a specific virtual machine.
</sect2>
<sect2 id="opennebula-references" status="review">
<title>References</title>
See the <ulink url="http://www.opennebula.org/doku.php?id=start">OpenNebula website</ulink> for more information.
You can also find help in the <emphasis>#ubuntu-virt</emphasis> and
<emphasis>#ubuntu-server</emphasis> IRC channels on <ulink url="http://freenode.net">Freenode</ulink>.