This document explains how to install OPNFV Euphrates with JOID, including installing JOID, configuring JOID for your environment, and deploying OPNFV with different SDN solutions in HA or non-HA mode. The prerequisites are:

- An Ubuntu 16.04 LTS Server Jumphost

- Minimum 2 Networks per Pharos requirement

  - One for the administrative network, with a gateway to access the Internet
  - One for the OpenStack public network, to access OpenStack instances via floating IPs
  - JOID supports multiple isolated networks for data as well as storage, based on your network requirements for OpenStack

- Minimum 6 physical servers for a bare metal environment

  - Jump Host x 1, minimum H/W configuration:

    - Hard Disk: 1 (250GB)
    - NIC: eth0 (Admin, Management), eth1 (external network)

  - Control and Compute Nodes x 5, minimum H/W configuration:

    - Hard Disk: 2 (500GB), SSD preferred
    - NIC: eth0 (Admin, Management), eth1 (external network)

**NOTE**: The above configuration is the minimum. For better performance and usage of OpenStack, please consider higher specs for all nodes.

Make sure all servers are connected to the top-of-rack switch and configured accordingly. No DHCP server should be up and configured. Configure gateways only on the eth0 and eth1 networks, to access networks outside your lab.

JOID (Juju OPNFV Infrastructure Deployer) allows you to deploy different combinations of OpenStack release and SDN solution in HA or non-HA mode. For OpenStack, JOID supports Mitaka and Newton. For SDN, it supports Open vSwitch, OpenContrail, OpenDaylight, and ONOS. In addition to HA or non-HA mode, it also supports deploying from the latest development tree.

JOID heavily utilizes the technology developed in Juju and MAAS. Juju is a state-of-the-art, open source, universal model for service-oriented architecture and service-oriented deployments. Juju allows you to deploy, configure, manage, maintain, and scale cloud services quickly and efficiently on public clouds, as well as on physical servers, OpenStack, and containers. You can use Juju from the command line or through its powerful GUI. MAAS (Metal-As-A-Service) brings the dynamism of cloud computing to the world of physical provisioning and Ubuntu. Connect, commission and deploy physical servers in record time, re-allocate nodes between services dynamically, and keep them up to date; and in due course, retire them from use. In conjunction with the Juju service orchestration software, MAAS will enable you to get the most out of your physical hardware and dynamically deploy complex services with ease and confidence.

For more info on Juju and MAAS, please visit https://jujucharms.com/ and http://maas.ubuntu.com.

The MAAS server is installed and configured on the Jumphost with Ubuntu 16.04 LTS, with access to the Internet. Another VM is created to be managed by MAAS as a bootstrap node for Juju. The rest of the resources, bare metal or virtual, will be registered and provisioned in MAAS. Finally, the MAAS environment details are passed to Juju for use.

We will use 03-maasdeploy.sh to automate the deployment of a MAAS cluster for use as a Juju provider. maas-deployer uses a set of configuration files and simple commands to build a MAAS cluster using virtual machines for the region controller and bootstrap hosts, and automatically commissions nodes as required, so that the only remaining step is to deploy services with Juju. For more information about maas-deployer, please see https://launchpad.net/maas-deployer.

Configuring the Jump Host
^^^^^^^^^^^^^^^^^^^^^^^^^

Let's get started on the Jump Host node.

The MAAS server is going to be installed and configured on the Jumphost machine. We need to create bridges on the Jump Host prior to setting up MAAS.

**NOTE**: Do not run any of the commands in this document as the ‘root’ user. Please create a non-root user account; we recommend using the ‘ubuntu’ user.

Install the bridge-utils package on the Jump Host and configure a minimum of two bridges, one for the Admin network and the other for the Public network. The configuration below is a sample; the bridge addresses and netmasks must match your lab networks::

$ sudo apt-get install bridge-utils

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

iface p1p1 inet manual

auto brAdm
iface brAdm inet static
    address 10.120.0.1         # sample; use your admin network address
    netmask 255.255.255.0
    bridge_ports p1p1

iface p1p2 inet manual

auto brPublic
iface brPublic inet static
    address 172.16.120.1       # sample; use your public network address
    netmask 255.255.240.0
    gateway 172.16.120.254
    dns-nameservers 8.8.8.8
    bridge_ports p1p2

**NOTE**: If you choose to use separate networks for management, data, and storage, you need to create a bridge for each interface. If VLAN tags are used, create the appropriate VLAN interfaces on the Jump Host, based on the VLAN IDs on the interface, and attach the bridges to them.

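For example, a minimal sketch of a VLAN-tagged bridge stanza in /etc/network/interfaces (assuming the vlan package is installed, and that p1p2 carries VLAN 905 for the public network; the interface names, VLAN ID and address here are illustrative only)::

$ sudo apt-get install vlan

auto p1p2.905
iface p1p2.905 inet manual
    vlan-raw-device p1p2

auto brPublic
iface brPublic inet static
    address 172.16.120.1       # sample; use your public network address
    netmask 255.255.255.0
    bridge_ports p1p2.905
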
**NOTE**: The Ethernet device names can vary from one installation to another. Please change the Ethernet device names according to your environment.

MAAS has been integrated into the JOID project. To get the JOID code, please run::

$ sudo apt-get install git
$ git clone https://gerrit.opnfv.org/gerrit/p/joid.git

Setting Up Your Environment for JOID
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To set up your own environment, create a directory under joid/labconfig/<company name>/<pod number>/ and copy an existing JOID environment into it. For example::

$ mkdir -p ../labconfig/myown/pod
$ cp ../labconfig/cengn/pod2/labconfig.yaml ../labconfig/myown/pod/

Now let's configure the labconfig.yaml file. Please modify the sections in labconfig.yaml as per your lab configuration; the abridged example below highlights the sections you will typically touch (refer to the copied file for the full structure)::

## Change the name of the lab as you want; the MAAS name is formed
## from the location and rack name. ##
lab:
  location: myown
  racks:
  - rack: pod
    ## Fill in this section based on your lab hardware. Define one node
    ## with the network and control roles, two nodes with control, compute
    ## and storage roles, and the rest with compute and storage roles for
    ## backward compatibility. Servers with more disks should be used for
    ## compute and storage only. ##
    nodes:
    - name: rack-4-m4
      # DCOMP4-B, 24 cores, 64G, 2 disks, 4TB disk
      architecture: x86_64
      roles: [network,control]
      nics:
      - ifname: eth0
        spaces: [admin]
        mac: ["0c:c4:7a:3a:c5:b6"]
      - ifname: eth1
        spaces: [floating]
        mac: ["0c:c4:7a:3a:c5:b7"]
    ## Repeat the above node section for each hardware node you have. ##
    ## Define the floating IP range along with the gateway IP to be used
    ## for instance floating IPs. ##
    floating-ip-range: 172.16.120.20,172.16.120.62,172.16.120.254,172.16.120.0/24
    # Multiple MACs separated by spaces, where the MACs are taken from
    # the ext-ports across all network nodes.
    ## Interface name to be used for floating IPs;
    ## here eth1 of m4, since tags for networking are not yet implemented. ##
    ext-port: "eth1"
opnfv:
    ## Define the maximum disk possible in your environment. ##
    storage:
    - type: ceph
      disk: /dev/sdb
    ## Ensure the following configuration matches the bridge configuration
    ## on your jumphost. ##
    spaces:
    - type: admin
      bridge: brAdm
      cidr: 10.120.0.0/24
      gateway: 10.120.0.254
      vlan:
    - type: floating
      bridge: brPublic
      cidr: 172.16.120.0/24
      gateway: 172.16.120.254
      vlan:

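Before kicking off the MAAS deployment, it is worth checking that the edited file is still valid YAML. A quick sketch, assuming Python with PyYAML is available on the Jump Host::

$ python -c "import yaml; yaml.safe_load(open('../labconfig/myown/pod/labconfig.yaml'))" && echo "labconfig.yaml parses OK"
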
Next we will use the 03-maasdeploy.sh script in joid/ci to kick off the MAAS deployment.

Starting MAAS deployment
^^^^^^^^^^^^^^^^^^^^^^^^

Now run the 03-maasdeploy.sh script with the environment you just created::

~/joid/ci$ ./03-maasdeploy.sh custom ../labconfig/myown/pod/labconfig.yaml

This will take approximately 30 minutes to a couple of hours, depending on your environment. The script will do the following:

1. Create 1 VM (KVM).
2. Install MAAS on the Jumphost.
3. Configure MAAS to enlist and commission a VM for the Juju bootstrap node.
4. Configure MAAS to enlist and commission bare metal servers.
5. Download and load 16.04 images to be used by MAAS.

When it's done, you should be able to view the MAAS webpage (in our example http://172.16.50.2/MAAS) and see 1 bootstrap node and the bare metal servers in the 'Ready' state on the nodes page.

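You can also check the node states from the CLI; a sketch, assuming a MAAS CLI profile named maas is logged in and jq is installed (Appendix A uses the same maas/jq pattern)::

$ maas maas machines read | jq -r '.[] | [.hostname, .status_name] | @tsv'
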
Troubleshooting MAAS deployment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

During the installation process, please carefully review the error messages.

Join the IRC channel #opnfv-joid on freenode to ask questions. After the issues are resolved, re-run 03-maasdeploy.sh; it will clean up the VMs created previously. There is no need to manually undo what's been done.

JOID allows you to deploy different combinations of OpenStack release and SDN solution in HA or non-HA mode. For OpenStack, it supports Mitaka and Newton. For SDN, it supports Open vSwitch, OpenContrail, OpenDaylight and ONOS (Open Network Operating System). In addition to HA or non-HA mode, it also supports deploying the latest from the development tree (tip).

The deploy.sh script in the joid/ci directory will do all the work for you. For example, the following deploys OpenStack Newton with Open vSwitch in HA mode::

~/joid/ci$ ./deploy.sh -o newton -s nosdn -t ha -l custom -f none -m openstack

Similarly, the following deploys Kubernetes with a load balancer on the pod::

~/joid/ci$ ./deploy.sh -m kubernetes -f lb

Take a look at the deploy.sh script. You will find that we support the following for each option::

[-s]
  nosdn: Open vSwitch only, with no external SDN controller.
  odl: OpenDaylight Lithium version.
  opencontrail: OpenContrail.
  onos: ONOS framework as SDN.

[-t]
  noha: no HA mode of OpenStack.
  ha: HA mode of OpenStack.
  tip: the tip of the development tree.

[-o]
  mitaka: OpenStack Mitaka version.
  newton: OpenStack Newton version.

[-l]
  default: for virtual deployment, where installation will be done on KVM created using ./03-maasdeploy.sh
  custom: install on bare metal OPNFV defined by labconfig.yaml

[-f]
  none: no special feature will be enabled.
  ipv6: IPv6 will be enabled for tenants in OpenStack.
  dpdk: DPDK will be enabled.
  lxd: virt-type will be lxd.
  dvr: DVR will be enabled.
  lb: load balancing will be enabled (for Kubernetes).

[-d]
  xenial: distro to be used is Xenial 16.04.

[-a]
  amd64: only the x86 architecture will be used. Future versions will support arm64 as well.

[-m]
  openstack: OpenStack model will be deployed.
  kubernetes: Kubernetes model will be deployed.

The script will call 01-bootstrap.sh to bootstrap the Juju VM node, then it will call 02-deploybundle.sh with the corresponding parameter values::

./02-deploybundle.sh $opnfvtype $openstack $opnfvlab $opnfvsdn $opnfvfeature $opnfvdistro

The Python script GenBundle.py is then used to create bundle.yaml based on the templates defined in the config_tpl/juju2/ directory.

By default, debug is enabled in the deploy.sh script, and error messages will be printed on the SSH terminal where you are running the scripts. The deployment could take an hour to a couple of hours (maximum) to complete.

You can check the status of the deployment by running this command in another terminal::

$ watch juju status --format tabular

This will refresh the juju status output in tabular format every 2 seconds.

Next we will show you what Juju is deploying and to where, and how you can modify it based on your own needs.

OPNFV Juju Charm Bundles
^^^^^^^^^^^^^^^^^^^^^^^^

The magic behind Juju is a collection of software components called charms. They contain all the instructions necessary for deploying and configuring cloud-based services. The charms publicly available in the online Charm Store represent the distilled DevOps knowledge of experts.

A bundle is a set of services with a specific configuration and their corresponding relations that can be deployed together in a single step. Instead of deploying a single service, they can be used to deploy an entire workload, with working relations and configuration. The use of bundles allows for easy repeatability and for sharing of complex, multi-service deployments.

For OPNFV, we have created the charm bundles for each SDN deployment. They are stored in each directory in ~/joid/ci.

We use Juju to deploy a set of charms via a yaml configuration file. You can find the complete format guide for the Juju configuration file here: http://pythonhosted.org/juju-deployer/config.html

In the ‘services’ subsection, we deploy the Ubuntu Xenial charm from the Charm Store. You can deploy the same charm and name it differently, such as the second service ‘nodes-compute.’ The third service we deploy is named ‘ntp’ and is deployed from the NTP Trusty charm in the Charm Store. The NTP charm is a subordinate charm, which is designed for and deployed to the running space of another service unit.

The tag here is related to what we define in the deployment.yaml file for MAAS. When ‘constraints’ is set, Juju will ask its provider, in this case MAAS, to provide a resource with the tags. In this case, Juju is asking for one resource tagged with ‘control’ and one resource tagged with ‘compute’ from MAAS. Once the resource information is passed to Juju, Juju will start the installation of the specified version of Ubuntu.

In the next subsection, we define the relations between the services. The beauty of Juju and charms is that you can define the relation between two services, and all the deployed service units will set up the relations accordingly. This makes scaling out a very easy task. Here we add the relation between NTP and the two bare metal services, as the sketch below shows.

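A minimal sketch of such a bundle (illustrative only; the bundles generated under ~/joid/ci contain many more services and options)::

services:
  nodes:
    charm: "cs:xenial/ubuntu"
    num_units: 1
    constraints: tags=control
  nodes-compute:
    charm: "cs:xenial/ubuntu"
    num_units: 1
    constraints: tags=compute
  ntp:
    charm: "cs:trusty/ntp"
relations:
  - ["ntp:juju-info", "nodes:juju-info"]
  - ["ntp:juju-info", "nodes-compute:juju-info"]
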
Once the relations are established, Juju considers the deployment complete and moves on to the next one::

juju deploy bundles.yaml

This will start the deployment, and failed sections will be retried. Consider, for example, the following section::

nova-cloud-controller:
  branch: lp:~openstack-charmers/charms/trusty/nova-cloud-controller/next
  num_units: 1
  options:
    network-manager: Neutron
  to:
    - "lxc:nodes-api=0"

We define a service named ‘nova-cloud-controller,’ which is deployed from the next branch of the nova-cloud-controller Trusty charm hosted by the openstack-charmers team on Launchpad. The number of units to be deployed is 1. We set the network-manager option to ‘Neutron.’ This one service unit will be deployed to an LXC container on service ‘nodes-api’ unit 0.

To find out what other options there are for this particular charm, you can go to the code location at http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/nova-cloud-controller/next/files; the options are defined in the config.yaml file.

Once the service unit is deployed, you can see the current configuration by running juju config::

$ juju config nova-cloud-controller

You can change a value with juju config, for example::

$ juju config nova-cloud-controller network-manager='FlatManager'

Charms encapsulate operational best practice; the number of options you need to configure should be minimal. The Juju Charm Store is a great resource to explore what a charm can offer you. Following the nova-cloud-controller example, here is the main page of the recommended charm on the Charm Store: https://jujucharms.com/nova-cloud-controller/trusty/66

If you have any questions regarding Juju, please join the IRC channel #opnfv-joid on freenode for JOID related questions or #juju for general questions.

Testing Your Deployment
^^^^^^^^^^^^^^^^^^^^^^^

Once juju-deployer is complete, use juju status --format tabular to verify that all deployed units are in the ready state.

Find the openstack-dashboard IP address from the juju status output, and see if you can log in via a web browser. The username and password are admin/openstack.

Optionally, see if you can log in to the Juju GUI. The Juju GUI runs on the Juju bootstrap node, which is the second VM defined in the 03-maasdeploy.sh file. The username and password are admin/admin.

If you deploy OpenDaylight, OpenContrail or ONOS, find the IP address of the web UI and log in. Please refer to each SDN bundle.yaml for the login username/password.

Logs are indispensable when it comes time to troubleshoot. If you want to see all the service unit deployment logs, you can run juju debug-log in another terminal. The debug-log command shows the consolidated logs of all Juju agents (machine and unit logs) running in the environment.

To view a single service unit deployment log, use juju ssh to access the deployed unit. For example, to log in to the nova-compute unit, run juju ssh nova-compute/0 and look at /var/log/juju/unit-nova-compute-0.log for more info::

$ juju ssh nova-compute/0

ubuntu@R4N4B1:~$ juju ssh nova-compute/0
Warning: Permanently added '172.16.50.60' (ECDSA) to the list of known hosts.
Warning: Permanently added '3-r4n3b1-compute.maas' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 3.13.0-77-generic x86_64)

* Documentation: https://help.ubuntu.com/

Last login: Tue Feb 2 21:23:56 2016 from bootstrap.maas
ubuntu@3-R4N3B1-compute:~$ sudo -i
root@3-R4N3B1-compute:~# cd /var/log/juju/
root@3-R4N3B1-compute:/var/log/juju# ls
machine-2.log  unit-ceilometer-agent-0.log  unit-ceph-osd-0.log  unit-neutron-contrail-0.log  unit-nodes-compute-0.log  unit-nova-compute-0.log  unit-ntp-0.log
root@3-R4N3B1-compute:/var/log/juju#

**NOTE**: By default, Juju will add the Ubuntu user keys for authentication into the deployed server, and only SSH access will be available.

Once you resolve the error, go back to the jump host to rerun the charm hook with::

$ juju resolved --retry <unit>

If you would like to start over, run juju destroy-environment <environment name> to release the resources, then you can run deploy.sh again.

The following are the common issues we have collected from the community:

- The right variables are not passed as part of the deployment procedure::

  ./deploy.sh -o newton -s nosdn -t ha -l custom -f none

- If you have set up MAAS without 03-maasdeploy.sh, then the ./clean.sh command could hang, and the juju status command may hang, because the correct MAAS API keys are not listed in the cloud listing for MAAS.

  Solution: Please make sure you have a MAAS cloud listed using juju clouds, and that the correct MAAS API key has been added.

- Deployment times out: use the command juju status --format=tabular and make sure all service containers receive an IP address and are executing code. Ensure there is no service in the error state (see the one-liner after this list).

- In case the cleanup process hangs, run the juju destroy-model command manually.

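To quickly spot units stuck in an error state, a simple sketch::

$ juju status --format=tabular | grep -i error
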
**Direct console access** via the OpenStack GUI can be quite helpful if you need to log in to a VM but cannot reach it over the network. It can be enabled by setting ``console-access-protocol`` in ``nova-cloud-controller`` to ``vnc``. One option is to directly edit the juju-deployer bundle and set it there prior to deploying OpenStack::

nova-cloud-controller:
  options:
    console-access-protocol: vnc

To access the console, just click on the instance in the OpenStack GUI and select the Console tab.

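If OpenStack is already deployed, the same option can usually be changed at runtime with juju config, assuming the deployed charm revision exposes it::

$ juju config nova-cloud-controller console-access-protocol=vnc
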
Post Installation Configuration
===============================

Configuring OpenStack
^^^^^^^^^^^^^^^^^^^^^

At the end of the deployment, the admin-openrc file with OpenStack login credentials will be created for you. You can source the file and start configuring OpenStack via the CLI::

~/joid_config$ cat admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://172.16.50.114:5000/v2.0
export OS_REGION_NAME=RegionOne

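As a quick smoke test of the credentials (a sketch, assuming the nova and neutron CLI clients are installed on the Jump Host)::

$ source ~/joid_config/admin-openrc
$ nova service-list
$ neutron net-list
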
We have prepared some scripts to help you configure the OpenStack cloud that you just deployed. In each SDN directory, for example joid/ci/opencontrail, there is a ‘scripts’ folder where you can find them. These scripts will help you configure a basic OpenStack cloud and verify it. For more information on OpenStack cloud configuration, please refer to the OpenStack Cloud Administrator Guide: http://docs.openstack.org/user-guide-admin/. Similarly, for complete SDN configuration, please refer to the respective SDN administrator guide.

Each SDN solution requires a slightly different setup. Please refer to the README in each SDN folder. Most likely you will need to modify the openstack.sh and cloud-setup.sh scripts for the floating IP range, private IP network, and SSH keys. Please go through openstack.sh, glance.sh and cloud-setup.sh and make changes as you see fit.

Let's take a look at the scripts for Open vSwitch and briefly go through each one, so you know what you need to change for your own environment::

configure-juju-on-openstack  get-cloud-images  joid-configure-openstack

openstack.sh
~~~~~~~~~~~~

Let's first look at ‘openstack.sh’. First, there are 3 functions defined: configOpenrc(), unitAddress(), and unitMachine()::

configOpenrc() {
  cat <<EOF
export SERVICE_ENDPOINT=$4
unset SERVICE_ENDPOINT
export OS_USERNAME=$1
export OS_PASSWORD=$2
export OS_TENANT_NAME=$3
export OS_AUTH_URL=$4
export OS_REGION_NAME=$5
EOF
}

unitAddress() {
  if [[ "$jujuver" < "2" ]]; then
    juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"public-address\"]" 2> /dev/null
  else
    juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"applications\"][\"$1\"][\"units\"][\"$1/$2\"][\"public-address\"]" 2> /dev/null
  fi
}

unitMachine() {
  if [[ "$jujuver" < "2" ]]; then
    juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"services\"][\"$1\"][\"units\"][\"$1/$2\"][\"machine\"]" 2> /dev/null
  else
    juju status --format yaml | python -c "import yaml; import sys; print yaml.load(sys.stdin)[\"applications\"][\"$1\"][\"units\"][\"$1/$2\"][\"machine\"]" 2> /dev/null
  fi
}

The function configOpenrc() creates the OpenStack login credentials, the function unitAddress() finds the IP address of the unit, and the function unitMachine() finds the machine info of the unit.

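As an aside, with Juju 2.x the same lookup can also be done with jq instead of inline Python; a sketch, assuming jq is installed and that the JSON status output mirrors the yaml layout::

$ juju status --format json | jq -r '.applications["keystone"].units["keystone/0"]["public-address"]'
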
Next, the script looks up the keystone address and admin password, and generates the admin-openrc file::

keystoneIp=$(keystoneIp)
if [[ "$jujuver" < "2" ]]; then
  adminPasswd=$(juju get keystone | grep admin-password -A 5 | grep value | awk '{print $2}' 2> /dev/null)
else
  adminPasswd=$(juju config keystone | grep admin-password -A 5 | grep value | awk '{print $2}' 2> /dev/null)
fi

configOpenrc admin $adminPasswd admin http://$keystoneIp:5000/v2.0 RegionOne > ~/joid_config/admin-openrc
chmod 0600 ~/joid_config/admin-openrc

This finds the IP address of the keystone unit 0, feeds the OpenStack admin credentials into a new file named ‘admin-openrc’ in the ‘~/joid_config/’ folder, and changes the permissions of the file. It's important to change the credentials here if you used a different password in the deployment Juju charm bundle.yaml.

neutron net-show ext-net > /dev/null 2>&1 || neutron net-create ext-net \
  --router:external=True \
  --provider:network_type flat \
  --provider:physical_network physnet1

neutron subnet-show ext-subnet > /dev/null 2>&1 || neutron subnet-create ext-net \
  --name ext-subnet --allocation-pool start=$EXTNET_FIP,end=$EXTNET_LIP \
  --disable-dhcp --gateway $EXTNET_GW $EXTNET_NET

This section creates ext-net and ext-subnet, which define the pool for the floating IPs.

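You can confirm the result afterwards with the same read-only calls the script uses::

$ neutron net-show ext-net
$ neutron subnet-show ext-subnet
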
openstack congress datasource create nova "nova" \
  --config username=$OS_USERNAME \
  --config tenant_name=$OS_TENANT_NAME \
  --config password=$OS_PASSWORD \
  --config auth_url=http://$keystoneIp:5000/v2.0

This section creates the Congress datasources for the various services. Each service datasource will have an entry in the file.

get-cloud-images
~~~~~~~~~~~~~~~~

folder=/srv/data/
sudo mkdir $folder || true

if grep -q 'virt-type: lxd' bundles.yaml; then
    URLS=" \
    http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-lxc.tar.gz \
    http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz "
else
    URLS=" \
    http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img \
    http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img \
    http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img \
    http://mirror.catn.com/pub/catn/images/qcow2/centos6.4-x86_64-gold-master.img \
    http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 \
    http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img "
fi

for URL in $URLS
do
    FILENAME=${URL##*/}
    if [ -f $folder/$FILENAME ];
    then
        echo "$FILENAME already downloaded."
    else
        wget -O $folder/$FILENAME $URL
    fi
done

This section of the script downloads the images to the Jump Host, if not already present, to be used when uploading to Glance.

**NOTE**: The image downloading and uploading might take too long and time out. In this case, use juju ssh glance/0 to log in to the glance unit 0 and run the script again, or manually run the glance commands.

joid-configure-openstack
~~~~~~~~~~~~~~~~~~~~~~~~

source ~/joid_config/admin-openrc

First, source the admin-openrc file.

# Upload images to glance
glance image-create --name="Xenial LXC x86_64" --visibility=public --container-format=bare --disk-format=root-tar --property architecture="x86_64" < /srv/data/xenial-server-cloudimg-amd64-root.tar.gz
glance image-create --name="Cirros LXC 0.3" --visibility=public --container-format=bare --disk-format=root-tar --property architecture="x86_64" < /srv/data/cirros-0.3.4-x86_64-lxc.tar.gz
glance image-create --name="Trusty x86_64" --visibility=public --container-format=ovf --disk-format=qcow2 < /srv/data/trusty-server-cloudimg-amd64-disk1.img
glance image-create --name="Xenial x86_64" --visibility=public --container-format=ovf --disk-format=qcow2 < /srv/data/xenial-server-cloudimg-amd64-disk1.img
glance image-create --name="CentOS 6.4" --visibility=public --container-format=bare --disk-format=qcow2 < /srv/data/centos6.4-x86_64-gold-master.img
glance image-create --name="Cirros 0.3" --visibility=public --container-format=bare --disk-format=qcow2 < /srv/data/cirros-0.3.4-x86_64-disk.img

This uploads the images into Glance, to be used for creating VMs.

nova flavor-delete m1.tiny
nova flavor-create m1.tiny 1 512 8 1

Adjust the tiny flavor, as the default tiny instance is too small for Ubuntu.

# configure security groups
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 default

This opens up ICMP and SSH access in the default security group.

keystone tenant-create --name demo --description "Demo Tenant"
keystone user-create --name demo --tenant demo --pass demo --email demo@demo.demo

nova keypair-add --pub-key id_rsa.pub ubuntu-keypair

This creates a project called ‘demo’ and a user called ‘demo’ in that project, and imports the key pair.

# configure external network
neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat --shared
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.5.8.5,end=10.5.8.254 --disable-dhcp --gateway 10.5.8.1 10.5.8.0/24

This section configures an external network ‘ext-net’ with a subnet called ‘ext-subnet’. In this subnet, the IP pool starts at 10.5.8.5 and ends at 10.5.8.254. DHCP is disabled. The gateway is at 10.5.8.1, and the subnet is 10.5.8.0/24. These are the public IPs that will be requested and associated to instances. Please change the network configuration according to your environment.

neutron net-create demo-net
neutron subnet-create --name demo-subnet --gateway 10.20.5.1 demo-net 10.20.5.0/24

This section creates a private network for the instances. Please change it accordingly.

neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net

This section creates a router and connects it to the two networks we just created.

# create pool of floating ips
i=0
while [ $i -ne 10 ]; do
  neutron floatingip-create ext-net
  i=$((i + 1))
done

Finally, the script requests 10 floating IPs.

configure-juju-on-openstack
~~~~~~~~~~~~~~~~~~~~~~~~~~~

This script can be used to run juju bootstrap on OpenStack, so that Juju can then be used as a model tool to deploy services and VNFs on top of OpenStack using JOID.

Appendix A: Single Node Deployment
==================================

By default, running the script ./03-maasdeploy.sh will automatically create the KVM VMs on a single machine and configure everything for you::

if [ ! -e ./labconfig.yaml ]; then
    cp ../labconfig/default/labconfig.yaml ./
    cp ../labconfig/default/deployconfig.yaml ./
fi

Please change joid/labconfig/default/labconfig.yaml accordingly. The MAAS deployment script will do the following:

1. Create a bootstrap VM.
2. Install MAAS on the jumphost.
3. Configure MAAS to enlist and commission a VM for the Juju bootstrap node.

Later, the 03-maasdeploy.sh script will create three additional VMs and register them with the MAAS server::

if [ "$virtinstall" -eq 1 ]; then
    sudo virt-install --connect qemu:///system --name $NODE_NAME --ram 8192 --cpu host --vcpus 4 \
        --disk size=120,format=qcow2,bus=virtio,io=native,pool=default \
        $netw $netw --boot network,hd,menu=off --noautoconsole --vnc --print-xml | tee $NODE_NAME

    nodemac=`grep "mac address" $NODE_NAME | head -1 | cut -d '"' -f 2`
    sudo virsh -c qemu:///system define --file $NODE_NAME

    maas $PROFILE machines create autodetect_nodegroup='yes' name=$NODE_NAME \
        tags='control compute' hostname=$NODE_NAME power_type='virsh' mac_addresses=$nodemac \
        power_parameters_power_address='qemu+ssh://'$USER'@'$MAAS_IP'/system' \
        architecture='amd64/generic' power_parameters_power_id=$NODE_NAME

    nodeid=$(maas $PROFILE machines read | jq -r '.[] | select(.hostname == '\"$NODE_NAME\"').system_id')
    maas $PROFILE tag update-nodes control add=$nodeid || true
    maas $PROFILE tag update-nodes compute add=$nodeid || true
fi

Appendix B: Automatic Device Discovery
======================================

If your bare metal servers support IPMI, they can be discovered and enlisted automatically by the MAAS server. You need to configure the bare metal servers to PXE boot on the network interface where they can reach the MAAS server. With nodes set to boot from a PXE image, they will start, look for a DHCP server, receive the PXE boot details, boot the image, contact the MAAS server and shut down.

During this process, the MAAS server will be passed information about the node, including the architecture, MAC address and other details, which will be stored in the database of nodes. You can accept and commission the nodes via the web interface. When the nodes have been accepted, the selected series of Ubuntu will be installed.

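Nodes can also be added by hand using the same maas CLI call shown in Appendix A, but with IPMI power control; a sketch, where the hostname, MAC, BMC address and credentials are placeholders::

maas $PROFILE machines create architecture='amd64/generic' \
    hostname=node-1 mac_addresses='00:11:22:33:44:55' \
    power_type='ipmi' power_parameters_power_address='10.120.0.100' \
    power_parameters_power_user='admin' power_parameters_power_pass='admin'
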
Appendix C: Machine Constraints
===============================

Juju and MAAS together allow you to assign different roles to servers, so that hardware and software can be configured according to their roles. We have briefly mentioned and used this feature in our example. Please visit Juju Machine Constraints https://jujucharms.com/docs/stable/charms-constraints and MAAS tags https://maas.ubuntu.com/docs/tags.html for more information.

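For instance, a machine carrying a specific MAAS tag can be requested directly from the Juju CLI; a sketch, assuming the tag exists in MAAS::

$ juju deploy ubuntu --constraints tags=control
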
Appendix D: Offline Deployment
==============================

When you have a limited access policy in your environment, for example when only the Jump Host has Internet access but the rest of the servers do not, we provide tools in JOID to support offline installation.

The following package set is provided to those wishing to experiment with a ‘disconnected from the internet’ setup when deploying JOID utilizing MAAS. These instructions provide basic guidance on how to accomplish the task, but it should be noted that, due to the current reliance of MAAS on DNS, the behavior and success of a deployment may vary depending on the infrastructure setup. An official guided setup is on the roadmap for the next release:

1. Get the packages from here: https://launchpad.net/~thomnico/+archive/ubuntu/ubuntu-cloud-mirrors

   **NOTE**: The mirror is quite large, about 700GB in size, and does not mirror the SDN repo/PPA.

2. Additionally, to make Juju use a private repository of charms instead of an external location, follow the guidance provided at the following link and configure environments.yaml to use cloudimg-base-url: https://github.com/juju/docs/issues/757