service nova-compute restart
Configure Neutron to communicate with the Bare Metal Server
===========================================================
Neutron needs to be configured so that the bare metal server can communicate
with the OpenStack Networking service for DHCP, PXE boot, and other
requirements. This section describes how to configure Neutron for a single
flat network use case for bare metal provisioning.

You will also need to provide Ironic with the MAC address(es) of each Node
that it is provisioning; Ironic in turn will pass this information to
Neutron for DHCP and PXE boot configuration. An example of this is shown in
the `Enrollment`_ section.
#. Edit ``/etc/neutron/plugins/ml2/ml2_conf.ini`` and modify these::

[ml2]
type_drivers = flat
tenant_network_types = flat
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = physnet1

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2
# Replace eth2 with the interface on the neutron node which you
# are using to connect to the bare metal server
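After saving these changes, you will likely need to restart the Neutron
server for the new ML2 settings to take effect (the service name may vary
by distribution; ``neutron-server`` is assumed here)::

service neutron-server restart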
#. Add the integration bridge to Open vSwitch::

ovs-vsctl add-br br-int
#. Create the br-eth2 network bridge to handle communication between the
OpenStack (and Bare Metal) services and the bare metal nodes, using eth2.
Replace eth2 with the interface on the neutron node which you are
using to connect to the Bare Metal Service::

ovs-vsctl add-br br-eth2
ovs-vsctl add-port br-eth2 eth2
#. Restart the Open vSwitch agent::

service neutron-plugin-openvswitch-agent restart
#. On restarting the Neutron Open vSwitch agent, the veth pair between
the bridges br-int and br-eth2 is automatically created.

Your Open vSwitch bridges should look something like this after
following the above steps::

$ ovs-vsctl show

    Bridge br-int
        Port "int-br-eth2"
            Interface "int-br-eth2"
                type: patch
                options: {peer="phy-br-eth2"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth2"
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
                type: patch
                options: {peer="int-br-eth2"}
        Port "eth2"
            Interface "eth2"
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
#. Create the flat network on which you are going to launch the
instances::

neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
--provider:network_type flat --provider:physical_network physnet1
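Instances will obtain their IP addresses over DHCP on this network, so you
will typically also create a subnet on it. A minimal sketch, assuming an
illustrative 192.168.2.0/24 range; adjust the CIDR, gateway, and allocation
pool to your environment::

neutron subnet-create --tenant-id $TENANT_ID sharednet1 192.168.2.0/24 \
--name baremetal-subnet --ip-version 4 --gateway 192.168.2.1 \
--allocation-pool start=192.168.2.10,end=192.168.2.200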
Image Requirements
==================

Bare Metal provisioning requires two sets of images: the deploy images
and the user images. The deploy images are used by the Bare Metal Service
to prepare the bare metal server for actual OS deployment, whereas the
user images are installed on the bare metal server to be used by the
end user. Below are the steps to create the required images and add
them to the Glance service:
1. The `disk-image-builder`_ can be used to create images required for
deployment and the actual OS which the user is going to run.

.. _disk-image-builder: https://github.com/openstack/diskimage-builder

*Note:* `tripleo-incubator`_ provides a `script`_ to install all the
dependencies for the disk-image-builder.

.. _tripleo-incubator: https://github.com/openstack/tripleo-incubator
.. _script: https://github.com/openstack/tripleo-incubator/blob/master/scripts/install-dependencies
- Clone the project and run the subsequent commands from the project
directory::

git clone https://github.com/openstack/diskimage-builder.git
cd diskimage-builder
- Build the image your users will run (an Ubuntu image is used as
an example here)::

bin/disk-image-create -u ubuntu -o my-image

The above command creates a *my-image.qcow2* file. If you want to use a
Fedora image, replace *ubuntu* with *fedora* in the above command.
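For example, the equivalent Fedora build would be::

bin/disk-image-create -u fedora -o my-image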
- Extract the kernel & ramdisk::

bin/disk-image-get-kernel -d ./ -o my \
-i $(pwd)/my-image.qcow2

The above command creates *my-vmlinuz* and *my-initrd* files. These
images are used while deploying the actual OS the users will run,
my-image in our case.
- Build the deploy image::

bin/ramdisk-image-create ubuntu deploy-ironic \
-o my-deploy-ramdisk

The above command creates *my-deploy-ramdisk.kernel* and
*my-deploy-ramdisk.initramfs* files which are used initially for
preparing the server (creating disk partitions) before the actual
OS deploy. If you want to use a Fedora image, replace *ubuntu* with
*fedora* in the above command.
2. Add the user images to glance

Load all of the images created above into Glance, and note the glance
image UUIDs for each one as it is generated (a scripted way to capture
them is sketched after step 3).
- Add the kernel and ramdisk images to glance::

glance image-create --name my-kernel --public \
--disk-format aki < my-vmlinuz

Store the image UUID obtained from the above step as
*$MY_VMLINUZ_UUID*, then add the ramdisk image::

glance image-create --name my-ramdisk --public \
--disk-format ari < my-initrd

Store the image UUID obtained from the above step as
*$MY_INITRD_UUID*.
- Add *my-image* to glance; this is the OS image that the user is going
to run. Also associate the kernel and ramdisk images created above with
this OS image. Both operations can be done by executing the following
command::

glance image-create --name my-image --public \
--disk-format qcow2 --container-format bare --property \
kernel_id=$MY_VMLINUZ_UUID --property \
ramdisk_id=$MY_INITRD_UUID < my-image.qcow2
3. Add the deploy images to glance

Add the *my-deploy-ramdisk.kernel* and
*my-deploy-ramdisk.initramfs* images to glance::

glance image-create --name deploy-vmlinuz --public \
--disk-format aki < my-deploy-ramdisk.kernel

Store the image UUID obtained from the above step as
*$DEPLOY_VMLINUZ_UUID*, then add the deploy ramdisk::

glance image-create --name deploy-initrd --public \
--disk-format ari < my-deploy-ramdisk.initramfs

Store the image UUID obtained from the above step as
*$DEPLOY_INITRD_UUID*.
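Rather than copying the UUIDs by hand, you can capture them into the shell
variables used throughout this guide. A sketch, assuming the image names
used above and the usual ``glance image-show`` table output::

MY_VMLINUZ_UUID=$(glance image-show my-kernel | awk '/ id /{print $4}')
MY_INITRD_UUID=$(glance image-show my-ramdisk | awk '/ id /{print $4}')
DEPLOY_VMLINUZ_UUID=$(glance image-show deploy-vmlinuz | awk '/ id /{print $4}')
DEPLOY_INITRD_UUID=$(glance image-show deploy-initrd | awk '/ id /{print $4}')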
Flavor Creation
===============

You'll need to create a special Bare Metal flavor in Nova. The flavor is
mapped to the bare metal server through the hardware specifications.

#. Change these to match your hardware::
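# Example values only; adjust these to match your node's hardware
RAM_MB=1024
CPU=2
DISK_GB=100
ARCH=x86_64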
#. Create the baremetal flavor by executing the following command::

nova flavor-create my-baremetal-flavor auto $RAM_MB $DISK_GB $CPU

*Note: You can replace auto with your own flavor id.*
#. A flavor can include a set of key/value pairs called extra_specs.
With the Icehouse version of Ironic, you need to associate the
deploy ramdisk and deploy kernel images with the flavor as flavor keys.
With Juno and later versions this is deprecated: because these images
may vary between nodes in a heterogeneous environment, the deploy kernel
and ramdisk images should instead be associated with each node's
driver_info.
- **Icehouse** version of Ironic::

nova flavor-key my-baremetal-flavor set \
cpu_arch=$ARCH \
"baremetal:deploy_kernel_id"=$DEPLOY_VMLINUZ_UUID \
"baremetal:deploy_ramdisk_id"=$DEPLOY_INITRD_UUID
- **Juno** and higher versions of Ironic::

nova flavor-key my-baremetal-flavor set cpu_arch=$ARCH

Associate the deploy ramdisk and deploy kernel images with each of your
ironic nodes::

ironic node-update $NODE_UUID add \
driver_info/pxe_deploy_kernel=$DEPLOY_VMLINUZ_UUID \
driver_info/pxe_deploy_ramdisk=$DEPLOY_INITRD_UUID
Set up the drivers for the Bare Metal Service
=============================================
The following types of sensor data can be collected from nodes and sent
to Ceilometer:

* Temperature
* Fan
* Voltage
* Current
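Sending sensor data is disabled by default and is controlled by options in
the ``[conductor]`` section of ``ironic.conf``. A sketch of enabling it
(option names as of the Juno/Kilo releases; verify against your release's
sample configuration)::

[conductor]
# Enable sending sensor data to Ceilometer
send_sensor_data = true
# Seconds between sensor data collections
send_sensor_data_interval = 600
# Restrict which sensor types are sent; the default, ALL, sends everything
send_sensor_data_types = Temperature,Fan,Voltage,Current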
.. _boot_mode_support:

Boot mode support
-----------------

Some drivers support setting the boot mode (Legacy BIOS or UEFI).
The boot modes can be configured in Ironic in the following way:

* When no boot mode setting is provided, these drivers default the boot_mode
to Legacy BIOS.

* Only one boot mode (either ``uefi`` or ``bios``) can be configured for
the node.
* If the operator wants a node to boot always in ``uefi`` mode or ``bios``
mode, then they may use the ``capabilities`` parameter within the
``properties`` field of an Ironic node. The operator must manually set the
appropriate boot mode on the bare metal node.
To configure a node in ``uefi`` mode, set ``capabilities`` as below::

ironic node-update <node-uuid> add properties/capabilities='boot_mode:uefi'
Nodes having ``boot_mode`` set to ``uefi`` may be requested by adding an
``extra_spec`` to the Nova flavor::

nova flavor-key ironic-test-3 set capabilities:boot_mode="uefi"
nova boot --flavor ironic-test-3 --image test-image instance-1
If ``capabilities`` is used in ``extra_spec`` as above, the Nova scheduler
(``ComputeCapabilitiesFilter``) will match only Ironic nodes which have
the ``boot_mode`` set appropriately in ``properties/capabilities``. It will
filter out the rest of the nodes.
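``ComputeCapabilitiesFilter`` is part of Nova's default filter set, but if
you have overridden the scheduler filters in ``nova.conf``, make sure it is
still listed. A sketch; the surrounding filters are illustrative only::

[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter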
The above facility for matching in Nova can be used in heterogeneous
environments where there is a mix of ``uefi`` and ``bios`` machines, and
the operator wants to provide a choice of boot modes to the user. If
the flavor doesn't contain ``boot_mode`` but ``boot_mode`` is configured
for Ironic nodes, then the Nova scheduler will consider all nodes and the
user may get either a ``bios`` or a ``uefi`` machine.
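For example, the ``bios`` counterpart of the ``uefi`` flavor shown above
might look like this; the flavor and instance names are illustrative::

ironic node-update <node-uuid> add properties/capabilities='boot_mode:bios'
nova flavor-key ironic-test-4 set capabilities:boot_mode="bios"
nova boot --flavor ironic-test-4 --image test-image instance-2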
Enrollment
==========

After all services have been properly configured, you should enroll your
hardware with Ironic, and confirm that the Compute service sees the
available hardware.
When enrolling Nodes with Ironic, note that the Compute service will not
be immediately notified of the new resources. Nova's resource tracker
syncs periodically, and so any changes made directly to Ironic's resources
will become visible in Nova only after the next run of that periodic task.
More information is in the `Troubleshooting`_ section below.

Any Ironic Node that is visible to Nova may have a workload scheduled to it,
if both the ``power`` and ``deploy`` interfaces pass the ``validate`` check.
If you wish to exclude a Node from Nova's scheduler, for instance so that
you can perform maintenance on it, you can set the Node to "maintenance" mode.
For more information see the `Troubleshooting`_ section below.
Some steps are shown separately for illustration purposes, and may be
combined if desired.

#. Create a Node in Ironic. At minimum, you must specify the driver name
(e.g., "pxe_ipmitool"). This will return the node UUID::

ironic node-create -d pxe_ipmitool
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| uuid         | dfc6189f-ad83-4261-9bda-b27258eb1987 |
| driver       | pxe_ipmitool                         |
+--------------+--------------------------------------+
#. Update the Node ``driver_info`` so that Ironic can manage the node. Different
drivers may require different information about the node. You can determine this
with the ``driver-properties`` command, as follows::

ironic driver-properties pxe_ipmitool
+----------------------+-------------------------------------------------------------------------------------------------------------+
| Property             | Description                                                                                                 |
+----------------------+-------------------------------------------------------------------------------------------------------------+
| ipmi_address         | IP address or hostname of the node. Required.                                                               |
| ipmi_password        | password. Optional.                                                                                         |
| ipmi_username        | username; default is NULL user. Optional.                                                                   |
| pxe_deploy_kernel    | UUID (from Glance) of the deployment kernel. Required.                                                      |
| pxe_deploy_ramdisk   | UUID (from Glance) of the ramdisk that is mounted at boot time. Required.                                   |
+----------------------+-------------------------------------------------------------------------------------------------------------+

ironic node-update $NODE_UUID add \
driver_info/ipmi_username=$USER \
driver_info/ipmi_password=$PASS \
driver_info/ipmi_address=$ADDRESS

Note that you may also specify all ``driver_info`` parameters during
``node-create`` by passing the **-i** option multiple times.
#. Update the Node's properties to match the baremetal flavor you created
earlier::

ironic node-update $NODE_UUID add \
properties/cpus=$CPU \
properties/memory_mb=$RAM_MB \
properties/local_gb=$DISK_GB \
properties/cpu_arch=$ARCH

As above, these can also be specified at node creation by passing the **-p**
option to ``node-create`` multiple times.
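Combining the notes above, a node can also be created with its
``driver_info`` and properties populated in a single step; a sketch using
the same variables::

ironic node-create -d pxe_ipmitool \
-i ipmi_address=$ADDRESS -i ipmi_username=$USER -i ipmi_password=$PASS \
-p cpus=$CPU -p memory_mb=$RAM_MB -p local_gb=$DISK_GB -p cpu_arch=$ARCH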
#. If you wish to perform more advanced scheduling of instances based on
hardware capabilities, you may add metadata to each Node that will be
exposed to the Nova Scheduler (see: `ComputeCapabilitiesFilter`_). A full
explanation of this is outside of the scope of this document. It can be done
through the special ``capabilities`` member of Node properties::

ironic node-update $NODE_UUID add \
properties/capabilities=key1:val1,key2:val2
#. As mentioned in the `Flavor Creation`_ section, if using the Juno or later
release of Ironic, you should specify a deploy kernel and ramdisk which
correspond to the Node's driver, e.g.::

ironic node-update $NODE_UUID add \
driver_info/pxe_deploy_kernel=$DEPLOY_VMLINUZ_UUID \
driver_info/pxe_deploy_ramdisk=$DEPLOY_INITRD_UUID
#. You must also inform Ironic of the Network Interface Cards which are part of
the Node by creating a Port with each NIC's MAC address. These MAC
addresses are passed to Neutron during instance provisioning and used to
configure the network appropriately::

ironic port-create -n $NODE_UUID -a $MAC_ADDRESS
#. To check if Ironic has the minimum information necessary for a Node's driver
to function, you may ``validate`` it::

ironic node-validate $NODE_UUID

+------------+--------+--------+
| Interface  | Result | Reason |
+------------+--------+--------+
| management | True   |        |
+------------+--------+--------+

If the Node fails validation, each driver will return information as to why it failed::

ironic node-validate $NODE_UUID

+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+
| Interface  | Result | Reason                                                                                                                              |
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+
| console    | None   | not supported                                                                                                                       |
| deploy     | False  | Cannot validate iSCSI deploy. Some parameters were missing in node's instance_info. Missing are: ['root_gb', 'image_source']        |
| management | False  | Missing the following IPMI credentials in node's driver_info: ['ipmi_address'].                                                     |
| power      | False  | Missing the following IPMI credentials in node's driver_info: ['ipmi_address'].                                                     |
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+
.. _ComputeCapabilitiesFilter: http://docs.openstack.org/developer/nova/devref/filter_scheduler.html?highlight=computecapabilitiesfilter
Troubleshooting
===============

Once all the services are running and configured properly, and a Node is
enrolled with Ironic, the Nova Compute service should detect the Node as an
available resource and expose it to the scheduler.
There is a delay, and it may take up to a minute (one periodic task cycle)
for Nova to recognize any changes in Ironic's resources (both additions
and deletions).

In addition to watching ``nova-compute`` log files, you can see the available
resources by looking at the list of Nova hypervisors. The resources reported
therein should match the Ironic Node properties, and the Nova Flavor.
Here is an example set of commands to compare the resources in Nova and Ironic::

$ ironic node-list
+--------------------------------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------------+-------------+--------------------+-------------+
| 86a2b1bb-8b29-4964-a817-f90031debddb | None          | power off   | None               | False       |
+--------------------------------------+---------------+-------------+--------------------+-------------+
$ ironic node-show 86a2b1bb-8b29-4964-a817-f90031debddb
+------------------------+----------------------------------------------------------------------+
| Property               | Value                                                                |
+------------------------+----------------------------------------------------------------------+
| instance_uuid          | None                                                                 |
| properties             | {u'memory_mb': u'1024', u'cpu_arch': u'x86_64', u'local_gb': u'10',  |
|                        | u'cpus': u'1'}                                                       |
| maintenance            | False                                                                |
| driver_info            | { [SNIP] }                                                           |
| last_error             | None                                                                 |
| created_at             | 2014-11-20T23:57:03+00:00                                            |
| target_provision_state | None                                                                 |
| driver                 | pxe_ipmitool                                                         |
| updated_at             | 2014-11-21T00:47:34+00:00                                            |
| instance_info          | {}                                                                   |
| chassis_uuid           | 7b49bbc5-2eb7-4269-b6ea-3f1a51448a59                                 |
| provision_state        | None                                                                 |
| reservation            | None                                                                 |
| power_state            | power off                                                            |
| console_enabled        | False                                                                |
| uuid                   | 86a2b1bb-8b29-4964-a817-f90031debddb                                 |
+------------------------+----------------------------------------------------------------------+
$ nova hypervisor-show 1
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| cpu_info                | baremetal cpu                        |
| current_workload        | 0                                    |
| disk_available_least    | -                                    |
| free_disk_gb            | 10                                   |
| free_ram_mb             | 1024                                 |
| host_ip                 | [ SNIP ]                             |
| hypervisor_hostname     | 86a2b1bb-8b29-4964-a817-f90031debddb |
| hypervisor_type         | ironic                               |
| hypervisor_version      | 1                                    |
| local_gb_used           | 0                                    |
| memory_mb               | 1024                                 |
| memory_mb_used          | 0                                    |
| service_disabled_reason | -                                    |
| service_host            | my-test-host                         |
| status                  | enabled                              |
+-------------------------+--------------------------------------+
If you need to take a Node out of the resource pool and prevent Nova from
placing a tenant instance upon it, you can mark the Node as in "maintenance"
mode with the following command. This also prevents Ironic from executing
periodic tasks which might affect the node, until maintenance mode is
disabled::

$ ironic node-set-maintenance $NODE_UUID on
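When maintenance is complete, return the Node to the resource pool by
turning maintenance mode off::

$ ironic node-set-maintenance $NODE_UUID off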