environment variable, you do not need to install anything.

mojo.py accepts a '--project' argument - this tells it which mojo project
should be used. If you plan on calling mojo.py directly, you will need to
specify the mojo project name.

However, cd.py contains a feature where mojo projects are stored in a pool.
This prevents the need to have a 1:1 relationship between services and
mojo projects, and allows us to use the mojo projects we have in the best
possible way. To use the mojo project pool feature:

1. Create the pool directory::

       mkdir ~/mojo_project_pool

2. Create the pool lock directory::

       mkdir ~/mojo_project_pool/locks

3. For each mojo project that you want to use in the pool, create an empty
   file in the project pool directory. For example, if you have a mojo
   project called 'ci-0'::

       touch ~/mojo_project_pool/ci-0

4. When calling cd.py, pass the `--use-project-pool` argument, and *don't*
   pass the `--project` option.
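
Putting those steps together, a minimal pool setup with three pooled mojo
projects (the 'ci-0' to 'ci-2' project names here are just examples) would
look like::

    mkdir -p ~/mojo_project_pool/locks
    for project in ci-0 ci-1 ci-2; do
        touch ~/mojo_project_pool/"$project"
    done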

Most mojo specs require configuration and other files to be present in the
`/srv/mojo/LOCAL/<mojo_project>/<mojo_stage>/` directory. cd.py will expect
all these files to be in a bzr repository. To set up service config:

1. Create the config repository::

       bzr init ~/service_configs

2. Within that repository, create a directory that matches the full mojo
   stage you're deploying. For example::

       mkdir -p ue/mojo-ue-snappy-proposed-selftest-agent/devel

3. Add whatever config files, data files, or other secrets you need to that
   directory. Commit your changes, and cd.py will ensure those files are
   present when mojo copies them to the spec workspace.
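
An end-to-end sketch of those steps (the config.yaml file name below is only
a placeholder for whatever files your spec needs)::

    bzr init ~/service_configs
    cd ~/service_configs
    mkdir -p ue/mojo-ue-snappy-proposed-selftest-agent/devel
    cp ~/config.yaml ue/mojo-ue-snappy-proposed-selftest-agent/devel/
    bzr add .
    bzr commit -m "Add devel config for the selftest agent"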

Creating new Mojo Projects
--------------------------

If working locally, you will have to create the shared mojo project called
"mojo-stg-ue-ci-engineering" used for deployments. On production jumphosts,
mojo projects are managed by IS. You can choose a different project name,
but that will force you to pass it via '--project' to `mojo.py`.
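
A sketch of creating that project locally with the mojo CLI (the series and
container values here are assumptions; adjust them to your environment)::

    mojo project-new mojo-stg-ue-ci-engineering -s trusty -c containerless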

Deploying services with mojo.py
-------------------------------

The mojo.py script handles actually deploying services using mojo. On a
jumphost, it's rarely called directly, but rather called from cd.py.

To deploy the "mojo-ue-ci-rabbit" service::

    $ ./mojo.py \
        --stage ue/mojo-ue-ci-rabbitmq/devel \
        --branch lp:~canonical-ci-engineering/canonical-mojo-specs/trunk \
        --base ~/juju-environments \
        --devel
    # '--devel' only if you're deploying outside PS4, e.g. on bootstack

Get the IP address of the rabbit server using its dedicated environment::

    # NOTE: The SHA1_HASH below will vary according to the deployment contents
    $ JUJU_HOME=~/juju-environments/mojo-ue-ci-rabbit-{SHA1_HASH} \
      juju status | grep public-address
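
To feed that address into the next step, you can capture it in the AMQP_IP
variable used below (a small sketch; the awk field assumes juju prints lines
of the form "public-address: <ip>")::

    AMQP_IP=$(JUJU_HOME=~/juju-environments/mojo-ue-ci-rabbit-{SHA1_HASH} \
        juju status | grep public-address | head -n1 | awk '{print $2}')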

Deploy the "adt-cloud-worker"::

    $ ./mojo.py \
        --stage ue/mojo-ue-adt-cloud-worker/devel \
        --branch lp:~canonical-ci-engineering/canonical-mojo-specs/trunk \
        --network 415a0839-eb05-4e7a-907c-413c657f4bf5 \
        --base ~/juju-environments \
        --amqp-uris "amqp://guest:guest@${AMQP_IP}:5672//" \
        --devel
    # '--devel' only if you're deploying outside PS4, e.g. on bootstack
Check if "adt-cloud-worker" is correctly configured to access the
62
previously deployed "ci-rabbit"::
64
$ JUJU_HOME=/srv/juju-environments/mojo-ue-adt-cloud-worker-{SHA1_HASH}/ \
65
juju run --unit adt-cloud-worker/0 \
66
"tail /srv/adt-cloud-worker/logs/adt-cloud-worker.log"
70
Connected to amqp://guest:**@<<< GIVEN RABBIT IP ADDR >>>:5672//

The `--stage` option specifies both the mojo spec and the mojo stage. It
reads the mojo spec from whatever is specified in the `--branch` option.

The `--network` option is used for auto-generated config.

Inspecting services with list.py and juju
-----------------------------------------

The list.py script gives us a list of everything that's been deployed. An
example output is shown below::

    $ ./adt-continuous-deployer/list.py
    * mojo-ue-snappy-proposed-image-builder:
        27c224e5: Thu Jun 4 21:14:15 2015 (2 units)
        705f3930: Thu Jun 4 18:28:16 2015 (2 units)
        c9ab5b8c: Thu Jun 4 18:14:15 2015 (2 units)
        deeb15d9: Wed Jun 3 19:35:17 2015 (1 units)
        a45035f8: Wed Jun 3 19:05:15 2015 (2 units)
    * mojo-ue-snappy-proposed-image-tester:
        58dd9477: Wed Jun 3 18:30:47 2015 (2 units)
    * mojo-ue-snappy-proposed-result-checker:
        d021d997: Thu Jun 4 01:04:01 2015 (2 units)
    * mojo-ue-snappy-proposed-selftest-agent:
        35edcc15: Thu Jun 4 20:40:32 2015 (2 units)
        8b84a34a: Wed Jun 3 22:20:32 2015 (2 units)

The first level of output is the service name. The second level is the
deployment hash. The hash is calculated from all the bzr revnos of all the
branches involved in the deployment. Here we can see several parallel
deployments of the 'mojo-ue-snappy-proposed-image-builder' service.

We can combine the service name and a deployment hash to get the juju home
directory. This allows us to run arbitrary juju commands::

    JUJU_HOME=~/juju-environments/mojo-ue-snappy-proposed-image-builder-27c224e5 \
      juju status

This is useful for several tasks:

* Inspecting why a service didn't deploy properly.
* Finding the public IP address of a server.
* Logging in to a deployed service with `juju ssh`.
* Running arbitrary commands on the deployed service with `juju run`.
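
For example, finding a unit's public address and logging in might look like
this (the unit name here is a guess based on the service name above)::

    export JUJU_HOME=~/juju-environments/mojo-ue-snappy-proposed-image-builder-27c224e5
    juju status | grep public-address
    juju ssh snappy-proposed-image-builder/0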

Continuous Delivery with cd.py
------------------------------

Continuous and isolated deployments can be done with `cd.py`, which can
periodically (via cron, for example) determine if a mojo spec, or any of the
branches it deploys, has been updated, and call mojo.py to deploy the new
revision. An example crontab setup is shown below::

    stg-ue-ci-engineering@wendigo:~$ crontab -l
    NET_ID="79126fa3-b675-4f68-b7ec-c5f4be5dbe0e"
    CI_SPECS_BRANCH="lp:~canonical-ci-engineering/canonical-mojo-specs/trunk"
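
The crontab above only shows the variables it defines; a representative
cd.py entry (the exact arguments are an assumption here, modeled on the
monitor.py entry shown later) might look like::

    */15 * * * * . $HOME/.novarc; ~/adt-continuous-deployer/cd.py --use-project-pool mojo-ue-snappy-proposed-image-builder >> ~/image-builder-cd.log 2>&1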

For each deployment, cd.py records the branches involved and their revnos in
a hash file, with contents like::

    lp:~canonical-sysadmins/basenode/trunk 80
    lp:~canonical-sysadmins/basenode/trunk 80

This information is also written to the nova instance(s) metadata field.

To figure out what was deployed for a particular juju environment name or
machine name, we have to look up its hash file and inspect its contents.

Monitoring deployed services with monitor.py
--------------------------------------------

The `monitor.py` script is used to monitor the number of deployments.
Eventually we will automatically destroy old deployments, but right now this
is not enabled. The policy in monitor.py is to allow two deployments (a
current and an old revision), and to destroy the rest.

A cron job on wendigo employing `monitor.py` looks like the following::

    8-59/20 * * * * . $HOME/.novarc; export CI_LOGSTASH_HOST=$INTERNAL_LOGSTASH_HOST; ~/adt-continuous-deployer/monitor.py mojo-ue-core-result-checker >> ~/core-result-checker.log 2>&1

Destroying environments with reaper.py
--------------------------------------

The `reaper.py` script is used to destroy a deployment that was deployed
using `cd.py`. The only thing you need to specify is the deployment hash, as
shown below::

    ./reaper.py 4c29c6aa
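
For example, to reap one of the older image-builder deployments listed
earlier (assuming reaper.py lives alongside the other scripts in
~/adt-continuous-deployer)::

    # pick a stale hash from the list.py output, then destroy that deployment
    ./adt-continuous-deployer/list.py
    ./adt-continuous-deployer/reaper.py 705f3930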