## Overview

The Apache Hadoop software library is a framework that allows for the
distributed processing of large data sets across clusters of computers
using a simple programming model.

This charm plugs into a workload charm to provide the
[Apache Hadoop 2.4.1](http://hadoop.apache.org/docs/r2.4.1/)
libraries and configuration for the workload to use.

## Usage

This charm is intended to be deployed via one of the
[bundles](https://jujucharms.com/q/bigdata-dev/apache?type=bundle).
For example:

    juju quickstart u/bigdata-dev/apache-analytics-sql

This will deploy the Apache Hadoop platform with a workload node running
Apache Hive, which lets you perform SQL-like queries against your data.

If you also wanted to analyze your data using Apache Pig,
you could deploy it and attach it to the same plugin:

    juju deploy cs:~bigdata-dev/apache-pig pig
    juju add-relation plugin pig


## Deploying in Network-Restricted Environments

The Apache Hadoop charms can be deployed in environments with limited network
access. To deploy in such an environment, you will need a local mirror to serve
the packages and resources required by these charms.


### Mirroring Packages

You can set up a local mirror for apt packages using squid-deb-proxy.
For instructions on configuring Juju to use it, see the
[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
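
As a rough sketch, the relevant proxy setting can be added to your
`environments.yaml` (the environment name and proxy address below are
placeholders for your own squid-deb-proxy host; see the documentation linked
above for the full set of supported options):

    environments:
      my-env:                  # your existing environment stanza
        # ... existing settings ...
        # apt proxy pointing at your squid-deb-proxy host; squid-deb-proxy
        # listens on port 8000 by default
        apt-http-proxy: http://10.0.0.1:8000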


### Mirroring Resources

In addition to apt packages, the Apache Hadoop charms require a few binary
resources, which are normally hosted on Launchpad. If access to Launchpad
is not available, the `jujuresources` library makes it easy to create a mirror
of these resources:

    sudo pip install jujuresources
    juju resources fetch --all apache-hadoop-plugin/resources.yaml -d /tmp/resources
    juju resources serve -d /tmp/resources

This will fetch all of the resources needed by this charm and serve them via a
simple HTTP server. You can then set the `resources_mirror` config option to
have the charm use this server for retrieving resources.
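
For example, assuming the mirror started above is reachable from your units at
`http://10.0.0.1:8080` (the address and port are placeholders for wherever your
mirror is actually served):

    juju set apache-hadoop-plugin resources_mirror=http://10.0.0.1:8080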

You can fetch the resources for all of the Apache Hadoop charms
(`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
`apache-hadoop-compute-slave`, `apache-hadoop-plugin`, etc) into a single
directory and serve them all with a single `juju resources serve` instance.
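
For example, a sketch of mirroring several of these charms at once (the
`resources.yaml` paths assume the corresponding charm branches are checked out
in the current directory):

    juju resources fetch --all apache-hadoop-hdfs-master/resources.yaml -d /tmp/resources
    juju resources fetch --all apache-hadoop-yarn-master/resources.yaml -d /tmp/resources
    juju resources fetch --all apache-hadoop-compute-slave/resources.yaml -d /tmp/resources
    juju resources fetch --all apache-hadoop-plugin/resources.yaml -d /tmp/resources
    juju resources serve -d /tmp/resources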


## Contact Information

[Big Data Juju mailing list](mailto:bigdata-dev@lists.launchpad.net)


## Hadoop

- [Apache Hadoop](http://hadoop.apache.org/) home page
- [Apache Hadoop bug trackers](http://hadoop.apache.org/issue_tracking.html)
- [Apache Hadoop mailing lists](http://hadoop.apache.org/mailing_lists.html)
- [Apache Hadoop Juju Charm](http://jujucharms.com/?text=hadoop)