Creating a distributed Redis system using Docker


A common problem I face on a daily basis is a lack of hardware/resources to test things out to the fullest. For example, in days gone by I'd have needed 3 physical servers for what I'm about to do, and in more recent times, 3 virtual machines. I don't have the time to continuously build these, nor the resources if we were going physical. This is where my new-found interest in Docker can help me out!

What I want to do on my Ubuntu 'host' server is create 3 Docker containers running Redis and link them all together, so that I can then develop and test the best way to monitor horizontally scaled Redis. Below I will show you how I've done it, and the benefits (even beauty) of it!

1. Download and configure our base image

For my base images, I use the excellent 'phusion' image, which adds in a lot of features that Docker omits (by design or not), including ordered startup of services and more. For more information, see the phusion/baseimage-docker project on GitHub.

First, let's pull the phusion image using Docker, as below:
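Assuming the standard image name on the Docker Hub:

    docker pull phusion/baseimage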

On completion, you should be able to run 'docker images' and see the new image, as below:
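Something along these lines – the image ID and size will differ on your machine:

    docker images
    REPOSITORY          TAG      IMAGE ID     CREATED      VIRTUAL SIZE
    phusion/baseimage   latest   <image-id>   ...          ...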

This image is great as you can easily tell it to start up certain services when the container starts, which is a bit of a pain in the arse otherwise (in my other blog I outlined how, when using a standard base image, you need to use bash to essentially start things!). One of the things I wanted to add to this image is the ability to log in via SSH using a password instead of keys; given this is only going to be running on a locally contained box, I'm not too worried about security!

To do this on phusion, we need to create an instance of phusion, log in via SSH and modify the config, then restart SSH. We will then be able to log in as the root user.

First, create a new container from the phusion base image:
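A sketch of the run command; I've named the build container 'redis' here, which matches the name used when we stop and remove it later:

    docker run -d --name redis phusion/baseimage /sbin/my_init --enable-insecure-key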

The '/sbin/my_init' is what allows phusion to start your services on boot of the container. The '--enable-insecure-key' flag allows us to SSH into the new container using an 'insecure key' (see below).

Now that the container has been deployed, we need to get its IP address. To do this, use ‘docker inspect’ as below:
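For example, using a Go template to pull out just the IP:

    docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis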

Next, let's pull down the 'insecure key' and use it to log in to our new container:
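Roughly as follows – the exact location of insecure_key inside the phusion/baseimage-docker repository has moved between releases, so treat the URL as an assumption, and substitute the container IP you found above:

    # grab phusion's published insecure key and lock down its permissions
    curl -o insecure_key -fSL https://github.com/phusion/baseimage-docker/raw/master/image/insecure_key
    chmod 600 insecure_key
    # log in as root using the insecure key (replace with your container's IP)
    ssh -i insecure_key root@172.17.0.x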

Congratulations, you are now SSH'd into the container! Next we need to modify SSH so we don't need to use the insecure key in the future. To do this, open up /etc/ssh/sshd_config and find and uncomment the following line:
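The relevant settings look like this once uncommented/changed (we want both root login and password authentication allowed):

    PermitRootLogin yes
    PasswordAuthentication yes

After saving, restart SSH inside the container (service ssh restart) so the change takes effect.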

Then save the file and exit the text editor. Finally, we need to give our root user a password. To do this run ‘passwd’ as below:
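For example:

    passwd
    # enter and confirm the new root password when prompted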

And that's that for SSH on our base image. Next we will install Redis on the container.

2. Install Redis

Now that we have our base image configured, we will need to download and install Redis. For my example I am using the latest version of Redis, which can be downloaded from the Redis downloads page. Specifically, I will be using Redis 3.0.0 RC1 from this link: https://github.com/antirez/redis/archive/3.0.0-rc1.tar.gz

Note: You will need to install wget, gcc, make and a few other tools – these are proper, minimal base images :)
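For example, inside the container:

    apt-get update
    apt-get install -y wget gcc make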

First we will need to download Redis 3.0.0 RC1:
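Using the link above:

    wget https://github.com/antirez/redis/archive/3.0.0-rc1.tar.gz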

Next, let's unpack it and build it:
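The GitHub archive should unpack into a redis-3.0.0-rc1 directory (the directory name is an assumption):

    tar xzf 3.0.0-rc1.tar.gz
    cd redis-3.0.0-rc1
    make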

Finally, move the binaries into /usr/bin/ so we can access them system-wide (easier!):
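A sketch – we also create /etc/redis and drop the bundled config there, since that is the path used throughout the rest of this post:

    cp src/redis-server src/redis-cli /usr/bin/
    mkdir -p /etc/redis
    cp redis.conf /etc/redis/redis.conf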

Now open up /etc/redis/redis.conf, find the line 'daemonize no' and change it to 'daemonize yes'. Next, let's start it up!
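For example:

    redis-server /etc/redis/redis.conf
    redis-cli ping   # should reply PONG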

And that's that – Redis is now installed and running on your container, using the config file at /etc/redis/redis.conf!

3. Tell Docker to start services on boot

Now that our container is running Redis and has SSH access, we need to tell it to start Redis automatically on 'start' of the container.

To do this using the phusion base image, it's actually remarkably simple. Because we are starting the container using /sbin/my_init, it will run anything we put in /etc/service/* on start, which is excellent. So, for our situation we need to create a new folder and 'run' file here, as below:
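For example:

    mkdir -p /etc/service/redis
    touch /etc/service/redis/run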

Within run, we need to paste the following:
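A minimal sketch of the run file. The runit supervisor behind /etc/service expects the process to stay in the foreground, so the script overrides the 'daemonize yes' we set earlier:

    #!/bin/sh
    # started by runit via /sbin/my_init; keep redis in the foreground
    exec /usr/bin/redis-server /etc/redis/redis.conf --daemonize no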

Finally, remember to set the run file to be executable:
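For example:

    chmod +x /etc/service/redis/run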

And that's all we need to do! Now, let's commit this image to our local library so we can start deploying it en masse!

4. Commit your image so you can re-use it

Now that our ‘redis’ container is working a treat, we want to save it as a pseudo ‘template’ so we can deploy it over and over again on Docker – VERY quickly.

Doing this is remarkably simple: we just 'docker commit' the container to our local library, as below:
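For example, committing the 'redis' build container to a local image – the image name 'redis-phusion' is just my choice for this walkthrough:

    docker commit redis redis-phusion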

There we have it – our image is saved locally. Next we can deploy it over and over to our heart's content, as below.

5. Deploy 3 instances of your new template

For clarity's sake, I want to delete my 'build' container now, as it would be confusing to have it around in the future. To do this, first 'docker stop redis' and then 'docker rm redis', as below:
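For example:

    docker stop redis
    docker rm redis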

Simples. Next, let's deploy 3 instances of our template. I like to port forward using Docker so I can access each instance using my host server's IP. So for 3 instances, I am going to associate:

  • node1 with hostip:7001
  • node2 with hostip:7002
  • node3 with hostip:7003

That way I can hit 'redis-cli -h 192.168.0.2 -p 7001' and it will redirect to '172.17.0.x -p 6379', for example. Below, I have deployed 3 instances from my image:
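A sketch, assuming the committed image is called 'redis-phusion' as above:

    docker run -d --name node1 -p 7001:6379 redis-phusion /sbin/my_init
    docker run -d --name node2 -p 7002:6379 redis-phusion /sbin/my_init
    docker run -d --name node3 -p 7003:6379 redis-phusion /sbin/my_init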

Now I can do ‘docker ps -as’ and view all 3 of my new, shiny Redis containers:

Here we can see 3 Redis containers, named node1, node2 and node3 – each with a dedicated port (7001-7003) that maps straight to the Redis server's port.

To test this is working, we can use 'redis-cli' on another server and try to log in to the 'host server' using port 7001 and the host server's 192.168.x.x IP, as below:
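For example, from another machine:

    redis-cli -h 192.168.0.2 -p 7001
    192.168.0.2:7001> ping
    PONG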

As you can see, I can connect through to the 3 instances using their port plus the host server's IP (all Docker IPs are on the 172.17.0.0/24 range). When I try an incorrect port it fails (just to prove that not ALL ports end up in a Redis server!!).

6. Configure Redis slaves

Now that we have 3 individual Redis servers, we want to link them together so that we can test scalability/monitoring of clusters, or whatever our reason is for setting this all up! :)

We will need to configure node2 and node3 to be slaves of node1 which will be our master; we can do this easily by modifying /etc/redis/redis.conf.

To do this we will take advantage of our SSH configuration earlier – simply find out the IP address of the containers for node2 and node3, SSH into them, modify the config and then restart the container, as below:
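For example, for node2 (repeat for node3):

    docker inspect -f '{{ .NetworkSettings.IPAddress }}' node2
    ssh root@172.17.0.x          # the IP returned above
    vi /etc/redis/redis.conf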

In this config file we need to find the line ‘slaveof’ and uncomment and modify it to the IP address of node1 – similar to below:
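Assuming node1's container IP turned out to be 172.17.0.2 (check it with docker inspect as before):

    slaveof 172.17.0.2 6379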

Finally, restart the container using 'docker restart node2' (for example), and it should now be a slave of the master Redis server on node1. To verify this, use redis-cli to log in to node1 and node2 in separate terminals and run 'set hello world' on node1, and 'get hello' on node2. If it works it should look like the following:
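Roughly like this:

    # terminal 1 - the master (node1)
    redis-cli -h 192.168.0.2 -p 7001 set hello world
    OK
    # terminal 2 - a slave (node2)
    redis-cli -h 192.168.0.2 -p 7002 get hello
    "world"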

And there you have it! 3 Redis servers set up in a horizontally scaled fashion (well, using slaves!) in Docker, using your own phusion-based image. Marvellous!

Next steps

In my next blog, I will show you how to monitor your Redis cluster using Opsview so that you can get a nice pretty dashboard for your Redis stats, as below:

[Screenshot: Opsview dashboard showing Redis stats]

Docker: A how-to

Over Christmas/New Year I had a fair amount of spare time for relaxation, so naturally this was spent tinkering around with various bits of software and kit I haven't had time to play with during the past few months. One of the things I wanted to test and try in anger was Docker; a wrapper/software suite that wraps LXC into something a bit more usable. There is a video on the Docker website that explains what Docker is and how it works.

One of the biggest benefits of Docker is that, with a bit of skill, it allows you to satisfy your tech curiosity without compromising your production or home environments, i.e. "Oh cock, I spent 2 hours playing around with X and getting it working and it's actually not very good; now I have to go and remove databases, apache entries, etc.". In Docker, you can simply spin up a new 'ubuntu container' (an instance; essentially like a virtual machine, if you think 'abstract'), play around and develop for a few hours, then either save it, or crash and burn it – without any impact on your underlying server.

This is perfect for those of us who can spend 8+ hours testing out Redis across 5 servers, or Elasticsearch with Packetbeat, etc., yet don't want to spend 30 minutes making a new virtual machine (KVM ftw), or – even worse – running it all on our stable kit.

Therefore the purpose of this blog is to give you all the tools and commands to turn Docker into something useful that you can use on a daily basis. So, let's begin!

1. Getting started

At home I run Ubuntu 14.04 as my operating system of choice, therefore all instructions pertaining to the installation of Docker are for Ubuntu (and Debian too, I imagine) – if you require RHEL/CentOS then follow the official Docker installation guide for that platform.

For detailed instructions (and per-platform instructions), follow the Docker on Ubuntu installation guides – but for brevity's sake, here are the steps you need to take in order to get Docker ready to roll on Ubuntu 14.04:
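A minimal sketch – on Ubuntu 14.04 the quickest route at the time was the docker.io package from the Ubuntu repositories (newer setups use Docker's own repository instead):

    sudo apt-get update
    sudo apt-get install -y docker.io
    # optional: allow your own user to run docker without sudo
    sudo usermod -aG docker $USER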

To test that everything worked, run:
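For example:

    sudo docker ps -as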

This should return the following:
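An empty container list with just the column headers – something along these lines:

    CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES   SIZE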

If you get:
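An error along these lines (the exact wording varies between Docker versions):

    Cannot connect to the Docker daemon. Is the docker daemon running on this host?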

Then the Docker service isn't running. On Ubuntu, run 'service docker start', then re-run 'docker ps -as', and it should now work.

2. Using Docker

So now you’ve got Docker, how do you actually use it!?

Docker works using containers – which are essentially your ‘virtual machines’, if you are familiar with virtualization. You can view all your containers by running ‘docker ps -as’, as below:

As you can see, I have a single container called 'CentosBox' running that is based on the 'centos:centos7' image. The container ID is 283ac561a135, which can be used if you don't give your container a human-readable name (i.e. CentosBox!).

So how did I get a CentOS container and how does it work? Docker works using something called 'images', which are essentially templates (like virtual appliances). You can create your own image and share it with others, and vice versa, use other people's images on your box. To find an image, you can run the command:
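For example, to search the Docker Hub for Debian images:

    docker search debian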

See below for an example:

Here we can see a number of images that contain the word Debian; the one at the top has the most stars and is also a 'semi-official' image, so let's use that one! To download it locally, use the 'pull' command as below:
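For example:

    docker pull debian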

Now let's make sure that it has been downloaded; run the 'images' command to view all the images stored locally on your Docker server:
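For example:

    docker images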

 

Now that the image has been downloaded, let's go ahead and deploy a container based on the image so we can get cracking and start to use Debian!
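A sketch, naming the container 'debserver' (the name used throughout the rest of this post):

    docker run -t -i --name debserver debian /bin/bash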

Within seconds (i.e. under 3 seconds), we now have a container running Debian on top of our Ubuntu 14.04 server – cool, hey! To prove this is a Debian box and not Ubuntu, you can go ahead and install lsb_release (apt-get update && apt-get install lsb-release) and run it:
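For example, inside the container:

    apt-get update && apt-get install -y lsb-release
    lsb_release -a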

Cool huh! Now you can go ahead and tinker to your heart's content, set stuff up, install Apache, etc., and it's all safely contained within Docker. To shut down the container, simply exit the shell:
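i.e.:

    exit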

You can then restart it using the command ‘docker start’, as below:
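For example:

    docker start debserver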

And if you want to view the IP address without having to go into the container, run the command:
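For example (the Go template pulls just the IP out of the inspect output):

    docker inspect -f '{{ .NetworkSettings.IPAddress }}' debserver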

If you do wish to connect to the console again, run the ‘attach’ command:
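For example:

    docker attach debserver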

One of the coolest things about Docker, in my opinion, is the ability to save your containers locally for re-use in the future.

Say, for example, I wanted to install my company's software / configure a series of packages and software on my Debian box, and then save it locally so I can quickly deploy it in the future.

After installing apache2 and mysql-server, and configuring things so you're super happy with your Debian container, you can use the 'commit' command to save the container locally:
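For example, saving it locally as an image called 'smnet':

    docker commit debserver smnet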

Here we have saved debserver as an image, and called that image ‘smnet’. I can then go and delete that container safely, as below:
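For example:

    docker stop debserver
    docker rm debserver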

Then, in a minute's time or a year's time, I can easily redeploy that Debian container that I installed lsb-release on, using the command:
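For example:

    docker run -t -i --name debserver smnet /bin/bash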

Very cool! This is so powerful from a development and testing perspective, as it allows you to RAPIDLY spin up a large number of platforms (Ubuntu, RHEL, you name it) and configure them as you like. You can then rapidly and safely tear them down, save them locally to be redeployed back onto the same box, or send them to other servers to be deployed there – very awesome.

One final thing – removing images! Say you don't like smnet any more, you can simply remove it using the command 'rmi' (ReMove Image):
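For example:

    docker rmi smnet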

And c'est tout, it's gone!

3. Even cooler stuff

There are even cooler things that can be done with Docker.

For example, you can expose container ports via the public IP of the underlying server. What does that mean? Well, say for example I have apache2 running on debserver; I can currently only access it via http://172.17.0.9. What Docker allows you to do is essentially NAT ports from the host server through to the container, so I can say:

'If anyone hits http://192.168.0.2:8082, redirect the traffic to http://172.17.0.9', i.e. if anyone goes to ubuntu:8082, they'll get Apache on the Debian container. To do this, simply run the command:
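Port mappings are defined when a container is created, so the sketch below starts a fresh container from the saved 'smnet' image with the mapping in place – the container name 'debweb' is just an example:

    docker run -t -i -p 8082:80 --name debweb smnet /bin/bash
    # then start apache2 inside the container (see the gotchas section below)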

Then you can open your browser, go to http://192.168.0.2:8082 and voila, you have an Apache box!

One of the other neat things you can do is quickly deploy and link containers, similar to Ubuntu Juju (see my write-up from 18 months ago here:  http://www.everybodyhertz.co.uk/ubuntu-juju/).

One example I've tested, which works a treat, is to deploy a MySQL container and a WordPress container and link them together using just 2 commands:
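A sketch using the official mysql and wordpress images from the Docker Hub – the container names and root password are just examples:

    docker run -d --name wp-mysql -e MYSQL_ROOT_PASSWORD=changeme mysql
    docker run -d --name wp --link wp-mysql:mysql -p 8084:80 wordpress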

Then, hit your host server's IP address on port 8084 in your browser and voila:

[Screenshot: WordPress install screen served from the linked containers]

4. Gotchas / Limitations

So far I have found a few limitations of Docker, namely that it is ignorant of sysv/upstart and ordered startup. The real-world impact of this (mainly) is that your services don't start when you 'docker start debserver'; i.e. if I installed apache2 on my debserver and used sysv to tell it to run on start, it wouldn't be running when I started that container – as Docker doesn't listen to sysv.

This is a pain in the arse, but the way around it (at the moment at least) is to modify /etc/bash.bashrc to contain the startup commands you need to execute via bash, i.e.:

‘service apache2 start’

This should start up apache2 for you when the container starts, so you can simply get the IP and test Apache.

Setting up a RELK stack from scratch (Redis, Elasticsearch, Logstash and Kibana)


Recently I thought I'd redo all of my ELK stack setup, as I didn't fully understand every facet of it and I was really interested in introducing Redis into the mix. I'd also messed around with the existing Kibana and Logstash front end to the point that it was fairly bricked, so it was ripe for a change.

What I wanted to get to was having my 2 servers and my main router send their logs and syslog data into my log box, so I could view and correlate across multiple systems. Here's a pretty diagram to explain what I wanted:

[Diagram: the two servers and the router sending their logs into the central log box]

To achieve this setup I used a stack of Redis, Elasticsearch, Logstash and Kibana. I used Logstash forwarders on my servers to send the specified logs into a Redis queue on my Kibana server. Once in the queue, Logstash would carve and process the logs and store them within Elasticsearch, from where Kibana would give me a nice front end to analyze the data. Simple, right?

[Diagram: the Redis, Elasticsearch, Logstash and Kibana pipeline]

1. Redis

First, let's install Redis on our log monitoring server (kibana.home, from here on in). You can run all of the constituent parts of this setup on different boxes; just modify the IPs/hostnames in the config files and remember to open up firewall ports if need be. On my small-scale setup, running all of the parts on one VM was plenty.

To install Redis, do the following:
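A sketch using the standard build-from-source route from the Redis quick start:

    wget http://download.redis.io/redis-stable.tar.gz
    tar xzf redis-stable.tar.gz
    cd redis-stable
    make
    sudo cp src/redis-server src/redis-cli /usr/local/bin/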

You may need to install gcc/make (apt-get install make gcc) if your system doesn't have them. At this point it would be prudent to have 2 terminals open (split vertically in iTerm or similar). Next, copy the redis.conf file from the extracted package to the same location as the binary, i.e.:
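For example, from inside the extracted redis-stable directory:

    sudo cp redis.conf /usr/local/bin/redis.conf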

Open this file and modify it if you wish to change the IP address it's bound to, the port, etc. Next, you need to start up Redis using the command:
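For example:

    /usr/local/bin/redis-server /usr/local/bin/redis.conf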

In a separate window, run:
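For example:

    redis-cli ping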

You should get a 'PONG' reply, which tells you that Redis is up and running. Finally, daemonize Redis so that it keeps running even when you kill the terminal. Open up /usr/local/bin/redis.conf and set 'daemonize yes', then restart Redis.

That's Redis done...

2. Logstash forwarders

Next, on the client servers (the devices we want to send logs FROM), install Logstash by running the following.
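A sketch of how Logstash 1.4 was typically installed on Ubuntu at the time – the repository details are an assumption, so check the current Elastic documentation if you are following along today:

    # add the Elasticsearch APT repository for Logstash 1.4
    wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
    echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
    sudo apt-get update && sudo apt-get install -y logstash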

Create your logstash config file (where you will set WHAT is exported) in /etc/logstash/logstash-test.conf and put the following in it:
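A minimal stdin-to-stdout config, matching what the next paragraph describes:

    input {
      stdin { }
    }
    output {
      stdout { codec => rubydebug }
    }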

Basically, we are going to take whatever we type in the console and output it back to the screen, to test that Logstash is indeed working:
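Start it in the foreground against that config (the binary path assumes the packaged install; adjust if you unpacked a tarball):

    /opt/logstash/bin/logstash -f /etc/logstash/logstash-test.conf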

As you can see, whatever we have typed (hi hi hi) is spat back out in a formatted fashion. So, that shows Logstash is working (in a very limited way at least). Next, we need to test that Logstash on this server can send data into our kibana.home server's Redis queue. To do this, create another config file in /etc/logstash called logstash-redis-test.conf, and in it add the following (obviously change my IP to the IP of your Redis server!):
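A sketch, assuming kibana.home is at 192.168.0.38 (the address used later in this post) and the default 'logstash' list key:

    input {
      stdin { }
    }
    output {
      stdout { codec => rubydebug }
      redis {
        host      => "192.168.0.38"
        data_type => "list"
        key       => "logstash"
      }
    }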

Next, start up Logstash with this new config file (you may need to do 'ps aux | grep java' and then 'kill -9 pid-of-the-java-instance' to stop the previous test), using the command:
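For example:

    /opt/logstash/bin/logstash -f /etc/logstash/logstash-redis-test.conf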

Now, whatever we type should not only be spat back to us on the screen in a formatted fashion but should also appear in the Redis queue. So, in your 2nd terminal that is on the CLI of kibana.home (your server running Redis), connect to Redis so we can watch what's coming in:
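For example:

    redis-cli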

Now, back to server.home – let's generate some traffic! Type some random rubbish in and hit enter:

On our kibana.home console, run the following two commands – 'LPOP logstash' and 'LLEN logstash'; the latter will tell you how many items are currently in the queue and the former will pop an item off the top of the queue/stack and display it to you, as below:
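Roughly like this (the exact JSON depends on what you typed):

    127.0.0.1:6379> LLEN logstash
    (integer) 3
    127.0.0.1:6379> LPOP logstash
    "{\"message\":\"hi hi hi\",\"@version\":\"1\",...}"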

This shows that our logstash-forwarder can send events straight into the Redis queue on our kibana.home server. This is where we are at the moment, then:

[Diagram: logstash forwarders shipping events into the Redis queue on kibana.home]

Now, let's get some real data into Redis instead of our test input! Create another file called /etc/logstash/logstash-shipper.conf, which will be our 'production' config file. In my example, I want to send my Apache logs and syslogs from /var/log into the queue, therefore I have a config as follows:
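A sketch of a shipper config along those lines – the exact log paths and type names are assumptions; just make sure the types match what your indexer filters expect later:

    input {
      file {
        path => [ "/var/log/apache2/access.log" ]
        type => "apache-access"
      }
      file {
        path => [ "/var/log/syslog", "/var/log/auth.log", "/var/log/kern.log" ]
        type => "syslog"
      }
    }
    output {
      redis {
        host      => "192.168.0.38"
        data_type => "list"
        key       => "logstash"
      }
    }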

What you will notice, or should notice, is the 'type' line – this is VERY important for later on. Essentially, our Redis queue will receive data and that data will be tagged with a 'type'. This type tells Logstash later on HOW to parse/process that log – i.e. which filters to apply. I've also got the IP address of my kibana.home in the output line; this config file essentially tells the Logstash forwarder to send the 3+ log files to Redis, using the types (tags) specified.

Note: The java process we are running will obviously die when the terminal is closed. To prevent this from happening, run the following command – which will daemonise it:
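One way to do it (a plain nohup background job – an init script or the packaged service would be tidier):

    nohup /opt/logstash/bin/logstash -f /etc/logstash/logstash-shipper.conf > /var/log/logstash-shipper.log 2>&1 &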

We're now shipping logs..

3. Elasticsearch

Now, firmly back on kibana.home, let's install Elasticsearch. This is where the log data will eventually live. To do this, install Java and then download and install the Elasticsearch package (I'm running all of my boxes on Ubuntu):
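A sketch for the Elasticsearch 1.x era – the exact .deb version is an assumption, so grab whatever the downloads page currently offers:

    sudo apt-get install -y openjdk-7-jre-headless
    wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.deb
    sudo dpkg -i elasticsearch-1.4.2.deb
    sudo service elasticsearch start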

Elasticsearch should have started after installation – to test that it is indeed running and accessible, use curl as below:
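For example (a small JSON blob with the version and cluster name should come back):

    curl http://kibana.home:9200/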

Note: Elasticsearch at :9200 needs to be accessible from your browser – so if you have Elasticsearch only available on 127.0.0.1 or localhost, it won't work and Kibana will be upset.

We will also want to set up a 'limit' on the Elasticsearch data, so we don't save logs for longer than we need (and thus run out of space!). To do this, we need to download and run a program called 'curator', via the method below:
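Curator is distributed via pip, so something along these lines:

    sudo apt-get install -y python-pip
    sudo pip install elasticsearch-curator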

Then in crontab, add the following line:
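A sketch of the cron entry – curator's command-line flags have changed between releases, so treat this as illustrative of the 2014-era syntax rather than gospel:

    20 0 * * * /usr/local/bin/curator delete --older-than 60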

This essentially tells the curator program to delete any syslog/log data that is older than 60 days (you can make it longer/shorter depending on your needs).

Now that Elasticsearch is installed, we need to link the Redis queue to it – i.e. take data off the queue (LPOP), parse it, and store it within Elasticsearch. To do this, we will use Logstash.

Elasticsearch is now stretching..

4. Logstash indexer

To start, let's install Logstash on kibana.home:
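Assuming the same APT repository as on the shipping servers:

    sudo apt-get install -y logstash
    sudo service logstash start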

For all intents and purposes, you can ignore logstash-web; just ensure that logstash (the daemon) is running. Next, let's create the config file which this Logstash instance will be using, at /etc/logstash/conf.d/logstash-indexer.conf:
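A sketch that matches the description below – three local syslog files tagged 'syslog', TCP/UDP listeners on port 5145 tagged 'syslog-network', the Redis queue as a third input, and Elasticsearch as the output. The exact file paths are assumptions:

    input {
      file {
        path => [ "/var/log/syslog", "/var/log/auth.log", "/var/log/kern.log" ]
        type => "syslog"
      }
      tcp {
        port => 5145
        type => "syslog-network"
      }
      udp {
        port => 5145
        type => "syslog-network"
      }
      redis {
        host      => "127.0.0.1"
        data_type => "list"
        key       => "logstash"
      }
    }
    output {
      elasticsearch {
        host => "127.0.0.1"
      }
    }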

Here we have a few things going on. We have an input section and an output section – similar to the previous configurations. In the input section, we are taking 3 syslog files and tagging them with 'syslog', we are specifying port 5145 for UDP/TCP to receive 'syslog-network' type data on, and we are also taking data from our Redis queue as an input. We are then outputting this data into Elasticsearch to be stored. Simple, right?

Note: Because you are reading /var/log/auth.log and others in /var/log, you will need to set up access control to allow the 'logstash' user to view these logs.

The best way to do this is to use setfacl/getfacl. You will need to install the package ‘acl’ to do this, and then run a command similar to:
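For example (read access on /var/log and everything beneath it):

    sudo apt-get install -y acl
    sudo setfacl -R -m u:logstash:rX /var/log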

You can test this quickly by editing /etc/passwd and giving the logstash user a shell, and then trying to ‘cd /var/log’. If it works, then logstash will be able to see these logs – if not, your setfacl command was wrong!

Now, back to that big config file. What you'll notice is that we don't have any filters here – we aren't acting on the 'type' parameters we specified. The beauty of Logstash is that you can separate your config out into separate files – so instead of one god-awful long configuration file, you can have multiple little ones:
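For example, a conf.d listing along these lines – the filter file names are purely illustrative, as Logstash simply loads everything in the directory:

    ls /etc/logstash/conf.d/
    logstash-indexer.conf  filter-apache.conf  filter-opsview.conf  filter-syslog.conf  filter-syslog-network.conf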

Here I have files for parsing different 'types' of traffic; for example, anything that gets sent in with the type 'syslog-network' (i.e. logs from my Draytek router) is pushed through the rules in this config file:
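The rules below are a representative sketch rather than my exact Draytek filter – match on the type, then grok the raw line into usable fields:

    filter {
      if [type] == "syslog-network" {
        grok {
          # pull a timestamp, the source device and the rest of the line into fields
          match => [ "message", "%{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:source_host} %{GREEDYDATA:router_message}" ]
        }
      }
    }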

This takes the raw data received from my router and chops it into usable fields using Grok. I have a separate .conf file for Opsview log traffic, syslog traffic and also Apache traffic (I will put the contents of these at the bottom!).

Essentially, you are telling Logstash – “Hey, if you see a log that has this type, then prepare it for storage using this filter”.

Now that we have our configuration file(s), we can restart Logstash:
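For example:

    sudo service logstash restart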

We now have Logstash forwarders sending data into Redis, and the Logstash indexer on kibana.home is taking that data, chomping it up and storing it in Elasticsearch, as below:

[Diagram: forwarders sending to Redis, the Logstash indexer processing into Elasticsearch]

Note: If there are errors in any of your config files, logstash will die after around 10 seconds.

It is therefore recommended to run 'watch /etc/init.d/logstash status' for about 20 seconds to make sure it doesn't fall over. If it does (i.e. you're missing a quote or parenthesis, etc.), then tail the Logstash log using:
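For example:

    tail -f /var/log/logstash/logstash.log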

This will tell you generally where you are going wrong. BUT, ideally you won't have made any errors! :)

We can test that Logstash, Redis and Elasticsearch are playing nicely together by running 'LLEN logstash' in redis-cli (as we did earlier) and seeing it at 0 or reducing, i.e. 43 dropping to 2. This means that Logstash is popping events from the queue, parsing them through our filters, and storing them in Elasticsearch. Now, all we need to do is slap a front end on it!

Almost there

5. Kibana and Nginx

As I'm running nginx as my front end, I used a config file I found which worked a treat. Put this config file in /etc/nginx/sites-available:
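The config below is a representative Kibana 3 nginx setup rather than the exact file I used – it serves the static Kibana files and proxies the Elasticsearch calls, so the browser never needs direct access to :9200:

    server {
      listen 80;
      server_name kibana.home;

      # Kibana 3 is just static files
      root /var/www/kibana3;
      index index.html;

      # proxy the Elasticsearch endpoints Kibana needs through nginx
      location ~ ^/(_aliases|_nodes|.+/_search|.+/_mapping|kibana-int/.*)$ {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
      }
    }

Remember to symlink it into /etc/nginx/sites-enabled and reload nginx.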

This helps get around the problems with Elasticsearch being exposed outside of 127.0.0.1, etc. Now, hit up 'http://kibana.home/' (the address/IP of your log server, obviously!) and you should see Kibana! Here is an example dashboard I have built using the Apache logs, router logs, Opsview logs and a few others:

[Screenshot: example Kibana dashboard built from the Apache, router and Opsview logs]
You did it

6. Wash-up and notes

So there you have it: logs being sent via Logstash forwarders into a central Redis queue, which is watched and processed by a Logstash indexer and stored in Elasticsearch – where the data is explored using Kibana running on nginx. The following are the places to mentally bookmark for your fingers:

On the Kibana/Elasticsearch/Logstash/Redis server:

  • Logstash directory (where all your configs are): /etc/logstash/conf.d/
  • Redis: /usr/local/bin/redis.conf
  • Elasticsearch: /etc/elasticsearch/elasticsearch.yml
  • Kibana: /var/www/kibana3

On the servers you are sending logs from:

  • Logstash: /etc/logstash/logstash-shipper.conf

One final hint/tip – to have named (BIND) log all of its requests to syslog, run the command:
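On most BIND installs this is just a case of toggling query logging via rndc (it logs to syslog unless a logging channel says otherwise):

    sudo rndc querylog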

Grok filters
Apache logs filter:
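A representative example rather than my exact rules – a typical Apache access-log filter using the stock grok pattern (assuming the shipper tags those logs with the type 'apache-access'):

    filter {
      if [type] == "apache-access" {
        grok {
          match => [ "message", "%{COMBINEDAPACHELOG}" ]
        }
      }
    }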

 Draytek router logs filter:

 Opsview filter:

 Syslog filter:
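Again, a representative sketch built from the stock patterns rather than my exact rules:

    filter {
      if [type] == "syslog" {
        grok {
          match => [ "message", "%{SYSLOGLINE}" ]
        }
        date {
          match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }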

Integrating Opsview with ELK Log Monitoring

Hello all!

This is a brief blog post to explain how I quickly integrated my existing Opsview server with my existing ELK deployment. I basically wanted a way that, within Opsview, I can see that a host has failed or is having problems and go "Hmm, let's have a look at the logs to see what's happening" without:

A) Having to SSH to the box and start tailing, or

B) Having to fire up ELK and start filtering manually.

To step back just a moment, what is ELK? ELK stands for ElasticSearch / Logstash / Kibana, and essentially it uses ElasticSearch/Logstash to collect and handle the log data, and Kibana as the graphical front end through which users can create their own filters, graphs, etc.

So, onto the integration. My current setup is as follows:

  • Opsview is monitoring all of my servers, network devices, virtual machines (KVM) and so forth – for items such as load average, memory usage, LVM capacity, temperatures, processes/services running, response times and so forth.
  • ELK is collecting logs from all of the aforementioned devices.

What I wanted to be able to do off the bat with ELK was use URL-based syntax to filter the ELK view; however, this isn't immediately possible out of the box it appears, so you will have to make slight modifications to the .json file (default.json or whatever your ELK view is saved as). Open up your JSON view (i.e. /var/www/kibana3/app/dashboards/default.json) and edit the top part to look similar to the following:

Essentially what we are doing is allowing URL-based querying by passing the '?q=MYFILTER' variable straight through to the query box, which isn't available by default. This allows us to open http://my-elk-server/default.json?q=opsview and ELK will be opened with a filter of 'opsview' by default. Neat, huh!

So, now that that is working, test it out as above – you should get something similar to the following:

URL: http://192.168.0.38/index.html#/dashboard/file/default.json?query=host:192.168.0.16

Screen:

[Screenshot: Kibana dashboard filtered to host:192.168.0.16 via the URL query]

 

If not, then the filtering we created/edited above isn't working. If it is working, then great – proceed to the next section!

Setting up in Opsview

In Opsview we are going to use the in-built 'Management URLs' functionality (see the Opsview docs), which allows users to create a host template, i.e. 'My Linux Template', and give it a management URL of 'ssh://$HOSTADDRESS$:22', for example. This allows the user to dive straight into an SSH shell on that box from within the Opsview UI when that template is applied to a host. Cool, huh? You can use this for anything – wikis, Confluence, service desks, you name it – i.e. create a 'Wiki' host template with a URL of 'http://wiki.internal.com/?query=$HOSTADDRESS$'; when this is applied to a series of hosts, you will be able to load the wiki and search it for the name of the server you are looking at, from one menu option.

For our purposes, we are planning on creating an 'ELK' host template, which we will apply to all of the hosts whose logs we are collecting with ELK.

Step 1: Create the host template

Fairly simple: go to 'Settings > Host Templates > Add new', and populate it with a name and description, as below:

[Screenshot: the new 'ELK' host template with a name and description]

Step 2: Create the management URL

After clicking ‘Submit changes’, you will now be able to click on the previously-greyed-out ‘Management URLs’ button. In here we will need to create our ELK link, as below:

[Screenshot: the ELK Management URL being added]

For reference, the syntax is ‘http://elk-log-server/index.html#/dashboard/file/default.json?query=$HOSTADDRESS$’. The important part here is $HOSTADDRESS$ – this variable or macro will be substituted out for the address of the host, i.e. if this template is applied to ‘exchange-server-1.microsoft.com’, when the management URL is clicked on that host the full URL will be http://elk-log-server/index.html#/dashboard/file/default.json?query=exchange-server-1.microsoft.com.

Step 3: Apply the template to the hosts

Next, we will need to apply the template to the hosts whose logs we want to monitor. You can do this via the 'Host Templates' tab using the hosts section, but because I'm lazy I did it via the host itself (Settings > Hosts > then clicked on the host in question), as below:

[Screenshot: the ELK host template applied to the host]

...and that's it, the template is applied.

Step 4: View the logs

After a quick reload, go to your Opsview monitoring screens and click on the contextual menu and you will now see an extra option there – ELK:

[Screenshot: the host's contextual menu showing the new ELK option]

And that's pretty much it! Now, all you need to do is apply the 'ELK' host template to the hosts whose logs you are monitoring and this option will appear ^^. That way, in the future, you can see 'ooh, we have a host failure' and dive straight into the logs at the click of a button, as below:

[Screenshot: Kibana opened from Opsview, filtered to the failing host's logs]