Perl Nagios plugins on Ubuntu 15.04 – Segmentation fault error

Recently, I upgraded my home server to run Ubuntu 15.04 from the previous 14.04 LTS version. The upgrade (via 14.10) was a breeze, aside from the pain in the arse of systemd and having to fix things like plexmediaserver which were no longer running – ugh.

One problem I did encounter was that none of my Perl-based Nagios plugins were running any more, giving me an ‘nrpe unable to read output’ error from within the Opsview GUI. After running the plugins locally, we could see the following error:
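The exact plugin will differ on your system (check_example.pl below is just a stand-in for whichever plugin fails), but the output looked along these lines:

    $ ./check_example.pl
    Segmentation fault (core dumped)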

To investigate, I set the core file size to unlimited, ran the script again and then ran the core file through gdb as below:
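A rough sketch of those steps; the plugin name and core file location are placeholders, and the backtrace is what points the finger:

    # allow core dumps of unlimited size in this shell
    ulimit -c unlimited
    # re-run the failing plugin to produce a core file in the current directory
    ./check_example.pl
    # load the core file against the perl interpreter and grab a backtrace
    gdb /usr/bin/perl core
    (gdb) bt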

Here we can see that the Params::Validate Perl module is the culprit, presumably because it’s not compatible with the new Perl 5.20. To fix this, I simply removed the old Params::Validate module and installed a new one (built against Perl 5.20) via apt:
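A sketch of that fix; the path of the stale, locally built copy will vary per system, so check what the find returns before deleting anything:

    # locate the old, locally built copy of the module
    find /usr/local/lib/perl* -path '*Params/Validate*' 2>/dev/null
    # remove whatever the find returned (the paths here are only an example)
    sudo rm -rf /usr/local/lib/perl/5.18.2/Params /usr/local/lib/perl/5.18.2/auto/Params
    # install the distribution-packaged build, compiled against Perl 5.20
    sudo apt-get install --reinstall libparams-validate-perl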

Now we can test the plugins, and voila – they work:

Happy monitoring!

Atlassian Hipchat and Opsview

Overview

I’ve recently been on a bit of an integration push with Opsview, wanting to have my software integrate with other software tools to make not only my customers’ lives easier, but also my own!

At Opsview, I run a range of tools from JIRA and Jenkins through to Opsview itself, and also look at Twitter, Salesforce and more. This is a lot of stuff, so as mentioned in my late 2014 piece “Collaboration and innovation in 2014” I wanted to find a way to unify all of this disparate information into a single source of truth, or as marketers like to say, a “single pane of glass”, yikes.

Introducing Atlassian Hipchat


Atlassian Hipchat is ‘team chat, file sharing and integrations’ software. Its main benefit is that, like the uber-popular Slack (valued at $2.8BN, yes BN, after its most recent VC round), it allows you to create users (i.e. your entire company) and numerous individual rooms. Users then talk to each other either via the rooms or via IM, thus reducing the amount of bullshit emails sent internally and theoretically getting shit done quicker. That’s my saying of the year.

The real beauty of Hipchat over software like Slack, in my humble opinion, is that it has VoIP, and to a lesser extent video chat. This means that you don’t need to run Skype alongside it; you can just use the inbuilt voice calling functionality. This is a game changer compared to Slack, which is a prettier, ‘cooler’ (ProductHunt.com anyone?) tool.

In this blog, I’ll quickly show you how to set up a new Hipchat room (thus assuming you know what Hipchat is by now and have already signed up!), and how to get alerts from your Opsview monitoring system (must be running 4.6.2 or above) into the aforementioned room. So, let’s begin!

Creating your Hipchat room

To create the new Hipchat room you will need to use either the web client or the desktop client.

Firstly, click on ‘New Chat’ and then click ‘Create a new room’. You will be presented with a new modal window asking you for the room name, topic and some access control radio buttons. Once configured, click ‘Create room’, as below.

[Screenshot: the ‘Create a new room’ dialog]

Getting the Room ID/Token

This step is fairly easy but let me walk you through it. First, log in to www.hipchat.com with your account and navigate to ‘Group admin’ in the top right.

[Screenshot: the HipChat ‘Group admin’ page]

Next, click on ‘Rooms’ and then the room you’ve just created. In my example it is ‘DevOps’, as below. Make a note of the ‘API ID’; this will be your ‘Room ID’ from here on.

[Screenshot: the ‘DevOps’ room page, showing the API ID]

Next we need to get our Token. To do this, click on ‘Tokens’ on the left hand side (just under Integrations).

Enter a label, e.g. ‘Opsview’, and then click ‘Create’. And voila, the big ugly string it generates is your token. Make a note of that too.

[Screenshot: the HipChat room notification tokens page]

Configure Opsview to talk to Hipchat

Next, we need to log in to Opsview and give it the Room ID and Token which we just created, in order to allow it to talk to Hipchat and send alerts into the room.

To do this, log in to Opsview and go to ‘Settings > Notification Methods’ and click on ‘Hipchat’. In here you will see the two fields mentioned earlier. Copy and paste your token and room ID into these respective fields, and also SET THE ACTIVE TICK BOX TO YES (if we don’t do this then nothing will work!).

See below for a working example:

[Screenshot: the Opsview ‘Edit Notification Method: HipChat’ page]

Next we need to tell Opsview what to send to Hipchat, i.e. EVERYTHING (including load averages and stuff) or just the key things, e.g. website down, disk full, processes not running, etc. To do this, you need to configure notification profiles. You can do this on a per-user basis via ‘Settings > Contacts > $CONTACTNAME > Submit and edit notification profiles’ or on a group basis via ‘Settings > Shared notification profiles’.

In my example I just want to quickly send ALL problems to Hipchat, so I’ve clicked:

1. Clicked ‘Settings > Contacts’.

2. Clicked on my username ‘admin’.

3. Clicked on ‘Submit and edit notification profiles’.

4. Clicked on the green ‘plus’/’add’ symbol in the top right.

This screen is the ‘create new notification profile’ screen. This is basically where you create your rule, i.e. ‘between X and Y, tell me about Z problems on these hosts: {…} using Hipchat/email/etc’.

For my simple profile, tick everything in ‘Host and Service Groups’, ‘Keywords’ and ‘BSM’. In the ‘Settings’ tab, ensure that Hipchat is ticked, as below.

[Screenshot: the Opsview ‘New Notification Profile’ page]

…and that’s pretty much it! Click ‘Submit changes’, then click ‘Settings > Apply Changes’ and finally click the ‘Reload configuration’ button, and voila, it’s all set up. Next time anything goes critical/warning within your Opsview system you will receive an alert in Hipchat, as below:

[Screenshot: an Opsview alert arriving in the HipChat room]

Creating a distributed Redis system using Docker


A common problem I face on a daily basis is a lack of hardware/resource with which to test things out to the fullest. For example, in days gone by I’d have needed 3 servers for what I’m about to do, and in more recent times, 3 virtual machines. I don’t have the time to continuously build these items, nor the resource if we were going physical. This is where my new-found interest in Docker can help me out!

What I want to do on my Ubuntu ‘host’ server is create 3 Docker containers running Redis, and link them all together so that I can then develop and test the best way to monitor horizontally scaled Redis. Below I will show you how I’ve done it, and the benefits (even beauty) of it!

1. Download and configure our base image

For my base images, I use the excellent ‘phusion’ image, which adds in a lot of features that Docker omits (by design or not), including ordered startup of services and more. For more information on the phusion-base image, click here.

First, let’s pull the phusion image as below, using Docker:
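For reference, that pull looks like this (the image name on the Docker Hub is phusion/baseimage):

    docker pull phusion/baseimage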

On completion, you should be able to run ‘docker images’ and see the new image as below:
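That is:

    docker images

and phusion/baseimage should appear in the list of locally stored images.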

This image is great as you can easily tell it to start up certain services on start of the container, which is a bit of a pain in the arse otherwise (in my other blog I outlined how, when using a standard base image, you need to use bash to essentially start things!). One of the things I wanted to ADD to this image is the ability to log in via SSH using a password instead of keys; given this is only going to be running on a locally contained box I’m not too worried about security!

To do this on phusion, we need to create an instance of phusion, log in via SSH and modify the config, then restart SSH. We will then be able to log in using the root user.

First, create a new container from the phusion base image:
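Something along these lines, naming the container ‘redis’ (the name I reuse later when stopping and committing it):

    docker run -d --name redis phusion/baseimage /sbin/my_init --enable-insecure-key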

The ‘/sbin/my_init’ is what allows phusion to start your services on boot of the container. The ‘--enable-insecure-key’ flag allows us to SSH into the new container using an ‘insecure key’ (see below).

Now that the container has been deployed, we need to get its IP address. To do this, use ‘docker inspect’ as below:
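For example:

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' redis

(plain ‘docker inspect redis’ also works; just look for the IPAddress field in the output).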

Next, let’s pull down the ‘insecure key’ and use it to log in to our new container:
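Roughly as follows; the key is published in the phusion/baseimage-docker repository on GitHub (the exact path has moved between releases, so check the README if the URL below 404s), and the IP is whatever the inspect above returned:

    curl -o insecure_key -fSL https://github.com/phusion/baseimage-docker/raw/master/image/insecure_key
    chmod 600 insecure_key
    ssh -i insecure_key root@172.17.0.2   # substitute your container's IP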

Congratulations, you are now SSH’d into the container! Next we need to modify SSH so we don’t need to use the insecure key in the future. To do this, open up /etc/ssh/sshd_config and find and uncomment the following line:
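The exact defaults vary by OpenSSH version, so treat these as the target values rather than the precise lines you will see commented out:

    PermitRootLogin yes
    PasswordAuthentication yes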

Then save the file and exit the text editor. Finally, we need to give our root user a password. To do this run ‘passwd’ as below:
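That is:

    passwd
    # enter and confirm the new root password when prompted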

And that’s that for SSH on our base image. Next we will install Redis on the container.

2. Install Redis

Now that we have our base image configured, we will need to download and install Redis. For my example I am using the latest version of Redis, which can be downloaded from the page here; specifically, Redis 3.0.0 RC1 from this link – https://github.com/antirez/redis/archive/3.0.0-rc1.tar.gz.

Note: You will need to install wget, gcc, make and a few other tools – these really are bare base images :)

First we will need to download Redis 3.0.0 RC1:
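Having first installed the prerequisites from the note above (apt-get update && apt-get install -y wget gcc make, plus whatever else the build complains about), the download is simply:

    wget https://github.com/antirez/redis/archive/3.0.0-rc1.tar.gz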

Next, let’s unpack it and run make:
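The GitHub tarball unpacks into a directory named after the tag:

    tar -xzf 3.0.0-rc1.tar.gz
    cd redis-3.0.0-rc1
    make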

Finally, move the binaries into /usr/bin/ so we can access them system-wide (easier!):
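Along with the binaries, I also drop the bundled config file into /etc/redis/, since that is where we will point redis-server at in a moment:

    cp src/redis-server src/redis-cli /usr/bin/
    mkdir -p /etc/redis
    cp redis.conf /etc/redis/redis.conf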

Now open up /etc/redis/redis.conf, find the line ‘daemonize no’ and change it to ‘daemonize yes’. Next, let’s start it up!
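For example:

    redis-server /etc/redis/redis.conf
    redis-cli ping
    # should reply with PONG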

And that’s that – Redis is now installed and running on your container, using the config file at /etc/redis/redis.conf!

3. Tell Docker to start services on boot

Now that our container is running Redis and has SSH access, we need to tell it to start Redis automatically on ‘start’ of the container.

To do this using the phusion base image, it’s actually remarkably simple. Because we are starting the container using /sbin/my_init, it will run anything we put in /etc/service/* on start, which is excellent. So, for our situation we need to create a new folder and ‘run’ file here, as below:
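For a service I’ve simply called ‘redis’:

    mkdir -p /etc/service/redis
    touch /etc/service/redis/run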

Within run, we need to paste the following:
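A minimal runit-style run script; note that runit expects the process it supervises to stay in the foreground, so daemonize is forced off on the command line here regardless of what redis.conf says:

    #!/bin/sh
    exec /usr/bin/redis-server /etc/redis/redis.conf --daemonize no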

Finally, remember to set the run file to be executable:
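That is:

    chmod +x /etc/service/redis/run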

And that’s all we need to do! Now, let’s commit this image to our local library so we can start deploying it en masse!

4. Commit your image so you can re-use it

Now that our ‘redis’ container is working a treat, we want to save it as a pseudo ‘template’ so we can deploy it over and over again on Docker – VERY quickly.

Doing this is remarkably simple: we just ‘docker commit’ the container to our local image library, as below:
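Something like the following, where ‘redis-template’ is just the name I’ve picked for the saved image:

    docker commit redis redis-template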

There we have it – our image is saved locally. Next we can deploy it over and over to our hearts’ content, as below.

5. Deploy 3 instances of your new template

For clarity’s sake, I want to delete my ‘build’ container now, as it would be confusing to have it around in the future. To do this, first ‘docker stop redis’ and then ‘docker rm redis’, as below:
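That is:

    docker stop redis
    docker rm redis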

Simples. Next, let’s deploy 3 instances of our template. I like to port forward using Docker so I can access the containers using my host server’s IP. So for the 3 instances, I am going to associate:

  • node1 with hostip:7001
  • node2 with hostip:7002
  • node3 with hostip:7003

That way I can hit ‘redis-cli -h 192.168.0.2 -p 7001’ and it will redirect to ‘172.17.0.x -p 6379’, for example. Below, I have deployed 3 instances from my image:
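Along these lines, using the ‘redis-template’ image name from the commit step; each host port maps to Redis’s default port 6379 inside the container, and /sbin/my_init is what makes runit start Redis for us:

    docker run -d --name node1 -p 7001:6379 redis-template /sbin/my_init
    docker run -d --name node2 -p 7002:6379 redis-template /sbin/my_init
    docker run -d --name node3 -p 7003:6379 redis-template /sbin/my_init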

Now I can do ‘docker ps -as’ and view all 3 of my new, shiny Redis containers:

Here we can see 3 Redis containers, named node1, node2 and node3 – all with a dedicated port (7001-7003) that maps straight to the Redis server’s port.

To test this is working, we can use ‘redis-cli’ on another server and try to log in to the ‘host server’ using port 7001 and the host server’s 192.168.x.x IP, as below:
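For example, from another machine on the LAN (192.168.0.2 being my Docker host’s IP):

    redis-cli -h 192.168.0.2 -p 7001 ping
    # PONG
    redis-cli -h 192.168.0.2 -p 7002 ping
    # PONG
    redis-cli -h 192.168.0.2 -p 7010 ping
    # fails to connect, as nothing is mapped to 7010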

As you can see, I can connect through to the 3 instances using their port plus the host server’s IP (all the Docker IPs are on the 172.17.0.0/24 range). When I try an incorrect port it fails (just to prove that not ALL ports end up in a Redis server!).

6. Configure Redis slaves

Now that we have 3 individual Redis servers, we want to link them together so that we can test scalability/monitoring of clusters, or whatever our reason is for setting this all up! :)

We will need to configure node2 and node3 to be slaves of node1 which will be our master; we can do this easily by modifying /etc/redis/redis.conf.

To do this we will take advantage of our SSH configuration earlier – simply find out the IP address of the containers for node2 and node3, SSH into them, modify the config and then restart the container, as below:
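Roughly:

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' node2
    ssh root@172.17.0.3              # substitute node2's actual IP
    vi /etc/redis/redis.conf         # or your editor of choice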

In this config file we need to find the ‘slaveof’ line, uncomment it and point it at the IP address of node1 – similar to below:
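For example, if node1’s container IP is 172.17.0.2 (check it with docker inspect; yours will differ):

    slaveof 172.17.0.2 6379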

Finally, restart the container using ‘docker restart node2’ (for example), and it should now be a slave of the master Redis server on node1. To verify this, use redis-cli to log in to node1 and node2 in separate terminals and use ‘set hello world’ on node1, and ‘get hello’ on node2. If it works it should look like the following:
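Something like this, connecting via the mapped ports as before:

    # on node1 (the master)
    192.168.0.2:7001> set hello world
    OK
    # on node2 (a slave)
    192.168.0.2:7002> get hello
    "world"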

And there you have it! 3 Redis servers set up in a horizontally scaled fashion (well, using slaves!) in Docker, using your own phusion-based image. Marvellous!

Next steps

In my next blog, I will show you how to monitor your Redis cluster using Opsview so that you can get a nice pretty dashboard for your Redis stats, as below:

[Screenshot: Opsview dashboard showing Redis stats]

Docker: A how-to

Over Christmas/New Year I had a fair amount of spare time for relaxation, so naturally this was spent tinkering around with various bits of software and kit I haven’t had time to play with during the past few months. One of the things I wanted to test and try in anger was Docker: a wrapper/software suite that wraps LXC into something a bit more usable. There is a video below that explains what Docker is and how it works:

One of the biggest benefits of Docker is that, with a bit of skill, it allows you to satisfy your tech curiosity without compromising your production or home environments, i.e. “Oh cock, I spent 2 hours playing around with X and getting it working and it’s actually not very good; now I have to go remove databases, apache entries, etc”. In Docker, you can simply spin up a new ‘ubuntu container’ (an instance; like a virtual machine essentially, if you think ‘abstract’), play around and develop for a few hours, then either save it, or crash and burn it – without any impact to your underlying server.

This is perfect for those of us who can spend 8+ hours testing out Redis across 5 servers, or Elasticsearch with Packetbeat, etc., yet don’t want to spend 30 minutes making a new virtual machine (KVM ftw), or even worse running it on our stable kit.

Therefore the purpose of this blog is to give you all the tools and commands to turn Docker into something useful that you can use on a daily basis. So, let’s begin!

1. Getting started

At home I run Ubuntu 14.04 as my operating system of choice, therefore all instructions pertaining to the installation of Docker are for Ubuntu (Debian too, I imagine) – if you require RHEL/Centos then follow the guide here.

For detailed instructions (and per-platform instructions), follow the Docker on Ubuntu installation guides here – but for brevity’s sake, here are the steps you need to take in order to get Docker ready to roll on Ubuntu 14.04:
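The quickest route I know of is the docker.io package from the Ubuntu archive (Docker’s own apt repository also works and tends to be newer):

    sudo apt-get update
    sudo apt-get install -y docker.io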

To test that everything worked, run:
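Namely:

    sudo docker ps -as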

This should return the following:
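If the daemon is running and you have no containers yet, that is just an empty listing with only the column headers, roughly:

    CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES   SIZE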

If instead you get an error along the lines of ‘Cannot connect to the Docker daemon’, then the Docker service isn’t running. On Ubuntu, run ‘service docker start’, then re-run ‘docker ps -as’ and it should now work.

2. Using Docker

So now you’ve got Docker, how do you actually use it!?

Docker works using containers – which are essentially your ‘virtual machines’, if you are familiar with virtualization. You can view all your containers by running ‘docker ps -as’, as below:

As you can see, I have a single container called ‘CentosBox’ running that is of the type ‘centos:centos7’. The container ID is 283ac561a135, which can be used if you don’t give your container a human-readable name (i.e. CentosBox!).

So how did I get a CentOS container and how does it work? Docker works using something called ‘images’, which are essentially templates (like virtual appliances). You can create your own image and share it with others, and vice versa, use other people’s images on your box. To find an image you can run the command:
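For example, to search the Docker Hub for Debian-based images:

    docker search debian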

The output lists each matching image along with its description, star count and whether it is an official or automated build.

In my case, a number of images containing the word Debian came back; the one at the top had the most stars and is also a ‘semi’ official image (??), so let’s use that one! To download it locally, use the ‘pull’ command as below:
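Namely:

    docker pull debian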

Now let’s make sure that it has been downloaded; run the ‘images’ command to view all the images stored locally on your Docker server:
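That is:

    docker images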

 

Now that the image has been downloaded, let’s go ahead and deploy a container based on the image so we can get cracking and start to use Debian!
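To start an interactive container from the image, which I’ve named ‘debserver’ since that’s the name it goes by later on:

    docker run -t -i --name debserver debian /bin/bash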

Within seconds (i.e. <3 seconds), we now have a container running Debian on top of our Ubuntu 14.04 server – cool hey! To prove this is a Debian box and not Ubuntu, you can go ahead and install lsb_release (apt-get update && apt-get install lsb-release), and run it:
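Inside the container:

    apt-get update && apt-get install -y lsb-release
    lsb_release -a
    # the Distributor ID line should read Debian rather than Ubuntu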

Cool huh! Now you can go ahead and tinker to your heart’s content, set stuff up, install apache, etc. and it’s all safely contained within Docker. To shut down the container, simply exit the shell.

You can then restart it using the command ‘docker start’, as below:
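i.e.:

    docker start debserver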

And if you want to view the IP address without having to go into the container, run the command:
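For example:

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' debserver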

If you do wish to connect to the console again, run the ‘attach’ command:
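That is:

    docker attach debserver
    # you may need to press Enter once to get the prompt back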

One of the coolest things about Docker, in my opinion, is the ability to save your containers locally for re-use in the future.

Say, for example, I wanted to install my company’s software / configure a series of packages and software on my Debian box and then save it locally so I can quickly deploy it in the future.

After installing apache2, mysql-server, and configuring things so you’re super happy with your Debian container, you can use the ‘commit’ command to save the container locally:
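Namely:

    docker commit debserver smnet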

Here we have saved debserver as an image, and called that image ‘smnet’. I can then go and delete that container safely, as below:
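That is:

    docker stop debserver
    docker rm debserver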

Then, in a minute’s or a year’s time, I can easily redeploy that Debian container that I installed lsb-release on, using the command:
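For example:

    docker run -t -i smnet /bin/bash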

Very cool! This is so powerful from a development and testing perspective, as it allows you to RAPIDLY spin up a large number of platforms (Ubuntu, RHEL, you name it) and configure them as you like. You can then rapidly and safely tear them down, or save them locally to be redeployed back onto the same box, or sent to other servers to be deployed there – very awesome.

One final thing – removing images! Say you don’t like smnet anymore; you can simply remove it using the command ‘rmi’ (ReMove Image):
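That is:

    docker rmi smnet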

And c’est tout – it’s gone!

3. Even cooler stuff

There are even cooler things that can be done with Docker.

For example, you can expose container ports via the public IP of the underlying server. What does that mean? Well, say for example I have apache2 running on debserver; I can currently only access it via http://172.17.0.9. What Docker allows you to do is essentially NAT ports from the host server through to the container, so I can say:

‘If anyone hits http://192.168.0.2:8082, redirect the traffic to http://172.17.0.9’, i.e. if anyone goes to ubuntu:8082, they’ll get apache on the Debian container. To do this, simply run the command:
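Port mappings are set when a container is created, so in practice this means starting a new container from the saved image with the -p flag (the name ‘debweb’ is just an example):

    docker run -t -i --name debweb -p 8082:80 smnet /bin/bash
    # then, inside the container:
    service apache2 start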

Then you can open your browser, go to http://192.168.0.2:8082 and voila, you have an apache box!

One of the other neat things you can do is quickly deploy and link containers, similar to Ubuntu Juju (see my write-up from 18 months ago here:  http://www.everybodyhertz.co.uk/ubuntu-juju/).

One example I’ve tested which works a treat is to deploy a mysql container and a wordpress container, and link them together using just 2 commands:
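A sketch using the official mysql and wordpress images from the Docker Hub; the container names and root password are placeholders, and port 8084 on the host is mapped to the WordPress container:

    docker run -d --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword mysql
    docker run -d --name some-wordpress --link some-mysql:mysql -p 8084:80 wordpress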

Then hit your host server’s IP address on port 8084 in your browser and voila:

[Screenshot: the WordPress install page served via port 8084]

4. Gotchas / Limitations

So far I have found a few limitations of Docker, namely that it is ignorant of sysv/upstart and ordered startup. The real-world impact of this (mainly) is that your services don’t start when you ‘docker start debserver’; i.e. if I installed apache2 on my debserver and used svc to tell it to run on start, it wouldn’t be running when I started that container – as Docker doesn’t listen to svc.

This is a pain in the arse, but the way around it (at the moment at least) is to modify /etc/bash.bashrc to contain the startup commands you need to execute via bash, i.e.:

‘service apache2 start’
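In practice that means appending the command inside the container, for example:

    echo 'service apache2 start' >> /etc/bash.bashrc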

This should start up apache2 for you when the container starts, so you can simply get the IP and test apache2.