Deploying RancherOS on Vultr instances

July 15th, 2016


One of the things I’ve been playing with lately is a Docker container orchestration system called Rancher. I’ve been very excited to discover the features and capabilities the developers have created; despite being quite new, it’s extremely well polished – for the most part it just works.

One of the projects they’ve developed alongside their software is an operating system called RancherOS. It’s a lightweight Linux installation that runs its userspace entirely from within containers; the init process is Docker itself. This allows the systems to run with very low overhead once booted and to devote as much of their resources as possible to serving your containers.

Being a relatively new kid on the block, though, RancherOS is not available in the selection of operating systems you can install when creating a new virtual machine on my cloud provider of choice, Vultr. Fortunately, Vultr lets you use PXE network booting and add OS install options by uploading the appropriate scripts. While searching for how to use this capability I stumbled across Vultr’s own documentation on installing RancherOS using this method. With it I had RancherOS instances up and running within minutes, and I was happy.

But soon I wasn’t quite so happy; although the PXE script got me up and running in a couple of minutes, it missed a few things I really wanted automated, such as setting my SSH key, adding the private network and setting the hostname. It was at this point I discovered that RancherOS supports cloud-init and all the goodies that provides, so I was set. I just needed a way to pull the necessary information into a cloud-init.yml file and have that read by the booting operating system. RancherOS to the rescue again: it allows you to run a script in place of a cloud-init.yml file. I was ready to go.

First up, my iPXE script:

#!ipxe
# Boots RancherOS in a ramdisk with persistent storage on disk /dev/vda
# Location of kernel/initrd images
set base-url https://releases.rancher.com/os/latest
# The datasources URL is a placeholder for wherever the install script is hosted
kernel ${base-url}/vmlinuz rancher.state.formatzero=true rancher.cloud_init.datasources=[url:https://example.com/cloud-init.sh]
initrd ${base-url}/initrd
boot

You can see just how simple this is: I’m saying where the newest RancherOS is, passing a couple of parameters, and then telling it to boot. The formatzero parameter lets me reset a host by writing 1MB of zeros to the start of the disk and then rebooting. Upon startup the system will reinstall itself using the instructions from the cloud-init file, which I point it at in the datasources parameter.
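As a concrete sketch of that reset trick, here it is run against a scratch file standing in for /dev/vda (the file and sizes are stand-ins; on a real host the commands would target the actual disk and run as root):

```shell
disk=$(mktemp)                                   # scratch file standing in for /dev/vda
head -c 2097152 /dev/urandom > "$disk"           # fake 2MB "disk" full of random data
# The reset itself; on a real host: sudo dd if=/dev/zero of=/dev/vda bs=1M count=1 && sudo reboot
dd if=/dev/zero of="$disk" bs=1M count=1 conv=notrunc 2>/dev/null
# The first 1MB is now zeros (which RancherOS treats as "reformat me on next boot");
# the rest of the "disk" is untouched and its size is unchanged.
zeroed=no
cmp -s -n 1048576 "$disk" /dev/zero && zeroed=yes
size=$(wc -c < "$disk")
echo "first MB zeroed: $zeroed"
rm -f "$disk"
```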

Next is a shell script that fetches the information we need and gets installing.


#!/bin/sh
# Fetch instance details from the Vultr metadata API
V4_PRIVATE_IP=`wget -q -O - http://169.254.169.254/v1/interfaces/1/ipv4/address`
HOSTNAME=`wget -q -O - http://169.254.169.254/v1/hostname`

cat > "cloud-config.yaml" <<EOF
hostname: $HOSTNAME
ssh_authorized_keys:
  - ssh-rsa ...
write_files:
  - path: /etc/ssh/sshd_config
    permissions: "0600"
    owner: root:root
    content: |
      AuthorizedKeysFile .ssh/authorized_keys
      ClientAliveInterval 180
      Subsystem	sftp /usr/libexec/sftp-server
      UseDNS no
      PermitRootLogin no
      ServerKeyBits 2048
      AllowGroups docker
rancher:
  network:
    interfaces:
      eth0:
        dhcp: true
      eth1:
        address: $V4_PRIVATE_IP/16
        mtu: 1450
  state:
    fstype: auto
    autoformat:
      - /dev/vda
EOF

sudo ros install --no-reboot -f -c cloud-config.yaml -d /dev/vda
sudo reboot

This script does essentially three things. First it uses the Vultr metadata API to pull down some information we need to set up our VM correctly. It then writes out a YAML file using this information; the file sets my SSH key, configures sshd to be a little more secure, adds the private network and tells RancherOS how to go about installing itself to the virtual hard disk (/dev/vda). Finally, it tells the RancherOS system to install itself on the specified disk using the config file just created, and then reboots.

When it comes back up, the system is ready to be added to Rancher as an available host. This isn’t currently automated, but now that I think about it I’m fairly certain that’s achievable.

Ansible deployment of DNS zone files.

January 8th, 2016


I’ve recently started to refactor my server configuration. It’s always been built with Ansible, but it was one of the first things I ever did with the tool, and I was fairly certain I’d been doing it wrong in every way possible.

One of the things I wanted to do was rationalise what was in my playbooks. They should ideally be all code and no configuration, but I was using a lot of templates for various system files, and they were mostly configuration for content, not services – the chief culprits being DNS zone and nginx/Apache vhost files. When you refer to a template from within a playbook, Ansible expects the files to be inside the playbook itself. This just doesn’t seem right to me. You can specify absolute locations, though, and so with a bit of finagling I was able to get the files where I wanted. The magic is to do something like

{{ inventory_dir }}/../templates/zones/*.j2

inventory_dir is the absolute path to the directory containing your main inventory file, so we can use it to point at a global templates folder with a bit of path manipulation. As you can see, I’m stepping up out of the hosts folder and then down into templates.
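To make that concrete, here’s a sketch of the layout this assumes (the directory and file names are hypothetical, with the inventory living in a hosts/ directory beside templates/):

```shell
# Hypothetical repo layout: templates sits beside the inventory directory,
# not inside any playbook.
repo=$(mktemp -d)
mkdir -p "$repo/hosts" "$repo/templates/zones"
touch "$repo/hosts/inventory" "$repo/templates/zones/example.com.db.j2"
# With inventory_dir pointing at $repo/hosts, the glob resolves like so:
found=$("ls" "$repo/hosts/../templates/zones/"*.j2)
echo "$found"
rm -rf "$repo"
```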

This works really well, but there was another thing I wanted to do. I was configuring the zones to populate on the system using normal vars files (a simple list of domain names), and then using that list to grab the templates to put on the system. This struck me as wasteful: adding a new domain meant adding the template file and then amending an array just to inform Ansible to read it. There had to be a better way. Sure enough, with_fileglob came to the rescue. It allows Ansible to scan a path for files and then gives us the tools necessary to feed our provisioning. With a bit of Jinja2 manipulation magic I ended up with this

- name: Install zone files
  template:
    src: "{{ item }}"
    dest: /etc/bind/zones/{{ item | basename | regex_replace("\.j2$","") }}
    owner: root
    group: root
    mode: 0644
  with_fileglob:
    - "{{ inventory_dir }}/../templates/zones/*.j2"
  register: zone_files
  notify:
    - reload bind9

The final piece of the puzzle was to make sure each of these zone files was referred to in the BIND configuration. I scratched my head over this for a bit, and then it occurred to me that I could register the result of the above action as a variable and use it in the template for that file. So I registered zone_files and set about concocting a template loop

{% for zone in zone_files.results %}
zone "{{ zone.item | basename | regex_replace("\.db\.j2$","") }}" {
   type master;
   file "/etc/bind/zones/{{ zone.item | basename | regex_replace("\.j2$","") }}";
   allow-transfer { slaves; };
};
{% endfor %}
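For a template named example.com.db.j2 (a hypothetical example), the two filters boil down to a basename plus suffix stripping; mimicking them in shell shows what the loop renders for one registered result:

```shell
# Mimic the Jinja2 basename/regex_replace filters for one registered item.
item="/path/to/templates/zones/example.com.db.j2"
file=$(basename "$item" .j2)    # strips .j2 -> example.com.db (the on-disk zone file)
zone=$(basename "$file" .db)    # strips .db -> example.com   (the zone name)
printf 'zone "%s" {\n   type master;\n   file "/etc/bind/zones/%s";\n   allow-transfer { slaves; };\n};\n' "$zone" "$file"
```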

It didn’t turn out too complicated in the end – it’s certainly very readable – and the result is a much nicer playbook, with all configuration both more succinct and living in appropriate places.

Why do my sockets keep disappearing?

May 14th, 2015

I’m working on getting my Froxlor instance set up with PHP5-FPM and nginx, and was encountering an issue where, upon reboot, the PHP functionality would be broken. Looking in syslog gave me about 30 lines of PHP5-FPM failing to start and then giving up. Looking in /var/log/php5-fpm.log told me nothing useful other than that the configtest passed.

I eventually found a helpful message in the /var/log/upstart/php5-fpm.log file:

[14-May-2015 09:21:54] ERROR: unable to bind listening socket for
address '/var/run/nginx/': No such file or directory (2)

True enough, the /var/run/nginx folder did not exist, and I could not figure out where it was going.

After much hair-pulling research I found that /var/run is mounted as tmpfs and so is emptied on every reboot. To fix this I had to ensure the directory was recreated before PHP5-FPM started. It turns out this is fairly easy with upstart. I added this stanza:

pre-start script
    ... other stuff ...
    [ -d /var/run/nginx ] || mkdir -p /var/run/nginx
end script

to the /etc/init/nginx.conf file to have the folder created on boot.
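The guard line itself can be tried in isolation; here it’s pointed at a throwaway path instead of /var/run/nginx:

```shell
# Same mkdir-if-missing guard as in the stanza, against a scratch path.
base=$(mktemp -d)
dir="$base/nginx"
[ -d "$dir" ] || mkdir -p "$dir"      # creates the directory on first run...
[ -d "$dir" ] || mkdir -p "$dir"      # ...and is a harmless no-op once it exists
present=no
[ -d "$dir" ] && present=yes
echo "directory present: $present"
rm -rf "$base"
```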

Problem solved.
