Developing for Snappy Ubuntu from Arch Linux Using LXC


Snappy Ubuntu Core is a minimal, stripped-down version of Ubuntu, specially designed to run applications securely on autonomous machines, which also makes it well suited to large-scale cloud container deployments such as Docker. In this tutorial, we will set up an Ubuntu LXC container on an Arch Linux host as a Snappy development environment.


Prerequisites

  • An Arch Linux desktop installed on your system.
  • A non-root user account with sudo privileges set up on your system.

Install LXC

Before starting, you will need to bring your system up to date. On Arch Linux, you can do this by running the following command:

sudo pacman -Syu

Once your system is up-to-date, you can install LXC and other required components with the following command:

sudo pacman -S lxc arch-install-scripts bridge-utils

Make sure that LXC is appropriately configured by running the following command:

sudo lxc-checkconfig

The output looks like the following:

    Kernel configuration not found at /proc/config.gz; searching...
    Kernel configuration found at /boot/config-3.13.0-32-generic
    --- Namespaces ---
    Namespaces: enabled
    Utsname namespace: enabled
    Ipc namespace: enabled
    Pid namespace: enabled
    User namespace: enabled
    Network namespace: enabled
    Multiple /dev/pts instances: enabled

    --- Control groups ---
    Cgroup: enabled
    Cgroup clone_children flag: enabled
    Cgroup device: enabled
    Cgroup sched: enabled
    Cgroup cpu account: enabled
    Cgroup memory controller: enabled
    Cgroup cpuset: enabled

    --- Misc ---
    Veth pair device: enabled
    Macvlan: enabled
    Vlan: enabled
    Bridges: enabled
    Advanced netfilter: enabled
    CONFIG_NF_NAT_IPV4: enabled
    CONFIG_NF_NAT_IPV6: enabled
    CONFIG_IP_NF_TARGET_MASQUERADE: enabled
    CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
    CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled

    --- Checkpoint/Restore ---
    checkpoint restore: enabled
    CONFIG_FHANDLE: enabled
    CONFIG_EVENTFD: enabled
    CONFIG_EPOLL: enabled
    CONFIG_UNIX_DIAG: enabled
    CONFIG_INET_DIAG: enabled
    CONFIG_PACKET_DIAG: enabled
    CONFIG_NETLINK_DIAG: enabled
    File capabilities: enabled

    Note : Before booting a new kernel, you can check its configuration
    usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

If you see output like the above, your LXC setup is ready.

Creating Containers

First, you need to download the Ubuntu Trusty (14.04) template.

Run the following command to start the interactive download template, which prints a list of all the available OS/architecture combinations:

sudo lxc-create -t download -n snappydev

You should see all of the available templates:

    Setting up the GPG keyring
    Downloading the image index

    ---
    DIST    RELEASE  ARCH     VARIANT  BUILD
    ---
    centos  6        amd64    default  20160921_02:16
    centos  6        i386     default  20160921_02:16
    centos  7        amd64    default  20160921_02:16
    debian  jessie   amd64    default  20160921_22:42
    debian  jessie   arm64    default  20160921_22:42
    debian  jessie   armel    default  20160921_22:42
    debian  jessie   armhf    default  20160921_22:42
    debian  jessie   i386     default  20160921_22:42
    debian  jessie   powerpc  default  20160921_22:42
    debian  jessie   ppc64el  default  20160921_22:42
    debian  jessie   s390x    default  20160921_22:42
    debian  sid      amd64    default  20160921_22:42
    debian  sid      arm64    default  20160921_22:42
    debian  sid      armel    default  20160920_22:42
    debian  sid      armhf    default  20160921_22:42
    debian  sid      i386     default  20160921_22:42
    debian  sid      powerpc  default  20160921_22:42
    debian  sid      ppc64el  default  20160921_22:42
    debian  sid      s390x    default  20160921_22:42
    debian  stretch  amd64    default  20160921_22:42
    debian  stretch  arm64    default  20160921_22:42
    debian  stretch  armel    default  20160921_22:42
    debian  stretch  armhf    default  20160921_22:42
    debian  stretch  i386     default  20160921_22:42
    debian  stretch  powerpc  default  20160921_22:42
    debian  stretch  ppc64el  default  20160921_22:42
    debian  stretch  s390x    default  20160921_22:42
    debian  wheezy   amd64    default  20160921_22:42
    debian  wheezy   armel    default  20160920_22:42
    debian  wheezy   armhf    default  20160921_22:42
    debian  wheezy   i386     default  20160921_22:42
    debian  wheezy   powerpc  default  20160921_22:42
    debian  wheezy   s390x    default  20160921_22:42
    fedora  22       amd64    default  20160922_01:27
    fedora  22       i386     default  20160922_01:27
    fedora  23       amd64    default  20160922_01:49
    fedora  23       i386     default  20160922_01:27
    fedora  24       amd64    default  20160922_01:27
    fedora  24       i386     default  20160922_01:49
    gentoo  current  amd64    default  20160921_14:12
    gentoo  current  i386     default  20160921_14:12
    oracle  6        amd64    default  20160921_11:40
    oracle  6        i386     default  20160921_11:40
    oracle  7        amd64    default  20160921_11:40
    plamo   5.x      amd64    default  20160921_21:36
    plamo   5.x      i386     default  20160921_21:36
    plamo   6.x      amd64    default  20160921_21:36
    plamo   6.x      i386     default  20160921_21:36
    ubuntu  precise  amd64    default  20160922_03:49
    ubuntu  precise  armel    default  20160922_07:02
    ubuntu  precise  armhf    default  20160922_03:49
    ubuntu  precise  i386     default  20160922_03:49
    ubuntu  precise  powerpc  default  20160922_03:49
    ubuntu  trusty   amd64    default  20160922_03:49
    ubuntu  trusty   arm64    default  20160922_03:49
    ubuntu  trusty   armhf    default  20160922_07:02
    ubuntu  trusty   i386     default  20160922_03:49
    ubuntu  trusty   powerpc  default  20160922_03:49
    ubuntu  trusty   ppc64el  default  20160922_03:49
    ubuntu  wily     amd64    default  20160922_03:49
    ubuntu  wily     arm64    default  20160922_03:49
    ubuntu  wily     armhf    default  20160922_03:49
    ubuntu  wily     i386     default  20160922_03:49
    ubuntu  wily     powerpc  default  20160922_03:49
    ubuntu  wily     ppc64el  default  20160922_03:49
    ubuntu  xenial   amd64    default  20160922_03:49
    ubuntu  xenial   arm64    default  20160922_03:49
    ubuntu  xenial   armhf    default  20160922_03:49
    ubuntu  xenial   i386     default  20160922_03:49
    ubuntu  xenial   powerpc  default  20160718_03:49
    ubuntu  xenial   ppc64el  default  20160922_03:49
    ubuntu  xenial   s390x    default  20160922_03:49
    ubuntu  yakkety  amd64    default  20160922_03:49
    ubuntu  yakkety  arm64    default  20160922_03:49
    ubuntu  yakkety  armhf    default  20160922_07:02
    ubuntu  yakkety  i386     default  20160922_03:49
    ubuntu  yakkety  powerpc  default  20160922_03:49
    ubuntu  yakkety  ppc64el  default  20160922_03:49
    ubuntu  yakkety  s390x    default  20160922_03:49
    ---

    Distribution: 
    Release: 

At the prompts, type ubuntu, trusty, and amd64, then wait while the image and root filesystem are downloaded. Once the download completes, your container will have been created, but it is not yet running.
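Alternatively, if you already know which image you want, the download template accepts the same three values as arguments, which skips the interactive prompts entirely:

sudo lxc-create -t download -n snappydev -- --dist ubuntu --release trusty --arch amd64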

Setting up Networking

Before proceeding any further, you need to configure networking.

You will need to set up a network bridge between the host and the container, which you can do with the netctl utility. If netctl is not already installed on your system, install it by running the following command:

sudo pacman -S netctl

Next, create the file /etc/netctl/lxcbridge to define the bridge:

sudo nano /etc/netctl/lxcbridge

Add the following lines. This is a working profile sketch: BindsToInterfaces names the physical interface to attach (eno1 here, matching the NAT rule used later) and the Address value is an example, so adjust both to suit your setup:

    Description="LXC bridge"
    Interface=br0
    Connection=bridge
    BindsToInterfaces=('eno1')
    IP=static
    Address=('10.0.3.1/24')

Now, start the bridge by running the following commands:

sudo netctl switch-to lxcbridge
sudo netctl start lxcbridge
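If you want the bridge to come up automatically at boot, netctl can also enable the profile as a systemd unit:

sudo netctl enable lxcbridge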

Now, run the ifconfig command to verify that the bridge interface is up and running:

sudo ifconfig


    docker0   Link encap:Ethernet  HWaddr 02:42:9c:9b:67:68  
              inet addr:  Bcast:  Mask:
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    lo        Link encap:Local Loopback  
              inet addr:  Mask:
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:10442 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10442 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:866354 (866.3 KB)  TX bytes:866354 (866.3 KB)

    br0       Link encap:Ethernet  HWaddr fe:78:f2:9a:af:00  
              inet addr:  Bcast:  Mask:
              inet6 addr: fe80::d465:f9ff:fe4f:390b/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:41 errors:0 dropped:0 overruns:0 frame:0
              TX packets:146 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:3776 (3.7 KB)  TX bytes:22516 (22.5 KB)

    wlan0     Link encap:Ethernet  HWaddr 4c:bb:58:9c:f5:55  
              inet addr:  Bcast:  Mask:
              inet6 addr: fe80::4ebb:58ff:fe9c:f555/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:9601 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10301 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:5397499 (5.3 MB)  TX bytes:2272749 (2.2 MB)
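Note that ifconfig comes from the net-tools package, which is not always present on Arch; the iproute2 equivalent shows the same information for the bridge:

ip addr show br0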

Next, edit /etc/default/lxc and update it as shown below:

sudo nano /etc/default/lxc

    # LXC_AUTO - whether or not to start containers at boot

    # BOOTGROUPS - What groups should start on bootup?
    #   Comma separated list of groups.
    #   Leading comma, trailing comma or embedded double
    #   comma indicates when the NULL group should be run.
    # Example (default): boot the onboot group first then the NULL group
    BOOTGROUPS="onboot,"

    # SHUTDOWNDELAY - Wait time for a container to shut down.
    #   Container shutdown can result in lengthy system
    #   shutdown times. Even 5 seconds per container can be
    #   too long.
    SHUTDOWNDELAY=5

    # OPTIONS can be used for anything else.
    #   If you want to boot everything then
    #   options can be "-a" or "-a -A".
    OPTIONS=

    # STOPOPTS are stop options. They can be used for anything else to stop.
    #   If you want to kill containers fast, use -k
    STOPOPTS="-a -A -s"

    USE_LXC_BRIDGE="true"  # overridden in lxc-net

    [ ! -f /etc/default/lxc-net ] || . /etc/default/lxc-net

You also need to allow packet forwarding between interfaces. Check the current setting by running the following command:

sudo sysctl net.ipv4.ip_forward

    net.ipv4.ip_forward = 1

If the value is 0, enable forwarding by running:

sudo sysctl net.ipv4.ip_forward=1

    net.ipv4.ip_forward = 1
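A value set with sysctl this way lasts only until the next reboot. To make forwarding permanent, you can drop the setting into /etc/sysctl.d/ (the file name below is arbitrary):

echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/40-ip-forward.conf
sudo sysctl --system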

Now, enable the host to perform NAT for the container's traffic by running the following command (replace eno1 with the name of your outbound network interface):

sudo iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
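This rule is also lost on reboot. On Arch Linux, the iptables package ships an iptables.service that loads /etc/iptables/iptables.rules at boot, so you can persist the rule by saving the current ruleset there:

sudo sh -c 'iptables-save > /etc/iptables/iptables.rules'
sudo systemctl enable iptables.service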

Configuring the Container

The container's main configuration file is located at /var/lib/lxc/snappydev/config. Open it for editing:

sudo nano /var/lib/lxc/snappydev/config

Change the file as shown below:

    # Template used to create this container: /usr/share/lxc/templates/lxc-download
    # Parameters passed to the template:
    # For additional config options, please look at lxc.container.conf(5)

    # Distribution configuration
    lxc.include = /usr/share/lxc/config/ubuntu.common.conf
    lxc.arch = x86_64

    # Container specific configuration
    lxc.rootfs = /var/lib/lxc/snappydev/rootfs
    lxc.utsname = snappydev

    # Network configuration
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up
    lxc.network.name = eth0
    # lxc.network.ipv4 =
    # lxc.network.ipv4.gateway =

Save and close the file when you have finished.

If you have set up the bridge with a static IP, uncomment the lxc.network.ipv4 and lxc.network.ipv4.gateway parameters in the file above and fill in matching values.
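For example, assuming the bridge keeps the 10.0.3.1/24 address from the netctl profile earlier, the container can be given any free address on that subnet (both values here are illustrative):

    lxc.network.ipv4 = 10.0.3.100/24
    lxc.network.ipv4.gateway = 10.0.3.1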

After setting everything up, you can start the container by running the following command:

sudo lxc-start -n snappydev
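Optionally, if you want this container to start automatically at boot, LXC reads two further keys from the same config file; this pairs with the BOOTGROUPS="onboot," setting in /etc/default/lxc shown earlier (a sketch using the default group name):

    # append to /var/lib/lxc/snappydev/config
    lxc.start.auto = 1
    lxc.group = onboot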

Now, check that the container is running by running the following command:

sudo lxc-ls -f

You should see the following output:

    NAME       STATE    IPV4  IPV6  GROUPS  AUTOSTART
    snappydev  RUNNING  -     -     -       NO
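You can also inspect a single container's state, PID, and network details with the lxc-info command:

sudo lxc-info -n snappydev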

If the container has not started properly, you can debug it by running the following command:

sudo lxc-start -n snappydev -F --logpriority=DEBUG

The output should give you enough information to work out what's wrong with your container.
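To keep that debug output for later inspection, lxc-start can also write it to a log file:

sudo lxc-start -n snappydev -F --logpriority=DEBUG --logfile=/tmp/snappydev.log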

Once the container has started successfully, it's time to enter it. You can attach to the container using the lxc-attach command:

sudo lxc-attach -n snappydev

You should see the following error:

    group: cannot find name for group ID 19

To resolve this error, you will need to make a small change to the /root/.bashrc file:

root@snappydev:/# nano .bashrc

Add the following line at the end of the file (note the colon before $PATH, which preserves the existing search path):

    export PATH="/bin:/usr/bin:/sbin:$PATH"

Next, set the password for the ubuntu user:

root@snappydev:/# passwd ubuntu
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully

Now, exit that shell and log in as a normal user by typing:

sudo lxc-console -n snappydev

Enter the username and password which you created above:

    Connected to tty 1
    Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

    Ubuntu 14.04 snappydev pts/0

    snappydev login: ubuntu
    Password: 
    Last login: Sat Sep 24 10:40:01 UTC 2016 on pts/0
    ubuntu@snappydev:~$
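When you are done, you can leave the console with Ctrl+a q and, if necessary, stop the container from the host:

sudo lxc-stop -n snappydev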
