Playing doctor with a Linux system

Monitoring Linux system health is a route to peace of mind. When a fleet of machines is serving an application, it is comforting to know that each of them, individually and collectively, is operating within hardware performance limits.

There are countless libraries, tools, and services available to monitor Linux system health. It is also very easy to acquire system health information directly, allowing construction of a bespoke health monitoring subsystem.

There are five critical metrics of system health:

  1. Available memory
  2. CPU utilisation
  3. Network I/O (data transmission and receipt)
  4. Available disk space
  5. Disk I/O (reads and writes to disk)

Let’s take a look at how we can examine each one. This article is written from the perspective of Ubuntu 16.04, but many of the commands are available across Linux distributions. Some of them only require the Linux kernel itself.

Available Memory

We can get available memory using the free command. On its own, free will give us a bunch of columns describing the state of physical and swap memory.

$ free
       total  used  free  shared  buff/cache  available
Mem:   498208 47676 43408 5568    407124      410968
Swap:       0     0     0

There’s a lot going on here. What we are looking for, in plain English, is ‘how much memory is available to do stuff’. If that number were low, we would know that the system was in danger of running out of memory.

The number we want is counterintuitively not the one in the column labelled ‘free’. That column tells us how much memory the system is not using for anything at all. Linux uses memory to cache regularly accessed files, and for other purposes that don’t preclude its allocation to a running program.

What we want is column 7, ‘available’. We can get just that number by using grep and awk. We can also use the -m flag to return results in megabytes, rather than kibibytes, thus making the output more readable.

$ free -m | grep 'Mem:' | awk '{print $7}'

That’s much better! A single integer representing how many megabytes of memory are available for the system to do things.

On its own, this is not very useful. You are not going to go around SSH’ing to every box in your fleet, running commands and noting numbers down on a piece of paper. The magic happens when the output is combined with some program that can collate all the data. For example, in Python, we could use the subprocess module to run the command and store the number:

import subprocess

command = "free -m | grep 'Mem:' | awk '{print $7}'"
memory_available = int(subprocess.getoutput(command))

CPU Utilisation

To monitor Linux system CPU utilisation, we can use the top command. top produces a whole bunch of output measuring the CPU utilisation of every process on the system. To get an overall sense of system health, we can zero in on the third line:

$ top
%Cpu(s):  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

Four numbers are of use to us: those labelled us, sy, id, and wa, which indicate the proportion of CPU time allocated to user processes, system processes, idling, and I/O wait respectively.

To acquire these numbers programmatically, we need to adjust top's output slightly. We’ll use a few flags:

  • -b: Run in a non-interactive mode. Top will return to the shell rather than running indefinitely
  • -n: Sample for a specified number of iterations. We will use two iterations, and take numbers from the second.
  • -d: Delay time between iterations. We will supply a non-zero number so that top acquires data over some time

The whole command will be:

$ top -d 3 -b -n 2 | grep "%Cpu"

In Python, we can execute the command and split the output into individual floating point numbers. To do so, we take advantage of the fixed-width position of top’s output.

import subprocess

command = 'top -d 3 -b -n 2 | grep "%Cpu"'
output = subprocess.getoutput(command)
data = output.split('\n')[1]
cpu_user = float(data[8:13])
cpu_system = float(data[17:22])
cpu_idle = float(data[35:40])
cpu_io_wait = float(data[44:49])
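Slicing at fixed offsets is brittle: top's column positions can shift between versions and locales. As a sketch of a sturdier alternative (the helper name is my own), we can split the line on commas and whitespace instead:

```python
def parse_cpu_line(line: str) -> dict:
    """Parse a '%Cpu(s):' line from top -b into a
    {label: value} dictionary, e.g. {'us': 0.3, ...}."""
    # Drop the '%Cpu(s):' prefix, leaving 'value label' pairs
    _, _, fields = line.partition(':')
    data = {}
    for field in fields.split(','):
        value, label = field.split()
        data[label] = float(value)
    return data

# cpu = parse_cpu_line(output.split('\n')[1])
# cpu_user, cpu_idle = cpu['us'], cpu['id']
```

This keeps working even if top pads its columns differently on another system.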

Network I/O

All the hardware in the world won’t save you if your network connection can’t keep up. Monitoring transmit and receive volumes is, fortunately, pretty easy. The kernel provides us with a convenient window onto network activity, in /sys/class/net.

$ ls /sys/class/net
eth0 lo tun0

On this example system, /sys/class/net contains three network interfaces. An ethernet adapter eth0, the local loopback lo, and a vpn tunnel adapter tun0.

How you proceed to gather the information available about these interfaces is going to depend heavily on your situation. The following technique satisfies a couple of assumptions:

  1. We don’t know the number or disposition of network interfaces in advance
  2. We want to gather transmit / receive statistics for all interfaces except the local loopback
  3. We know that the local loopback interface name alone will always start with the character l.

These assumptions might not apply to you. Even if they don’t, you might be able to apply some of the techniques used herein to your situation.

Inside each interface, there is a statistics directory containing a wealth of information.

$ ls /sys/class/net/tun0/statistics
collisions        rx_packets
multicast         tx_aborted_errors
rx_bytes          tx_bytes
rx_compressed     tx_carrier_errors
rx_crc_errors     tx_compressed
rx_dropped        tx_dropped
rx_errors         tx_errors
rx_fifo_errors    tx_fifo_errors
rx_frame_errors   tx_heartbeat_errors
rx_length_errors  tx_packets
rx_missed_errors  tx_window_errors

To get a general overview of network activity, we will zero in on rx_bytes and tx_bytes.


$ cat /sys/class/net/tun0/statistics/rx_bytes
$ cat /sys/class/net/tun0/statistics/tx_bytes

These integer counters tick upwards from, effectively, system boot. To sample network traffic, you can take readings of the counters at two points in time. The counters will wrap, so if you have a very busy or long-lived system you should account for potential wrapping.

Here is a Python program that samples current network activity in kilobytes per second.

# 'sample' snippet
root = 'cat /sys/class/net/'
root += interface + '/statistics/'
rx_command = root + 'rx_bytes'
tx_command = root + 'tx_bytes'
start_rx = int(subprocess.getoutput(rx_command))
start_tx = int(subprocess.getoutput(tx_command))
time.sleep(seconds)
end_rx = int(subprocess.getoutput(rx_command))
end_tx = int(subprocess.getoutput(tx_command))
rx_delta = end_rx - start_rx
tx_delta = end_tx - start_tx
if rx_delta < 0:
    rx_delta = 0
if tx_delta < 0:
    tx_delta = 0
rx_kbs = int(rx_delta / seconds / 1000)
tx_kbs = int(tx_delta / seconds / 1000)

Note that this program includes a hard coded interface, tun0. To gather all interfaces, you might loop through the output of ls and exclude the loopback interface.  For purposes that will become clearer later on, we will store each interface name as a dictionary key.

# 'interface loop' snippet
output = subprocess.getoutput('ls /sys/class/net')
all_interfaces = output.split('\n')
data = dict()
for interface in all_interfaces:
    if interface[0] == 'l':
        continue
    data[interface] = None

On a system with multiple interfaces, it would be misleading to measure the traffic across each interface in sequence. Ideally we would sample each interface at the same time. We can do this by sampling each interface in a separate thread. Here is a Python program that ties everything together and does just that. The above two snippets, “sample” and “interface loop”, should be included where annotated.

import subprocess
import time
from multiprocessing.dummy import Pool as ThreadPool

DEFAULT_SAMPLE_SECONDS = 2


def network(seconds: int) -> {str: (int, int)}:
    """
    Return a dictionary, in which each string
    key is the name of a network interface,
    and in which each value is a tuple of two
    integers, the first being sampled transmitted
    kb/s and the second received kb/s, averaged
    over the supplied number of seconds.

    The local loopback interface is excluded.
    """
    # Include 'interface loop' snippet here

    def sample(interface) -> None:
        # Include 'sample' snippet here
        data[interface] = (tx_kbs, rx_kbs)

    pool = ThreadPool(len(data))
    arguments = [key for key in data]
    _ =, arguments)
    return data


if __name__ == '__main__':
    result = network(DEFAULT_SAMPLE_SECONDS)
    output = 'Interface {iface}: {rx} rx kb/s'
    output += ', {tx} tx kb/s'
    for interface in result:
        print(output.format(
            iface=interface,
            rx=result[interface][1],
            tx=result[interface][0]
        ))

Running the whole thing gives us neat network output for all interfaces:

$ python3
Interface tun0: 10 rx kb/s, 64 tx kb/s
Interface eth0: 54 rx kb/s, 25 tx kb/s

Of course, printing is fairly useless. We can import the module and function elsewhere:

from import network as network_io

sample = network_io(2)
for interface in sample:
    tx = sample[interface][0]
    if tx > TX_DANGER_THRESHOLD:
        pass  # Raise alarm
# Do other stuff with sample

Disk Space

After all that hullaballoo with network I/O, disk space monitoring is trivial. The df command gives us information about disk usage:

$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev              239812       0    239812   0% /dev
tmpfs              49824    5540     44284  12% /run
/dev/xvda1       8117828 3438396   4244156  45% /
tmpfs             249104       0    249104   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs             249104       0    249104   0% /sys/fs/cgroup
tmpfs              49824       0     49824   0% /run/user/1000

This is a bit of a mess. We want column four, ‘Available’, for the partition we wish to monitor, which in this case is /dev/xvda1. The picture gets much messier if you have more than one partition on the system. In the case of a system with one partition, you will likely find it mounted at /dev/somediskname1. Common disk names include:

  • sd: SATA and virtualised SATA disks
  • xvd: Xen virtual disks. You will see this if you are on EC2 or other Xen based hypervisors
  • hd: IDE and virtualised IDE disks

The final letter increments with each successive disk. For example, a machine's second SATA disk would be sdb. An integer partition number is appended to the disk name. For example, the third partition on a machine's third Xen virtual disk would be xvdc3.
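That naming scheme is mechanical enough to encode. As an illustrative sketch (the helper name is my own invention), we can derive a partition's device name from the disk prefix, the disk's position, and the partition number:

```python
import string


def partition_name(prefix: str, disk: int, partition: int) -> str:
    """Derive a Linux partition device name, e.g. the third
    partition on the third 'xvd' disk -> 'xvdc3'."""
    # Disks are lettered a, b, c, ... in order of attachment
    letter = string.ascii_lowercase[disk - 1]
    return prefix + letter + str(partition)
```

For example, partition_name('sd', 2, 1) yields the name of the first partition on a machine's second SATA disk.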

You will have to think about how best to deal with getting the data out of df. In my case, I know that all machines on my network are Xen guests with a single partition,  so I can safely assume that /dev/xvda1 will be the partition to examine on all of them. A command to get the available megabytes of disk space on those machines is:

$ df -m | grep "^/" | awk '{print $4}'

The grep phrase "^/" will grab every line beginning with "/". On a machine with a single partition, this will give you that partition, whether the disk is sd, xvd, hd, and so on.

Programmatically acquiring the available space is then trivial. For example, in Python:

import subprocess

command = 'df -m | grep "^/" | awk \'{print $4}\''
free = int(subprocess.getoutput(command))

Disk I/O

A system thrashing its disks is a system yielding unhappy users. /proc/diskstats contains data that allow us to monitor disk I/O. Like df, /proc/diskstats output is a messy pile of numbers.

$ cat /proc/diskstats
202       1 xvda1 2040617 57 50189642 1701120 3799712 2328944 85759400 1637952 0 1064928 3338520

Column 6 is the number of sectors read, and column 10 is the number of sectors written since, effectively, boot. On a long-lived or shockingly busy system these numbers could wrap. To measure I/O per second, we can sample these numbers over a period of time.

Like with disk space monitoring, you will need to consider disk names and partition numbers. Because I know this system will only ever have a single xvd disk with a single partition, I can safely hardcode xvda1 as a grep target:

$ cat /proc/diskstats | grep "xvda1" | awk '{print $6, $10}'
50192074 85761968
Once we have the number of sectors read and written, we can multiply by the sector size to get I/O in bytes per second. To get the sector size, we can use the fdisk command, which will require root privileges.

$ sudo fdisk -l | grep "Sector size" | awk '{print $4}'

On a machine with more than one disk, you will need to think about getting sector sizes for each disk.

Here’s a Python program that ties all that together:

import subprocess
import time

seconds = 2

command = 'sudo fdisk -l | grep '
command += '"Sector size" | awk \'{print $4}\''
sector_size = int(subprocess.getoutput(command))

command = 'cat /proc/diskstats | grep "xvda1"'
command += ' | awk \'{print $6, $10}\''

sample = subprocess.getoutput(command)
start_read = int(sample.split(' ')[0])
start_write = int(sample.split(' ')[1])

time.sleep(seconds)

sample = subprocess.getoutput(command)
end_read = int(sample.split(' ')[0])
end_write = int(sample.split(' ')[1])

delta_read = (end_read - start_read) * sector_size
delta_write = (end_write - start_write) * sector_size
read_kb_s = int(delta_read / seconds / 1000)
write_kb_s = int(delta_write / seconds / 1000)

A Bespoke Suit

Now that we’ve collected all these data, we can decide what to do with them. I like to gather them up into a JSON payload and shoot them off to a telemetry-aggregating machine elsewhere on the network. From there it is a hop, skip and a jump to pretty graphs and fun SQL queries.

By gathering the data yourself, you have the freedom to store, organise, and present the data as you see fit. Sometimes, it is most appropriate to reach for a third party tool. In others, a bespoke solution gives unique and powerful insight.

Automating Application Installation on Linux with Python

Perhaps you have a shiny new web application. It responds to HTTPS requests, delivering pure awesome in response. You would like to install the application on a Linux server, perhaps in Amazon EC2.

Performing the installation by hand is a Bad Idea™. Manual installation means you cannot easily scale across multiple machines, you cannot recover from failure, and you cannot iterate on the machine configuration.

You can automate the installation process with Python. The following are examples of procedures that introduce principles for automation. This is not a step-by-step guide for an entire deployment, but it will give you the tools you need to build your own.

Connecting via SSH with Paramiko

A manual installation process might involve executing lots of commands inside an SSH session. For example:

$ sudo apt update
$ sudo apt install nginx

All of your hard-won SSH skills can be transferred to a Python automation. The Paramiko library offers SSH interaction inside Python programs. I like to shorthand my use of Paramiko by wrapping it in a little container:


from paramiko import SSHClient
from paramiko import SFTPClient
from paramiko import AutoAddPolicy


class SSHSession:
    """Abstraction of a Paramiko SSH session"""
    def __init__(
        self,
        hostname: str,
        keyfile: str,
        username: str
    ) -> None:
        self._ssh = SSHClient()
        self._ssh.set_missing_host_key_policy(AutoAddPolicy())

    def execute(self, command: str) -> str:
        """Return the stdout of an SSH command"""
        _, stdout, _ = self._ssh.exec_command(command)

    def open_sftp(self) -> SFTPClient:
        """Return an SFTP client"""
        return self._ssh.open_sftp()

We use paramiko.AutoAddPolicy to automatically add the server to our known_hosts file. This effectively answers ‘yes’ to the prompt you would see if initiating a first time connection in an interactive terminal:

The authenticity of host '<host> (<ip>)' can't
be established. ECDSA key fingerprint is
<fingerprint>.
Are you sure you want to continue connecting

You should only do this if you have otherwise secured the network path to your server. If you have not, connect manually first via a terminal and check the key fingerprint.

We initialise an SSHSession instance with a set of parameters that conveniently match what you might already have in your SSH config file. For example:

$ cat ~/.ssh/config
Host some_friendly_server_name
    User hugh
    IdentityFile ~/.ssh/some_private_key

The matching Paramiko session would be:

from ssh_session import SSHSession

SSH = SSHSession(
    hostname='some_friendly_server_name',
    keyfile='~/.ssh/some_private_key',
    username='hugh'
)
We now have a convenient little object that can run SSH commands for us. Note that the object ignores errors in stderr returned by paramiko.SSHClient.exec_command(). While this is convenient when we are confident of our commands, it makes debugging difficult. I recommend debugging in an interactive SSH session rather than in Python.

Installing Dependencies

Let’s start by installing Nginx and Git. You could substitute these with any dependency of your application.

_ = SSH.execute('sudo apt update')
_ = SSH.execute('sudo apt install nginx -y')
_ = SSH.execute('sudo apt install git -y')

Note the ‘-y’ at the end of the apt install command. Without it, the session will hang at the apt continuance prompt:

After this operation, 4,816 kB of additional disk
space will be used.

Do you want to continue? [Y/n]

The requirement to bypass interactive prompts will be a common thread throughout this article. When automating your process, step through it manually and take careful note of where interactive prompts are required.
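apt's -y flag is one instance of a general pattern. A hedged sketch (the helper function is my own invention) that also sets DEBIAN_FRONTEND=noninteractive, which suppresses debconf's occasional configuration dialogs:

```python
def noninteractive_install(package: str) -> str:
    """Build an apt-get install command that never
    stops to ask a question."""
    return (
        'sudo DEBIAN_FRONTEND=noninteractive '
        'apt-get install -y ' + package
    )


# _ = SSH.execute(noninteractive_install('nginx'))
```

Building the command string in one place also means any future prompt-suppressing flags need only be added once.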

Creating a Linux User

Our application should, of course, run under its own user. Let’s automate that process:

APP_USER = 'farquad'
command = 'sudo adduser --system --group '
command += APP_USER
_ = SSH.execute(command)

Note that we establish the username as a constant, and don’t hardcode it into our command. This external definition, whether through a constant or a function parameter or however else it is done, is important for several reasons.

  1. It allows you to re-use the command with multiple parameters. For example, perhaps your application requires multiple users.
  2. It implements the Don’t-Repeat-Yourself (‘DRY’) principle. We will likely need the username elsewhere, and by externally defining it we have created a single source of authority.
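Taken one step further, the command itself can live behind a function, so creating any number of users stays a one-liner. This is a sketch (the function name is mine); it takes the session object so it can be reused across machines:

```python
def create_system_user(session, username: str) -> None:
    """Create a system user and matching group on the
    machine behind the given SSH session."""
    session.execute('sudo adduser --system --group ' + username)


# create_system_user(SSH, 'farquad')
```

Each additional user your application needs is then one more call, not one more hand-typed command.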

Automating File Transfer using Paramiko SFTP

Suppose your application is stored in a Git repository, like Bitbucket or Github, and that the repository is private. It is no use having an automated installation process if you need to answer an HTTPS password prompt when pulling a repository.

Instead, let’s automate the process by using SSH and installing a repository key on the machine. First, the transfer process:

KEY_FILEPATH = '~/some/key'
with open(KEY_FILEPATH, 'r') as keyfile:
    key =

sftp = SSH.open_sftp()
remote_file = sftp.file('~/repository_key', 'w')
remote_file.close()

Note that we first SFTP’d the key into our privileged user’s home directory, rather than directly into the application user’s directory. This is because our privileged user does not have permission to write into the application users’ home directory without sudo elevation, which we can’t do in the SFTP session.

Let’s move it into the appropriate place, and modify permissions appropriately:

command = 'sudo mkdir /home/' + APP_USER + '/.ssh'
_ = SSH.execute(command)

command = 'sudo mv ~/repository_key'
command += ' /home/' + APP_USER + '/.ssh/repository_key'
_ = SSH.execute(command)

command = 'sudo chmod 600 /home/' + APP_USER
command += '/.ssh/repository_key'
_ = SSH.execute(command)

The file is now in the appropriate location, with the appropriate permissions. We repeat the process to install an SSH configuration file. I won’t lay out the entire process, but the principle is the same: open an SFTP session, plop the file on the server, and move and re-permission it as necessary.

There is one important consideration. Because we have been creating directories as our privileged user, we need to turn over those directories to the application user:

command = 'sudo chown -R '
command += APP_USER + ':' + APP_USER
command += ' /home/' + APP_USER + '/.ssh'
_ = SSH.execute(command)

In the end, there should be an SSH configuration file on the server owned by the application user. Here is an example, using all the same names we have been using so far:

$ cat /home/farquad/.ssh/config
    User git
    IdentityFile ~/.ssh/repository_key
$ ls -la /home/farquad
drwxrwxr-x 2 farquad farquad 4096 Mar 23 18:47 .ssh

Pulling the Repository

The next step is easy mode. You’ve set things up such that your wonderful application can be pulled down in a single command:

command = 'cd /home/' + APP_USER
command += '; sudo -u ' + APP_USER
command += ' git clone ' + REPOSITORY
_ = SSH.execute(command)

Well, maybe almost easy mode. There’s a bit going on here. Note the separation of commands via a semicolon. Consider your Python SSH connection to be a very ‘loose’ one. It won’t retain environment information, including current directory, between executions. Therefore, to use conveniences like cd, we chain commands with semicolons.

Also note the sudo -u farquad. We do this so that the git repository is pulled down as the property of our application user,  not our privileged user. This saves us all the dicking about with permissions that plagued the SFTP steps above.

Paramiko and Virtual Environments, like Virtualenv

The ‘loose’ nature of the Paramiko session referenced above becomes particularly important when working with virtual environments. Consider the following:

$ virtualenv -p python3 /home/farquad/app
$ cd /home/farquad/app
$ source bin/activate
(app) $ pip install gunicorn

If executed as distinct commands via a Paramiko SSH session, the gunicorn library will end up installed via the system-wide pip. If you then attempt to run the application inside the virtual environment, say from a systemd unit file…


… then your application will fail, because gunicorn is missing from the virtual environment. Instead, be sure to execute commands that require a particular environment in an atomic manner.
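A sketch of the atomic form, chaining with && so that a failure at any step halts the sequence (paths follow the example above):

```python
APP_USER = 'farquad'

# One execution, so the activated environment survives
# from step to step
command = (
    'cd /home/' + APP_USER + '/app && '
    'source bin/activate && '
    'pip install gunicorn'
)
# _ = SSH.execute(command)
```

Alternatively, skip activation entirely and invoke the environment's own tools by path, e.g. /home/farquad/app/bin/pip install gunicorn; the effect is the same and it sidesteps shell differences.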

Your Move!

Once your application deployment is automated, you have freed yourself from having to trudge through SSH command sequences every time you want to adjust your deployment. The fear of breaking a server disappears, because you can fire up a replacement at will. Enjoy the freedom!