Monitoring Linux system health is a route to peace of mind. When a fleet of machines is serving an application, it is comforting to know that each of them, and the fleet as a whole, is operating within hardware performance limits.
There are countless libraries, tools, and services available to monitor Linux system health. It is also very easy to acquire system health information directly, allowing construction of a bespoke health monitoring subsystem.
There are five critical metrics of system health:
- Available memory
- CPU utilisation
- Network I/O (data transmission and receipt)
- Available disk space
- Disk I/O (reads and writes to disk)
Let’s take a look at how we can examine each one. This article is written from the perspective of Ubuntu 16.04, but many of the commands are available across Linux distributions. Some of them only require the Linux kernel itself.
Available Memory
We can get available memory using the free command. On its own, free will give us a bunch of columns describing the state of physical and swap memory.
$ free
              total        used        free      shared  buff/cache   available
Mem:         498208       47676       43408        5568      407124      410968
Swap:             0           0           0
There’s a lot going on here. What we are looking for, in plain English, is ‘how much memory is available to do stuff’. If such a number was low, we would know that the system was in danger of running out of memory.
The number we want is counterintuitively not the one in the column labelled ‘free’. That column tells us how much memory the system is not using for anything at all. Linux uses memory to cache regularly accessed files, and for other purposes that don’t preclude its allocation to a running program.
What we want is column 7, ‘available’. We can get just that number by using grep and awk. We can also use the -m flag to return results in megabytes, rather than kilobytes, thus making the output more readable.
$ free -m | grep 'Mem:' | awk '{print $7}'
400
That’s much better! A single integer representing how many megabytes of memory are available for the system to do things.
On its own, this is not very useful. You are not going to go around SSH’ing to every box in your fleet, running commands and noting numbers down on a piece of paper. The magic happens when the output is combined with some program that can collate all the data. For example, in Python, we could use the subprocess module to run the command and store the number:
"""memory.py""" import subprocess command = "free -m | grep 'Mem:' | awk '{print $7}'" memory_available = int(subprocess.getoutput(command))
CPU Utilisation
To monitor Linux system CPU utilisation, we can use the top command. top produces a whole bunch of output measuring the CPU utilisation of every process on the system. To get an overall sense of system health, we can zero in on the third line:
$ top
//
%Cpu(s): 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
//
Four numbers are of use to us: those suffixed with us, sy, id, and wa, which indicate the proportion of CPU time allocated to user processes, system processes, idling, and I/O wait respectively.
To acquire these numbers programmatically, we need to adjust top’s output slightly. We’ll use a few flags:
- -b: Run in a non-interactive batch mode, so the output can be consumed by other programs
- -n: Sample for a specified number of iterations, then return to the shell rather than running indefinitely. We will use two iterations, and take numbers from the second
- -d: Delay time between iterations. We will supply a non-zero number so that top acquires data over some time
The whole command will be:
$ top -d 3 -b -n 2 | grep "%Cpu"
In Python, we can execute the command and split the output into individual floating point numbers. To do so, we take advantage of the fixed-width position of top’s output.
"""cpu.py""" import subprocess command = 'top -d 3 -b -n 2 | grep "%Cpu"' output = subprocess.getoutput(command) data = output.split('\n')[1] cpu_user = float(data[8:13]) cpu_system = float(data[17:22]) cpu_idle = float(data[35:40]) cpu_io_wait = float(data[44:49])
Network I/O
All the hardware in the world won’t save you if your network connection can’t keep up. Monitoring transmit and receive volumes is, fortunately, pretty easy. The kernel provides us with a convenient window onto network activity, in /sys/class/net.
$ ls /sys/class/net
eth0  lo  tun0
On this example system, /sys/class/net contains three network interfaces: an Ethernet adapter eth0, the local loopback lo, and a VPN tunnel adapter tun0.
How you proceed to gather the information available about these interfaces is going to depend heavily on your situation. The following technique satisfies a couple of assumptions:
- We don’t know the number or disposition of network interfaces in advance
- We want to gather transmit / receive statistics for all interfaces except the local loopback
- We know that the local loopback interface name alone will always start with the character l
These assumptions might not apply to you. Even if they don’t, you might be able to apply some of the techniques used herein to your situation.
Inside each interface’s directory, there is a statistics directory containing a wealth of information.
$ ls /sys/class/net/tun0/statistics
collisions          rx_packets
multicast           tx_aborted_errors
rx_bytes            tx_bytes
rx_compressed       tx_carrier_errors
rx_crc_errors       tx_compressed
rx_dropped          tx_dropped
rx_errors           tx_errors
rx_fifo_errors      tx_fifo_errors
rx_frame_errors     tx_heartbeat_errors
rx_length_errors    tx_packets
rx_missed_errors    tx_window_errors
To get a general overview of network activity, we will zero in on rx_bytes and tx_bytes.
$ cat /sys/class/net/tun0/statistics/rx_bytes
11880392069
$ cat /sys/class/net/tun0/statistics/tx_bytes
128763654271
These integer counters tick upwards from, effectively, system boot. To sample network traffic, you can take readings of the counters at two points in time. The counters can wrap, so if you have a very busy or long-lived system you should account for potential wrapping.
Here is a Python program that samples current network activity in kilobytes per second.
"""network.py - sample snippet""" // root = 'cat /sys/class/net/' root += interface + '/statistics/' rx_command = root + 'rx_bytes' tx_command = root + 'tx_bytes' start_rx = int(subprocess.getoutput(rx_command)) start_tx = int(subprocess.getoutput(tx_command)) time.sleep(seconds) end_rx = int(subprocess.getoutput(rx_command) end_tx = int(subprocess.getoutput(tx_command)) rx_delta = end_rx - start_rx tx_delta = end_tx - start_tx if rx_delta <0: rx_delta = 0 if tx_delta <0: tx_delta = 0 rx_kbs = int(rx_delta / seconds / 1000) tx_kbs = int(tx_delta / seconds / 1000) //
Note that this snippet expects a single, known interface name, such as tun0. To gather all interfaces, you might loop through the output of ls and exclude the loopback interface. For purposes that will become clearer later on, we will store each interface name as a dictionary key.
"""network.py - interface loop snippet""" // output = subprocess.getoutput('ls /sys/class/net') all_interfaces = output.split('\n') data = dict() for interface in interfaces: if interface[0] == 'l': continue data[interface] = None //
On a system with multiple interfaces, it would be misleading to measure the traffic across each interface in sequence. Ideally we would sample each interface at the same time. We can do this by sampling each interface in a separate thread. Here is a Python program that ties everything together and does just that. The above two snippets, “sample” and “interface loop”, should be included where annotated.
"""network.py""" import subprocess import time from multiprocessing.dummy import Pool as ThreadPool DEFAULT_SAMPLE_SECONDS = 2 def network(seconds: int) -> {str: (int, int)}: """ Return a dictionary, in which each string key is the name of a network interface, and in which each value is a tuple of two integers, the first being sampled transmitted kb/s and the second received kb/s, averaged over the supplied number of seconds. The local loopback interface is excluded. """ # # Include 'interface loop' snippet here # def sample(interface) -> None: # # Include 'sample' snippet here # data[interface] = (tx_kbs, tx_kbs) return pool = ThreadPool(len(data)) arguments = [key for key in data] _ = pool.map(sample, arguments) pool.close() pool.join() return data if __name__ == '__main__': result = network(DEFAULT_SAMPLE_SECONDS) output = 'Interface {iface}: {rx} rx kb/s output += ', {tx} tx kb/s' for interface in result: print(output.format( iface=interface, rx=result[interface][1], tx=result[interface][0] ))
Running the whole thing gives us neat network output for all interfaces:
$ python3 network.py
Interface tun0: 10 rx kb/s, 64 tx kb/s
Interface eth0: 54 rx kb/s, 25 tx kb/s
Of course, printing is fairly useless. We can import the module and function elsewhere:
"""someprogram.py""" from network.py import network as network_io TX_DANGER_THRESHOLD = 5000 #kb/s sample = network_io(2) for interface in sample: tx = sample[interface][0] if tx > TX_DANGER_THRESHOLD: # Raise alarm # Do other stuff with sample
Disk Space
After all that hullabaloo with network I/O, disk space monitoring is trivial. The df command gives us information about disk usage and availability:
$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev              239812       0    239812   0% /dev
tmpfs              49824    5540     44284  12% /run
/dev/xvda1       8117828 3438396   4244156  45% /
tmpfs             249104       0    249104   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs             249104       0    249104   0% /sys/fs/cgroup
tmpfs              49824       0     49824   0% /run/user/1000
This is a bit of a mess. We want column 4, ‘Available’, for the partition we wish to monitor, which in this case is /dev/xvda1. The picture will get much messier if you have more than one partition on the system. In the case of a system with one partition, you will likely find it at /dev/somediskname1. Common disk names include:
- sd: SATA and virtualised SATA disks
- xvd: Xen virtual disks. You will see this if you are on EC2 or other Xen-based hypervisors
- hd: IDE and virtualised IDE disks
The final letter will increment upwards with each successive disk. For example, a machine’s second SATA disk would be sdb. An integer partition number is appended to the disk name. For example, the third partition on a machine’s third Xen virtual disk would be xvdc3.
You will have to think about how best to deal with getting the data out of df. In my case, I know that all machines on my network are Xen guests with a single partition, so I can safely assume that /dev/xvda1 will be the partition to examine on all of them. A command to get the available megabytes of disk space on those machines is:
$ df -m | grep "^/" | awk '{print $4}'
4145
The grep pattern "^/" will grab every line beginning with "/". On a machine with a single partition, this will give you that partition, whether the disk is named sd, xvd, hd, or something else.
Programmatically acquiring the available space is then trivial. For example, in Python:
"""disk.py""" import subprocess command = 'df -m | grep "^/" | awk \'{print $4}\'' free = int(subprocess.getoutput(command))
Disk I/O
A system thrashing its disks is a system yielding unhappy users. /proc/diskstats contains data that allow us to monitor disk I/O. Like df’s output, /proc/diskstats is a messy pile of numbers.
$ cat /proc/diskstats
//
202 1 xvda1 2040617 57 50189642 1701120 3799712 2328944 85759400 1637952 0 1064928 3338520
//
Column 6 is the number of sectors read, and column 10 is the number of sectors written since, effectively, boot. On a long lived or shockingly busy system these numbers could wrap. To measure I/O per second, we can sample these numbers over a period of time.
Like with disk space monitoring, you will need to consider disk names and partition numbers. Because I know this system will only ever have a single xvd disk with a single partition, I can safely hardcode xvda1 as a grep target:
$ cat /proc/diskstats | grep "xvda1" | awk '{print $6, $10}'
50192074 85761968
These figures are counts of sectors, so to convert them into bytes we also need the disk’s sector size, which fdisk can tell us:
$ sudo fdisk -l | grep "Sector size" | awk '{print $4}'
512
On a machine with more than one disk, you will need to think about getting sector sizes for each disk.
Here’s a Python program that ties all that together:
"""diskio.py""" import subprocess import time seconds = 2 command = 'sudo fdisk -l | grep' command += '"Sector size" | awk \'{print $4}\'' sector_size = int(subprocess.getoutput(command)) command = 'cat /proc/diskstats | grep "xvda1"' command += ' | awk \'{{print $6, $10}}\'' sample = subprocess.getoutput(command) start_read = int(sample.split(' ')[0]) start_write int(sample.split(' ')[1]) time.sleep(seconds) sample = subprocess.getoutput(command) end_read = int(sample.split(' ')[0]) end_write = int(sample.split(' ')[1]) delta_read = end_read - start_read * sector_size delta_write = end_write - start_write * sector_size read_kb_s = int(delta_read / seconds / 1000) write_kb_s = int(delta_write / seconds / 1000)
A Bespoke Suit
Now that we’ve collected all these data, we can decide what to do with them. I like to gather up all the data into a JSON package and shoot it off to a telemetry aggregating machine elsewhere on the network. From there it is a hop, skip and a jump to pretty graphs and fun SQL queries.
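As a rough sketch of that last step, assuming the snippets above are saved as memory.py, disk.py, and network.py alongside this file, and with a placeholder URL standing in for the aggregator:

"""telemetry.py - a sketch of shipping the collected metrics off-box"""
import json
import socket
import urllib.request

from memory import memory_available
from disk import free as disk_available_mb
from network import network as network_io

payload = {
    'host': socket.gethostname(),
    'memory_available_mb': memory_available,
    'disk_available_mb': disk_available_mb,
    'network_kb_s': network_io(2),
}

# Placeholder endpoint for the telemetry aggregating machine.
request = urllib.request.Request(
    'http://telemetry.example.com/ingest',
    data=json.dumps(payload).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)
urllib.request.urlopen(request)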
By gathering the data yourself, you have the freedom to store, organise, and present the data as you see fit. Sometimes it is most appropriate to reach for a third-party tool; at other times, a bespoke solution gives unique and powerful insight.