Asynchronous Object Initialisation in Swift

Baby birds, rockets, freshly roasted coffee beans, and … immutable objects. What do all these things have in common? I love them.

An immutable object is one that cannot change after it is initialised. It has no variable properties. This means that when using it in a program, my pea brain does not have to reason about the state of the object. It either exists, fully ready to complete its assigned duties, or it does not.

Asynchronous programming presents a challenge to immutable objects. If the creation of an object requires network I/O, then we have to give up control of execution between deciding to create the object and that object becoming available.

As an example, let’s consider the Transaction class inside Amatino Swift. Amatino is a double entry accounting API, and Amatino Swift allows macOS & iOS developers to build finance capabilities into their applications.

To allow developers to build rich user-interfaces, it is critical that Transaction operations be smoothly asynchronous. We can’t block rendering the interface while the Amatino API responds! To lower the cognitive load imposed by Amatino Swift, Transaction should be immutable.

We’ll use a simplified version of Transaction that only contains two properties: transactionTime and description. Let’s build it out from a simple synchronous case to a full-fledged asynchronous case.

class Transaction {
  let description: String
  let transactionTime: Date 
  
  init(description: String, transactionTime: Date) {
    self.description = description
    self.transactionTime = transactionTime
  }
}

So far, so obvious. We can instantly initialise Transaction. In real life, Transaction is not initialised with piecemeal values; it is initialised from JSON data received in an HTTP response. That JSON might look like this:

{
  "transaction_time": "2008-08",
  "description": "short Lehman Bros. stock"
}

And we can decode that JSON into our Transaction class like so:

/* Part of Transaction definition */
enum JSONObjectKeys: String, CodingKey {
  case txTime = "transaction_time"
  case description = "description"
}

init(from decoder: Decoder) throws {
  let container = try decoder.container(
    keyedBy: JSONObjectKeys.self
  )
  description = try container.decode(
    String.self,
    forKey: .description
  )
  let dateFormatter = DateFormatter()
  dateFormatter.dateFormat = "yyyy-MM" //...
  let rawTime = try container.decode(
    String.self,
    forKey: .txTime
  )
  guard let txTime: Date = dateFormatter.date(
    from: rawTime
  ) else {
    throw DecodingError.dataCorruptedError(
      forKey: .txTime,
      in: container,
      debugDescription: "Unable to parse transaction_time"
    )
  }
  transactionTime = txTime
  return
}

Whoah! What just happened! We decoded a JSON object into an immutable Swift object. Nice! That was intense, so let’s take a breather and look at a cute baby bird:

Break time is over! Back to it: Suppose at some point in our application, we want to create an instance of Transaction. Perhaps a user has tapped ‘save’ in an interface. Because the Amatino API is going to (depending on geography) take ~50ms to respond, we need to perform an asynchronous initialisation.

We can do this by giving our Transaction class a static method, like this one:

/* An illustrative error type, assumed for this example */
enum TransactionError: Error {
  case missingData
  case decodingFailed
}

static func create(
  description: String,
  transactionTime: Date,
  callback: @escaping (Error?, Transaction?) -> Void
) throws {
  /* dummyHTTP() stands in for whatever HTTP request
     machinery you use to make an HTTP request. */
  dummyHTTP() { (data: Data?, error: Error?) in
    guard error == nil else {
      callback(error, nil)
      return
    }
    guard let dataToDecode: Data = data else {
      callback(TransactionError.missingData, nil)
      return
    }
    guard let transaction = try? JSONDecoder().decode(
      Transaction.self,
      from: dataToDecode
    ) else {
      callback(TransactionError.decodingFailed, nil)
      return
    }
    callback(nil, transaction)
    return
  }
}

This new Transaction.create() method follows these steps:

  1. Accepts the parameters of the new transaction, and a function to be called once that transaction is available: the callback, of type (Error?, Transaction?) -> Void. Because something might go wrong, this function might receive an error (Error?), or it might receive a Transaction (Transaction?)
  2. Makes an HTTP request, receiving optional Data and Error in return, which are used in a closure. In this example, dummyHTTP() stands in for whatever machinery you use to make your HTTP requests. For example, check out Apple’s guide to making HTTP requests in Swift
  3. Looks for the presence of an error, or the absence of data, and, if either is found, calls back with an error: callback(error, nil)
  4. Attempts to decode a new instance of Transaction and, if successful, calls back with that transaction: callback(nil, transaction)

The end result? An immutable object. We don’t have to reason about whether or not it is fully initialised, it either exists or it does not. Consider an alternative, wherein the Transaction class tracks internal state:

class Transaction {
  var HTTPRequestInProgress: Bool
  var hadError: Bool? = nil
  var description: String? = nil
  var transactionTime: Date? = nil

  init(
    description: String,
    transactionTime: Date,
    callback: @escaping (Error?, Transaction?) -> Void
  ) {
    HTTPRequestInProgress = true
    dummyHTTP() { (data: Data?, error: Error?) in
       /* Look for errors, try decoding, set
          `hadError` as appropriate */
       HTTPRequestInProgress = false
       callback(nil, self)
       return
    }
  }
}

Now we must reason about all sorts of new possibilities. Are we trying to utilise a Transaction that is not yet ready? Have we guarded against nil when utilising a Transaction that is ostensibly ready?  Down this path lies a jumble of guard statements, if-else clauses, and sad baby birdies.

Don’t make the baby birdies sad, asynchronously initialise immutable objects! 💕


– Hugh

Lessons from releasing a personal project as a commercial product

Aliens. It all begins with aliens. Rewind to San Francisco, and a game developer named Unknown Worlds.  Unknown Worlds is awesome.  We’re chilled out, but we create wonderful products. The games we make bring joy to millions of people around the world. The founders, Charlie and Max, are just the coolest and most inspirational blokes.

Before Unknown Worlds, I was at KPMG. A bean-counter, not a programmer. I couldn’t tell computers what to do. But now, making games, I was surrounded by people who could.

I was so inspired by Brian Cronin, Dushan Leska, Jonas Bötel,  Steve An, and others. They were gods. They would sit in a trance for days, occasionally typing incantations on their keyboards, and eventually show us some amazing new game feature. I was in awe.

Dushan would say to me: ‘Just automate something you do every day. It will be hard, you will have to learn a lot, but it will teach you how to write code’. So I did.

I hold Dushan (mostly) responsible for this mess

At KPMG I spent a lot of time doing battle with Microsoft Excel.  There is nothing fundamentally wrong with Excel. The problem is that it is an extremely generalised tool, and the work we were doing was not generalised. Too much time was spent copying and pasting data, sanitising data, shuffling data by hand.

When I arrived at Unknown Worlds, I started monitoring our sales. I channeled my inner KPMG and created glorious spreadsheets with pretty graphs. It was an awfully manual process. So, on Dushan’s advice, I started automating it.

The process was agonisingly slow. I would devote time after work, on weekends, at lunches: I had no teacher. Once I got going though, I was hooked. Tasks that used to take us hours at KPMG evaporated in moments in the hands of the machine. I felt like a magician.

With great power comes great responsibility. Soon I was writing code in our games. I thought I was pretty damn clever. Some of the stuff I wrote was super cool, one feature even got me invited to speak at Game Developer’s Conference. But damn, most of it was hot garbage.

Working on Subnautica taught me that mediocre programmers are dangerous to the health of large projects. Also dangerous: Reaper Leviathans.

There is nothing more dangerous on a big software project than a mediocre programmer. We’re like a radioactive prairie dog on heat: Running around contaminating codebases with bugs, indecipherable intent, zero documentation, no testing, and poor security.

Eventually I learned enough to realise I needed to ban myself from our game’s codebases. I was desperate to be better: I wanted to be able to contribute to Unknown Worlds games in a sustainable, positive way. One day I read a recommendation: Create a personal project. A project you can sculpt over a long period of time, learning new skills and best practices as you go.

Channeling Dushan again, I decided to start an accounting software project. Accounting software gives me the shits. As I learned more about code, I realised that most accounting software is shit. And it’s near impossible to integrate the big accounting software packages into other software.

How many software startups can you fit in one frame?

Piece by piece, after hours, over weekends, and at any time a healthier person would take a holiday, I put together a beast I called Amatino. It was always supposed to be something small. A side project that I would use myself. Haha… ha. Oh dear.

Today Amatino is available to anyone. It’s a globally-distributed, high-performance, feature-rich accounting wet dream. You can actually subscribe to Amatino and real money will arrive in my bank account. That’s just fucking outrageous!

Still can’t believe this is a real screenshot

Even better, I’ve achieved my original goal. I feel comfortable digging around in code on Unknown Worlds games, and am no longer a dangerous liability to our code quality. I can finally do some of what I saw Max, Charlie, Dushan, Steve, Jonas and Brian doing all those years ago.

Along the way I picked up a few lessons.

Lesson 1: Do it

Creating your own product is utterly exhilarating and mind expanding. I’m about as artistic as an Ikea bar stool, but I imagine this is how artists feel when they make art. It just feels great.

Lesson 2: Keep your day job

Alright, maybe quit your day job if it doesn’t make you happy. But if you are happy, keep at it. Over the past years I’ve given Unknown Worlds 100% and more. Unknown Worlds makes me super happy. To build Amatino simultaneously, I had to develop discipline: Every night, every weekend, every holiday, code like hell.

Spend enough time around Max (L) and Charlie (R), the founders of Unknown Worlds, and you will be inspired to do cool stuff

There are many benefits. First, you don’t lose contact with your work mates. Charlie, Max, Scott, Brandt, and many others are constant inspirations to me. Second, you don’t have to worry about funding, because you have a job. Third, you are kept grounded.

I think if I hadn’t spent all day making games, Amatino would have sent me insane. I would have lacked direction, waking up not knowing what to do. Instead, I worked on making games, structured my day around Unknown Worlds, and devoted focused, intense energy to Amatino when possible.

Lesson 3: Your partner comes first

No matter how important a milestone is, or how deep in thought you are, or how good you think your ideas are, you drop everything for your partner. You lift up your partner, you encourage your partner, you support your partner. Every day, without fail, without exception.

This was a hard lesson to learn. It is the most important lesson.

Without Jessica, Amatino would not have happened. And it is precisely because she took me away from Amatino that she helped. The ritual of cooking for her, sharing meals with her, going on dates with her, doing household chores with her, listening attentively to her thoughts, concerns, and dreams. All these things take immense time, time you might wish to devote to your project instead.

You must not make that trade. It is a false economy. Your productivity will suffer, your health and emotional wellbeing will suffer. The energy you devote to your partner instead of your project will come back to you tenfold and more.

Don’t bore your partner to death by constantly talking about your project. Most importantly, don’t put off big life decisions because you think the time will be right after your project is released.

Don’t put off the big decisions!

Lesson 4: Eat well, exercise, and don’t get drunk

You all hear this enough elsewhere. You have a day job, a personal project, and perhaps a partner too: You cannot waste time recovering from the ingestion of cognitive impediments.  Any social value you get from being drunk is utterly dwarfed by the opportunity cost of brain-cells not functioning at peak efficiency.

Your mates might give you hell for this. Don’t worry, they will still love you in the long run.

Lesson 5: Ignore the framework brigade

"I’m building a Dockerized cloud Node app with React-native frontend on GCP powered by the blockchain." Don’t be those people. Learn from first principles. Start with abstract design thought, not a list of software for your ‘stack’. Don’t be afraid to build your own systems.

Reach for third-party dependencies judiciously and only where absolutely necessary. Learn by dabbling in languages where you need to allocate your own memory, while leveraging the speed boost that comes with those in which you don’t. Build computers. Tinker with them.

You will learn a lot from building, breaking, and upgrading your own computers. This one was maybe me taking it a bit too far

Hot tip: If your elevator pitch contains the brand name of a third party dependency, you are violating Lesson 5.

Lesson 6: Be humble

Maybe some people get ahead in life by being arrogant, self-assured dickheads. In fact, I am sure that is true. If you want to build and release a product, you need to check your ego at the door.

Suck in information from everyone and everything around you. Approach the world with unabridged, unhinged curiosity. Even when you don’t agree with someone, give them your undivided attention and listen, don’t talk. Consider their advice most especially if it conflicts with your own assumptions.

Good luck!

Principles for safe and clean JavaScript

Perhaps you like writing JavaScript. Perhaps you also like poking your eyes out with sticks. The rest of us like type-safety, a clean object model, and being able to assert against our own stupidity.

Yet no matter how much we loathe it, no one can avoid writing JavaScript. Any half serious product will eventually need a website, and that website is going to need JavaScript. Here are some principles that, when applied, may reduce the probability of insanity.

No vars

Don’t spray your scope everywhere. If you write var, you’re doing something wrong. There is no circumstance under which you should need var over const or let. If you need a variable in a higher scope, then put it there explicitly, don’t imply scope by hammering it with var.

// Bad
function bloop(pigeons) {

   if (pigeons > 42) {
      var output = pigeons + '!';
      console.log(output);
   }

   // `output` has no business being
   // in scope here. Use let instead!

}

By restricting scope, you restrict the shit your brain needs to reason about. Your brain is a pile of mush that has trouble thinking about more than six things at once. Throw it a bone.

And before you say ‘but not all users have ECMA XXXX’, let me stop you. You’re not Google. Settle down. Whatever tiny proportion of your users don’t have access to modern syntax are not material to your business.

In fact, allowing dinosaur devices to use your service is to do them a disservice. They’re a security risk to themselves and to you. Let the ancient devices go. Just let them go.

Prefer const

You’re already going to be suffering enough pain trying to untangle your types. At least you can protect yourself against accidental mutation. const everything unless you are absolutely sure you need to mutate. Then, ask yourself if you can redesign your code to avoid mutation.

const AWESOME_SUFFIX = ' is awesome!';

function preach(truth) {
   const output = truth + AWESOME_SUFFIX;
   console.log(output);
   // We can't accidentally mutate
   // output.
}

preach('Immutability');

Enforcing immutability allows you to spend less time reasoning about state, and more time reasoning about the problem you are trying to solve.

Build your own types

The days of something.prototype.what.was.the.syntax are gone. Let them be gone, and lean on an object-oriented approach.

const DESCRIPTION_RAGE = 'induces rage!';
const DESCRIPTION_CALM = 'is quite pleasant.';

class Language {

   constructor(name, induces_rage) {
      this._name = name;
      this._induces_rage = induces_rage;
      return;
   }

   description() {
      if (this._induces_rage) {
         return this._name + DESCRIPTION_RAGE;
      }
      return this._name + DESCRIPTION_CALM;
   }

}

const SWIFT = new Language(
   'Swift',
   false
);
const JAVASCRIPT = new Language(
   'Javascript',
   true
);
const PYTHON = new Language(
   'Python',
   false
);

If you ever find yourself wrangling low-level types outside a method, you’ve probably got yourself a good case for defining a custom type. Modern JavaScript syntax makes it super easy and there is no excuse to avoid it.

Avoid 3rd party abstraction layers

Hey there, welcome to 2018. The days of needing jQuery are long, long gone. Browsers are generally highly standards compliant. Yes, document.getElementById('bleep') will just work.

There is no point in abstracting the core DOM API anymore. You’re not supporting IE6 and if you are, just get out, don’t @ me. The only things that DOM API abstractions are good for are:

  • Bloating page weight
  • Excessive allocations
  • Making ~2009 school JavaScript devs who never moved with the times feel good about themselves by validating their desire to start every line of code with a dollar sign despite the fact that there are no material benefits and they just end up splintering community knowledge into needless silos while wasting precious cycles

Check yourself while you wreck yourself

It’s not easy to practice safe JavaScript. My go-to condom in Python is assert, and in Swift it’s guard. Meanwhile, running JavaScript is like rolling around naked and blindfolded on a Thai beach during a full-moon party.  You can’t be sure what’s going to happen, but you can bet it’s not all going to be enjoyable.

You can take some action to protect yourself. throw liberally where you make assumptions. That way, your code will at least hard crash in the off-nominal case, rather than merrily trucking on with an undefined just waiting to ruin your day.

const INVALID_INPUT = 'oink oink';

class InputField {
   
   constructor(element_id) {
      
      this._element = document.getElementById(
          element_id
      );
      
      if (!this._element) { throw 'Abort!' }
      return;

   }

   is_valid() {
      
      const value = this._element.value;
      if (value === INVALID_INPUT) {
         return false;
      }
      return true;

   }

}

In the above case, the throw catches a case in which we supplied a non-existent DOM element id. Without the throw, execution would proceed and we would have no indication that shit was sideways until we called is_valid().

Lean into the wind

JavaScript is here to stay. Websites are important parts of product development. No matter how much you dislike it, it is important to develop JavaScript skills. Do it with a bit of modern, clean syntax and it can be less painful than you might expect.

Playing doctor with a Linux system

Monitoring Linux system health is a route to peace of mind. When a fleet of machines is serving an application, it is comforting to know that they are each and collectively operating within hardware performance limits.

There are countless libraries, tools, and services available to monitor Linux system health. It is also very easy to acquire system health information directly, allowing construction of a bespoke health monitoring subsystem.

There are five critical metrics of system health:

  1. Available memory
  2. CPU utilisation
  3. Network I/O (data transmission and receipt)
  4. Available disk space
  5. Disk I/O (reads and writes to disk)

Let’s take a look at how we can examine each one. This article is written from the perspective of Ubuntu 16.04, but many of the commands are available across Linux distributions. Some of them only require the Linux kernel itself.

Available Memory

We can get available memory using the free command. On its own, free will give us a bunch of columns describing the state of physical and swap memory.

$ free
       total  used  free  shared  buff/cache  available
Mem:   498208 47676 43408 5568    407124      410968
Swap:       0     0     0

There’s a lot going on here. What we are looking for, in plain English, is ‘how much memory is available to do stuff’. If such a number was low, we would know that the system was in danger of running out of memory.

The number we want is counterintuitively not the one in the column labelled ‘free’. That column tells us how much memory the system is not using for anything at all. Linux uses memory to cache regularly accessed files, and for other purposes that don’t preclude its allocation to a running program.

What we want is column 7, ‘available’. We can get just that number by using grep and awk. We can also use the -m flag to return results in megabytes, rather than the default kibibytes, thus making the output more readable.

$ free -m | grep 'Mem:' | awk '{print $7}'
400

That’s much better! A single integer representing how many megabytes of memory are available for the system to do things.

On its own, this is not very useful. You are not going to go around SSH’ing to every box in your fleet, running commands and noting numbers down on a piece of paper. The magic happens when the output is combined with some program that can collate all the data. For example, in Python, we could use the Subprocess module to run the command then store the number:

"""memory.py"""
import subprocess

command = "free -m | grep 'Mem:' | awk '{print $7}'"
memory_available = int(subprocess.getoutput(command))

CPU Utilisation

To monitor Linux system CPU utilisation, we can use the top command. top produces a whole bunch of output measuring the CPU utilisation of every process on the system. To get an overall sense of system health, we can zero in on the third line:

$ top
//
%Cpu(s):  0.3 us,  0.3 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
//

Four numbers are of use to us: those labelled us, sy, id, and wa, which indicate the proportion of CPU time allocated to user processes, system processes, idling, and I/O wait respectively.

To acquire these numbers programmatically, we need to adjust top’s output slightly. We’ll use a few flags:

  • -b: Run in a non-interactive mode. Top will return to the shell rather than running indefinitely
  • -n: Sample for a specified number of iterations. We will use two iterations, and take numbers from the second.
  • -d: Delay time between iterations. We will supply a non-zero number so that top acquires data over some time

The whole command will be:

$ top -d 3 -b -n 2 | grep "%Cpu"

In Python, we can execute the command and split the output into individual floating point numbers. To do so, we take advantage of the fixed-width position of top’s output.

"""cpu.py"""
import subprocess

command = 'top -d 3 -b -n 2 | grep "%Cpu"'
output = subprocess.getoutput(command)
data = output.split('\n')[1]
cpu_user = float(data[8:13])
cpu_system = float(data[17:22])
cpu_idle = float(data[35:40])
cpu_io_wait = float(data[44:49])

Network I/O

All the hardware in the world won’t save you if your network connection can’t keep up. Monitoring transmit and receive volumes is, fortunately, pretty easy. The kernel provides us with a convenient window onto network activity, in /sys/class/net.

$ ls /sys/class/net
eth0 lo tun0

On this example system, /sys/class/net contains three network interfaces: an Ethernet adapter eth0, the local loopback lo, and a VPN tunnel adapter tun0.

How you proceed to gather the information available about these interfaces is going to depend heavily on your situation. The following technique satisfies a couple of assumptions:

  1. We don’t know the number or disposition of network interfaces in advance
  2. We want to gather transmit / receive statistics for all interfaces except the local loopback
  3. We know that the local loopback interface name alone will always start with the character l.

These assumptions might not apply to you. Even if they don’t, you might be able to apply some of the techniques used herein to your situation.

Inside each interface, there is a statistics directory containing a wealth of information.

$ ls /sys/class/net/tun0/statistics
collisions        rx_packets
multicast         tx_aborted_errors
rx_bytes          tx_bytes
rx_compressed     tx_carrier_errors
rx_crc_errors     tx_compressed
rx_dropped        tx_dropped
rx_errors         tx_errors
rx_fifo_errors    tx_fifo_errors
rx_frame_errors   tx_heartbeat_errors
rx_length_errors  tx_packets
rx_missed_errors  tx_window_errors

To get a general overview of network activity, we will zero in on rx_bytes and tx_bytes.

 

$ cat /sys/class/net/tun0/statistics/rx_bytes
11880392069
$ cat /sys/class/net/tun0/statistics/tx_bytes
128763654271

These integer counters tick upwards from, effectively, system boot. To sample network traffic, you can take readings of the counters at two points in time. The counters will wrap, so if you have a very busy or long-lived system you should account for potential wrapping.

Here is a Python program that samples current network activity in kilobytes per second.

"""network.py - sample snippet"""
//
root = 'cat /sys/class/net/'
root += interface + '/statistics/'
rx_command = root + 'rx_bytes'
tx_command = root + 'tx_bytes'
start_rx = int(subprocess.getoutput(rx_command))
start_tx = int(subprocess.getoutput(tx_command))
time.sleep(seconds)
end_rx = int(subprocess.getoutput(rx_command))
end_tx = int(subprocess.getoutput(tx_command))
rx_delta = end_rx - start_rx
tx_delta = end_tx - start_tx
if rx_delta < 0:
   rx_delta = 0
if tx_delta < 0:
   tx_delta = 0
rx_kbs = int(rx_delta / seconds / 1000)
tx_kbs = int(tx_delta / seconds / 1000)
//

Note that this program includes a hard coded interface, tun0. To gather all interfaces, you might loop through the output of ls and exclude the loopback interface.  For purposes that will become clearer later on, we will store each interface name as a dictionary key.

"""network.py - interface loop snippet"""
//
output = subprocess.getoutput('ls /sys/class/net')
all_interfaces = output.split('\n')
data = dict()
for interface in all_interfaces:
   if interface[0] == 'l':
      continue
   data[interface] = None
//

On a system with multiple interfaces, it would be misleading to measure the traffic across each interface in sequence. Ideally we would sample each interface at the same time. We can do this by sampling each interface in a separate thread. Here is a Python program that ties everything together and does just that. The above two snippets, “sample” and “interface loop”, should be included where annotated.

"""network.py"""
import subprocess
import time
from multiprocessing.dummy import Pool as ThreadPool

DEFAULT_SAMPLE_SECONDS = 2

def network(seconds: int) -> {str: (int, int)}:
   """
   Return a dictionary, in which each string
   key is the name of a network interface,
   and in which each value is a tuple of two
   integers, the first being sampled transmitted
   kb/s and the second received kb/s, averaged
   over the supplied number of seconds.

   The local loopback interface is excluded.
   """
   # 
   # Include 'interface loop' snippet here
   #
   
   def sample(interface) -> None:
      #
      # Include 'sample' snippet here
      #
      data[interface] = (tx_kbs, rx_kbs)
      return

   pool = ThreadPool(len(data))
   arguments = [key for key in data]
   _ = pool.map(sample, arguments)
   pool.close()
   pool.join()
   return data

if __name__ == '__main__':
   result = network(DEFAULT_SAMPLE_SECONDS)
   output = 'Interface {iface}: {rx} rx kb/s'
   output += ', {tx} tx kb/s'
   for interface in result:
      print(output.format(
         iface=interface,
         rx=result[interface][1],
         tx=result[interface][0]
      ))

Running the whole thing gives us neat network output for all interfaces:

$ python3 network.py
Interface tun0: 10 rx kb/s, 64 tx kb/s
Interface eth0: 54 rx kb/s, 25 tx kb/s

Of course, printing is fairly useless. We can import the module and function elsewhere:

"""someprogram.py"""
from network import network as network_io

TX_DANGER_THRESHOLD = 5000 #kb/s
sample = network_io(2)
for interface in sample:
   tx = sample[interface][0]
   if tx > TX_DANGER_THRESHOLD:
      pass  # Raise alarm
# Do other stuff with sample

Disk Space

After all that hullaballoo with network I/O, disk space monitoring is trivial. The df command gives us information about disk usage and availability:

$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev              239812       0    239812   0% /dev
tmpfs              49824    5540     44284  12% /run
/dev/xvda1       8117828 3438396   4244156  45% /
tmpfs             249104       0    249104   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs             249104       0    249104   0% /sys/fs/cgroup
tmpfs              49824       0     49824   0% /run/user/1000

This is a bit of a mess. We want column four, ‘available’, for the partition you wish to monitor, which in this case is /dev/xvda1. The picture will get much messier if you have more than one partition on the system. In the case of a system with one partition, you will likely find it mounted at /dev/somediskname1. Common disk names include:

  • sd: SATA and virtualised SATA disks
  • xvd: Xen virtual disks. You will see this if you are on EC2 or other Xen based hypervisors
  • hd: IDE and virtualised IDE disks

The final letter will increment upwards with each successive disk. For example, a machine’s second SATA disk would be sdb. An integer partition number is appended to the disk name. For example, the third partition on a machine’s third Xen virtual disk would be xvdc3.

You will have to think about how best to deal with getting the data out of df. In my case, I know that all machines on my network are Xen guests with a single partition,  so I can safely assume that /dev/xvda1 will be the partition to examine on all of them. A command to get the available megabytes of disk space on those machines is:

$ df -m | grep "^/" | awk '{print $4}'
4145

The grep phrase "^/" will grab every line beginning with "/". On a machine with a single partition, this will give you that partition, whether the disk is sd, xvd, hd, and so on.

Programmatically acquiring the available space is then trivial. For example, in Python:

"""disk.py"""
import subprocess

command = 'df -m | grep "^/" | awk \'{print $4}\''
free = int(subprocess.getoutput(command))
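
If a machine does have multiple partitions, a minimal sketch along these lines could map each mount point to its available space. It assumes the df output format shown above, in which the mount point is the sixth column of each data row:

"""disk_multi.py - illustrative sketch for multiple partitions"""
import subprocess

# Keep only real partitions: lines whose filesystem starts with '/'
command = 'df -m | grep "^/"'
output = subprocess.getoutput(command)

available = dict()
for line in output.split('\n'):
    columns = line.split()
    # Column four is available megabytes, column six the mount point
    available[columns[5]] = int(columns[3])

# e.g. {'/': 4145}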

Disk I/O

A system thrashing its disks is a system yielding unhappy users. /proc/diskstats contains data that allow us to monitor disk I/O. Like df, /proc/diskstats output is a messy pile of numbers.

$ cat /proc/diskstats
//
202       1 xvda1 2040617 57 50189642 1701120 3799712 2328944 85759400 1637952 0 1064928 3338520
//

Column 6 is the number of sectors read, and column 10  is the number of sectors written since, effectively, boot. On a long lived or shockingly busy system these numbers could wrap. To measure I/O per second, we can sample these numbers over a period of time.

Like with disk space monitoring, you will need to consider disk names and partition numbers. Because I know this system will only ever have a single xvd disk with a single partition, I can safely hardcode xvda1 as a grep target:

$ cat /proc/diskstats | grep "xvda1" | awk '{print $6, $10}'
50192074 85761968

Once we have the number of sectors read and written, we can multiply by the sector size to get I/O in bytes per second. To get sector size, we can use the fdisk command, which will require root privileges.

$ sudo fdisk -l | grep "Sector size" | awk '{print $4}'
512

On a machine with more than one disk, you will need to think about getting sector sizes for each disk.
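
If you do need a per-disk sector size, one option is to read it from sysfs rather than fdisk, which also avoids sudo. This is a sketch, assuming that /sys/block/<disk>/queue/hw_sector_size reports the sector size you care about:

"""sectors.py - illustrative sketch for per-disk sector sizes"""
import subprocess

# Every block device appears as a directory under /sys/block
disks = subprocess.getoutput('ls /sys/block').split('\n')

sector_sizes = dict()
for disk in disks:
    command = 'cat /sys/block/' + disk + '/queue/hw_sector_size'
    sector_sizes[disk] = int(subprocess.getoutput(command))

# e.g. {'xvda': 512}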

Here’s a Python program that ties all that together:

"""diskio.py"""
import subprocess
import time

seconds = 2

command = 'sudo fdisk -l | grep'
command += '"Sector size" | awk \'{print $4}\''
sector_size = int(subprocess.getoutput(command))
command = 'cat /proc/diskstats | grep "xvda1"'
command += ' | awk \'{print $6, $10}\''

sample = subprocess.getoutput(command)
start_read = int(sample.split(' ')[0])
start_write = int(sample.split(' ')[1])

time.sleep(seconds)

sample = subprocess.getoutput(command)
end_read = int(sample.split(' ')[0])
end_write = int(sample.split(' ')[1])

delta_read = (end_read - start_read) * sector_size
delta_write = (end_write - start_write) * sector_size
read_kb_s = int(delta_read / seconds / 1000)
write_kb_s = int(delta_write / seconds / 1000)

A Bespoke Suit

Now that we’ve collected all these data, we can decide what to do with them. I like to gather up all the data into a json package and shoot them off to a telemetry aggregating machine elsewhere on the network. From there it is a hop, skip and a jump to pretty graphs and fun SQL queries.
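
As a sketch of that idea (the module and key names are my own, and in practice each snippet above would be wrapped in a function rather than run at import time):

"""telemetry.py - illustrative sketch only"""
import json
import socket
import time

from network import network  # the network.py module above

payload = {
    'host': socket.gethostname(),
    'time': int(time.time()),
    'network': network(2),
    # ... add memory, CPU, disk space, and disk I/O samples here
}

# Ship `payload` off to your aggregator of choice
print(json.dumps(payload))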

By gathering the data yourself, you have the freedom to store, organise, and present the data as you see fit. Sometimes it is most appropriate to reach for a third-party tool. At other times, a bespoke solution gives unique and powerful insight.

Pacioli’s Equality, a better name for double entry accounting

Euclid, Pythagoras, Newton, Einstein, Heisenberg, Planck. These names conjure a sense of elegance, of fundamental knowledge, of natural order. They have each contributed foundation stones to our understanding of the natural world. Together, theirs and many other ideas act as first principles upon which we lean as we build spacecraft, cure disease, and create art.

The association of a name with a theory or theorem is not just administrative. We do not use the name ‘Newton’s Third Law’ as a database reference, an abstracted name with which to retrieve associated knowledge inside our brains. Instead, the name ‘Newton’ carries emotion. An emotion that is collectively understood across society.

Newton’s name lends an energy above and beyond the phrase ‘the third law of motion in classical mechanics’. That energy matters: It excites the mind, invites inquiry, provokes the imagination. Present a child with the phrase: ‘A theory on the fundamental limit of precision in measurement of pairs of properties of a physical particle’, and they may well give up on studying physics.

Present them with: ‘Heisenberg’s Uncertainty Principle’, and you might well spark their curiosity. This phenomenon is visible beyond ideas describing the natural world: We don’t name brands ‘Very Fast And Good Looking Cars’, we name them ‘Tesla’.

Absolutely zero curiosity, energy, or wonder is sparked by the phrase ‘Double Entry Accounting.’ It is more likely to spark the gag reflex.

This is a crying shame. Such a shame, and such a waste, that I won’t refer to the above mentioned idea in those terms again. Instead, I will refer to it as ‘Pacioli’s Equality’.

This name is not quite fair. Luca Pacioli, an Italian of 15th century vintage, did not invent the accounting principles to which I am lending his name. Instead, his is the first known work to codify them: Summa de arithmetica, geometria, proportioni et proportionalita, published in Venice in 1494.

Many artefacts predate Summa. Records kept in France in the 13th century, authored by Amatino Manucci, are the earliest surviving example of Pacioli’s Equality. But we must pick a name, and the author of the first known textbook on the topic is as good as any. Pacioli it is.

In the most general terms, Pacioli’s Equality could be described as: ‘Every change has an equal and observable origin.’ Which sounds idiotically simplistic. No less simplistic, I say, than ‘an object that is at rest will stay at rest unless a force acts upon it.’ These phrases are deceptively simple, for they act as foundations upon which immensely valuable paradigms can be constructed.

In the world of money, where Pacioli’s Equality is most often applied, but which is certainly not the only domain in which it can be applied, we could re-word the description as: ‘Everything you have is equal to everything you have received.’ Or, as a very formal equation:

Everything I Have = Stuff People Gave Me

Of course, we might also have some stuff that might not belong to us. Say, ‘everything you have is equal to everything you have received, plus what you have borrowed.’ Formally:

Everything I Have = Stuff I Borrowed + Stuff People Gave Me

Any increase on the left side (what I have), must be balanced by an increase on the right side (I must have either borrowed it, or someone must have given it to me). That blindingly, but deceptively, obvious equality is presented to first-year accounting students as the ‘double-entry accounting equation’:

Assets = Liabilities + Equity

At which point their eyes glaze over, Instagram is opened, and somewhere, an adorable kitten dies.

This is where the shame lies. Pacioli’s Equality is the basis for arbitrarily complex information storage systems. By recording both the origin and destination of a resource, we can construct, for any point in time, both the current state of an entity and every quantum of change that led to that state.
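
To make that idea concrete, here is a minimal sketch in Python. The account names and sign conventions are my own illustration, not those of any particular accounting system; the point is only that every entry records an equal origin and destination, so the ledger can always reproduce both current balances and the history that produced them.

"""pacioli.py - an illustrative sketch only"""
from collections import defaultdict

# Each entry records an equal and observable origin and destination
entries = [
    ('equity', 'cash', 1000),    # the owner contributes capital
    ('cash', 'equipment', 400),  # buy a coffee roaster
    ('debt', 'cash', 250),       # borrow some money
]

balances = defaultdict(int)
for origin, destination, amount in entries:
    balances[origin] -= amount
    balances[destination] += amount

# Every change has an equal origin, so the ledger always sums to zero
assert sum(balances.values()) == 0

# The balances are the current position; the entries are the
# complete record of the performance that produced it
print(dict(balances))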

In other words, Pacioli’s Equality allows us to observe both the position and performance of an entity, measured in some arbitrary unit. That unit is most often money: The common name for a measurement of position is a ‘Balance Sheet’. The common name for a measure of performance is an ‘Income Statement’.

The fundamental elegance of Pacioli’s Equality is utterly absent from modern accounting practice. Load any piece of accounting software, and you will be presented with ‘invoices, customers, credit cards, bank accounts, trial balances’: These are domain-specific objects, each of which is an implementation of the equality, rather than a window onto it.

Sometimes, software might open an interface to ‘journals’, or allow direct manipulation of ‘ledgers’. These are edging closer to a fundamental expression of Pacioli’s Equality, but they are treated as second class citizens. Interacting with them, especially programmatically, is generally painful.

We have combined computers and fundamental knowledge to create wonderful outcomes. Program Einstein’s theories into a computer system and you can model the position of a space probe orbiting Saturn to pinpoint accuracy. Build on Euclid’s theorem and a computer can create nigh-unbreakable cryptographic constructs that allow distributed virtual currencies.

Where is the fundamental computerised expression of Pacioli’s Equality? It is surely not manifest in the current accounting software landscape. That is a shame. We are poorer for it. We can make a baby step towards encouraging innovation by replacing the awful name. Exit double-entry accounting. Enter Pacioli’s Equality.

Automating Application Installation on Linux with Python

Perhaps you have a shiny new web application. It responds to HTTPS requests, delivering pure awesome in response. You would like to install the application on a Linux server, perhaps in Amazon EC2.

Performing the installation by hand is a Bad Idea™. Manual installation means you cannot easily scale across multiple machines, you cannot recover from failure, and you cannot iterate on the machine configuration.

You can automate the installation process with Python. The following are examples of procedures that introduce principles for automation. This is not a step-by-step guide for an entire deployment, but it will give you the tools you need to build your own.

Connecting via SSH with Paramiko

A manual installation process might involve executing lots of commands inside an SSH session. For example:

$ sudo apt update
$ sudo apt install nginx

All of your hard-won SSH skills can be transferred to a Python automation. The Paramiko library offers SSH interaction inside Python programs. I like to shorthand my use of Paramiko by wrapping it in a little container:

"""ssh_session.py"""

from paramiko import SSHClient
from paramiko import SFTPClient
from paramiko import AutoAddPolicy

class SSHSession:
    """Abstraction of a Paramiko SSH session"""
    def __init__(
        self,
        hostname: str,
        keyfile: str,
        username: str
    ):

        self._ssh = SSHClient()
        self._ssh.set_missing_host_key_policy(
            AutoAddPolicy()
        )
        self._ssh.connect(
            hostname,
            key_filename=keyfile,
            username=username
        )

        return

    def execute(self, command: str) -> str:
        """Return the stdout of an SSH command"""
        _, stdout, _ = self._ssh.exec_command(
            command
        )
        return stdout.read().decode('utf-8')

    def open_sftp(self) -> SFTPClient:
        """Return an SFTP client"""
        return self._ssh.open_sftp()

We use paramiko.AutoAddPolicy to automatically add the server to our known_hosts file. This effectively answers ‘yes’ to the prompt you would see if initiating a first time connection in an interactive terminal:

The authenticity of host 'some.fqdn.com
(172.16.101.244)' can't be established. ECDSA key
fingerprint is 
SHA256:PsldJAyjANGYeLiHNPknfI95CNxvaCmeC4HWSEe6+Y.
Are you sure you want to continue connecting
(yes/no)?

You should only do this if you have otherwise secured the network path to your server. If you have not, connect manually first via a terminal and check the key fingerprint.

We initialise an SSHSession instance with a set of parameters that conveniently match what you might already have in your SSH config file. For example:

$ cat ~/.ssh/config
Host some_friendly_server_name
    HostName some.fqdn.com
    User hugh
    IdentityFile ~/.ssh/some_private_key

The matching Paramiko session would be:

from ssh_session import SSHSession

SSH = SSHSession(
    hostname='some.fqdn.com',
    keyfile='~/.ssh/some_private_key',
    username='hugh'
)

We now have a convenient little object that can run SSH commands for us. Note that the object ignores anything a command writes to stderr, the third value returned by paramiko.SSHClient.exec_command(). While this is convenient when we are confident of our commands, it makes debugging difficult. I recommend debugging in an interactive SSH session rather than in Python.

Installing Dependencies

Let’s start by installing Nginx and Git. You could substitute these with any dependency of your application.

_ = SSH.execute('sudo apt update')
_ = SSH.execute('sudo apt install nginx -y')
_ = SSH.execute('sudo apt install git -y')

Note the ‘-y’ at the end of the apt install command. Without it, the session will hang at the apt confirmation prompt:

After this operation, 4,816 kB of additional disk
space will be used.

Do you want to continue? [Y/n]

The requirement to bypass interactive prompts will be a common thread throughout this article. When automating your process, step through it manually and take careful note of where interactive prompts are required.

Creating a Linux User

Our application should, of course, run under its own user. Let’s automate that process:

APP_USER = 'farquad'
command = 'sudo adduser --system --group '
command += APP_USER
_ = SSH.execute(command)

Note that we establish the username as a constant, and don’t hardcode it into our command. This external definition, whether through a constant or a function parameter or however else it is done, is important for several reasons.

  1. It allows you to re-use the command with multiple parameters. For example, perhaps your application requires multiple users (see the sketch after this list).
  2. It implements the Don’t-Repeat-Yourself (DRY) principle. We will likely need the username elsewhere, and by externally defining it we have created a single source of authority.
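
For instance, a minimal sketch of wrapping the command in a reusable function. The function name is my own, and it assumes the SSHSession wrapper defined earlier:

def create_system_user(ssh: SSHSession, username: str) -> None:
    """Create a system user and group with the given name"""
    command = 'sudo adduser --system --group ' + username
    _ = ssh.execute(command)
    return

create_system_user(SSH, APP_USER)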

Automating File Transfer using Paramiko SFTP

Suppose your application is stored in a Git repository, like Bitbucket or Github, and that the repository is private. It is no use having an automated installation process if you need to answer an HTTPS password prompt when pulling a repository.

Instead, let’s automate the process by using SSH and installing a repository key on the machine. First, the transfer process:

import os

KEY_FILEPATH = '~/some/key'
with open(os.path.expanduser(KEY_FILEPATH), 'r') as keyfile:
    key = keyfile.read()

sftp = SSH.open_sftp()
# SFTP paths are relative to the remote user's home directory
remote_file = sftp.file('repository_key', 'w')
remote_file.write(key)
remote_file.flush()
remote_file.close()
sftp.close()

Note that we first SFTP’d the key into our privileged user’s home directory, rather than directly into the application user’s directory. This is because our privileged user does not have permission to write into the application users’ home directory without sudo elevation, which we can’t do in the SFTP session.

Let’s move it into the appropriate place, and modify permissions appropriately:

command = 'sudo mkdir /home/' + APP_USER + '/.ssh'
_ = SSH.execute(command)

command = 'sudo mv ~/repository_key'
command += ' /home/' + APP_USER + '/.ssh/'
_ = SSH.execute(command)

command = 'sudo chmod 600 /home/' + APP_USER
command += '/.ssh/repository_key'
_ = SSH.execute(command)

The file is now in the appropriate location, with the appropriate permissions. We repeat the process to install an SSH configuration file. I won’t lay out the entire process, but the principle is the same: Open an SFTP session, plop the file on the server, then move and re-permission it as necessary.

There is one important consideration. Because we have been creating directories as our privileged user, we need to turn over those directories to the application user:

command = 'sudo chown -R '
command += APP_USER + ':' + APP_USER
command += ' /home/' + APP_USER + '/.ssh'
_ = SSH.execute(command)

In the end, there should be an SSH configuration file on the server owned by the application user. Here is an example, using all the same names we have been using so far:

$ cat /home/farquad/.ssh/config
Host bitbucket.org
    HostName bitbucket.org
    User git
    IdentityFile ~/.ssh/repository_key
$ ls -la /home/farquad
//
drwxrwxr-x 2 farquad farquad 4096 Mar 23 18:47 .ssh
//

Pulling the Repository

The next step is easy mode. You’ve set things up such that your wonderful application can be pulled down in a single command:

REPOSITORY = 'git@bitbucket.org:super/app.git'
command = 'cd /home/' + APP_USER
command += '; sudo -u ' + APP_USER
command += ' git clone ' + REPOSITORY
_ = SSH.execute(command)

Well, maybe almost easy mode. There’s a bit going on here. Note the separation of commands via a semicolon. Consider your Python SSH connection to be a very ‘loose’ one. It won’t retain environment information, including current directory, between executions. Therefore, to use conveniences like cd, we chain commands with semicolons.

Also note the sudo -u farquad. We do this so that the git repository is pulled down as the property of our application user,  not our privileged user. This saves us all the dicking about with permissions that plagued the SFTP steps above.

Paramiko and Virtual Environments, like Virtualenv

The ‘loose’ nature of the Paramiko session referenced above becomes particularly important when working with virtual environments. Consider the following:

$ virtualenv -p python3 /home/farquad/app
$ cd /home/farquad/app
$ source bin/activate
(app) $ pip install gunicorn

If executed as distinct commands via a Paramiko SSH session, the gunicorn library will end up installed via the systemwide pip. If you then attempt to run the application inside the virtual environment, say inside a systemd configuration file…

//
[Service]
User=farquad
Group=farquad
WorkingDirectory=/home/farquad/app
Environment="PATH=/home/farquad/app/bin"
//

… Then your application will fail because gunicorn was missing from the virtual environment. Instead, be sure to execute commands that require a particular environment in an atomic manner.
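
For example, a hedged sketch using the SSHSession wrapper from earlier, chaining the directory change, environment activation, and installation into a single execution:

command = 'cd /home/' + APP_USER + '/app'
command += ' && . bin/activate'
command += ' && pip install gunicorn'
_ = SSH.execute(command)

Calling the virtual environment’s tools directly, for example /home/farquad/app/bin/pip install gunicorn, achieves the same result in a single command.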

Your Move!

Once your application deployment is automated,  you have freed yourself from having to trudge through SSH command sequences every time you want to adjust your deployment. The fear of breaking a server disappears, because you can fire up a replacement at will. Enjoy the freedom!

Game Developer Unions are a Daft Idea

Some game developers would like to unionise. This is not an inherently bad idea. Unionisation is an effective way for people to improve their working conditions when there is a chronic imbalance in bargaining power between workers and management across an industry.

Such an imbalance might occur because regulation makes it hard to start or destroy companies. Or because workers cannot easily move between industries, perhaps because re-training is hard, or because a social security system ties benefits to an individual career. Or for many other real world reasons that affect many people.

Game development does not suffer from such an imbalance. Quite the opposite:

  • Companies making games generally struggle to find and retain skilled workers
  • Strong competition between companies makes capable development teams their only competitive advantage

For workers to enjoy the best working conditions, poorly performing companies must be destroyed as quickly as possible. Yes, that includes studios that we might fondly remember for being very good in the past, but are now falling behind more innovative competitors.

Fortunately, it is very easy to start and destroy game development studios. Capital costs are low, regulation is light, markets are near fully globalised, and geography is largely irrelevant. Under such circumstances, it is relatively easy for a hungry entrepreneur to pull together a motivated team and beat established players.

The best thing that game developers can do is to maintain an atmosphere of ruthless innovation: Bad companies get destroyed, good ones keep popping up. That way, talented game developers can choose from a wide array of companies, allowing demand for their talent to force competition for the acquisition of their labour.

Of course, there is an elephant in the room. If competition in games is so intense, why is pay generally low? Game development attracts lots of people who perceive it as more enjoyable work than say, finance or accounting. At the macroeconomic level, the game development labour market is heavily supplied.

If you are working in game development, someone with equal or lesser talents than you is working in a fin-tech startup earning twice as much as you while working half the hours. If you don’t like that, you need to go work in a fin-tech.

If you try to force higher pay by controlling supply of labour through a union, then the company you work for is going to go bankrupt. Someone hungrier than you is going to supply their labour elsewhere, the company they work for is going to produce an equal product at lower cost,  and customers are going to end your fantasy with their wallets.

Architecting a WiFi Hotspot on a Remote Island

Internet access on Lord Howe Island is very limited. The island is extremely remote. I am intensely interested in providing affordable, accessible, and reliable internet connections to residents and guests.

The ‘Thornleigh Farm’ Internet (internally code-named ‘Nike’) is a newly launched service that offers public internet access on Lord Howe Island. Here are some of the architectural choices I made in developing the service.

Island Cloud

WiFi hotspot solutions require a server that acts to authenticate, authorise, and account for network access. The current industry trend appears to be toward doing this in the ‘cloud’, i.e. in remote data-centres.

Such a solution is not suitable for Lord Howe Island, because of satellite latency. Signals travel well over 70,000 kilometres through space between transmitting and receiving stations, yielding a practical minimum latency of around 600ms, often higher. This high latency creates a crappy customer experience during sign-on.

Instead, Nike utilises local servers for network control. Power is very expensive on Lord Howe Island, which led to a choice of low-voltage Intel CPUs for processing. Two dual-core Intel ‘NUC’ machines serve as hypervisors for an array of network control virtual machines.

Intel NUC machines and Ubiquiti switching equipment

Going local means replicating infrastructure we take for granted in the cloud. Nike utilises local DNS (Bind9), database (Postgres), cache (Redis), and web (NGINX) servers. It’s like stepping back in time, and really makes you appreciate Amazon Web Services (AWS)!

DNS Spaghetti

Bringing the Nike application “island-side” meant dealing extensively with the Domain Name System (DNS). Local requests to the application domain, thornleighfarm.com, need to be routed locally or via satellite depending on their purpose.

For example, new clients are served the Nike purchase page from a local server. Clients of the Thornleigh Farm Store, which offers food orders, are served from an AWS server via satellite.

A local Bind9 DNS captures all thornleighfarm.com domain traffic on our network, and punts it to the local Nginx server. Nginx then chooses to proxy the request to local applications, or to the external thornleighfarm.com AWS Route 53 DNS, depending on the request path.

An island-side client receiving content served from island servers

This request spaghetti has some cool effects: Clients requesting thornleighfarm.com/internet receive an information page when off the island, and a purchase page when they are on it.

Client Identification

From the outset, I wanted to avoid requiring user accounts. Older customers in particular react very poorly to needing to create a new user account, set a password, remember it, and so on.

Also, I am a privacy psychopath and I want to collect the absolute bare minimum customer data necessary to provide the service.

Instead, Nike identifies clients by device Media Access Control (MAC) address. This is uniquely possible on the Thornleigh network because all public clients are on the same subnet. The Nike application can get the MAC associated with a particular IP address in real-time by making a request to the network router.

Part of the Nike codebase that identifies clients by MAC

A small custom HTTP API runs on our Ubiquiti Edgemax router; it looks up a given IP address in the router’s tables and returns the associated MAC, if available.
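
As an illustration only, the lookup might be wrapped something like this. The endpoint, port, and response shape are assumptions made for the sketch; the real router API is custom and not documented here.

"""mac_lookup.py - illustrative sketch only"""
import requests

# Hypothetical endpoint on the router; not the real Nike API
ROUTER_ARP_API = 'http://192.168.0.1:8080/arp'

def mac_for_ip(ip_address: str) -> str:
    """Return the MAC address the router associates with an IP"""
    response = requests.get(
        ROUTER_ARP_API,
        params={'ip': ip_address}
    )
    response.raise_for_status()
    return response.json()['mac']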

Payments

Stripe is an amazing payments provider, full-stop. Their API is fantastically well documented, customer service brilliant, and tools of exceptional quality. They pay out every day, and offer low fees. I cannot recommend them highly enough.

Nike ran into a minor problem with the Stripe Checkout system: It does not work in Android WebViews. Android uses WebViews in a manner analogous to the Apple Captive Network Assistant: They sandbox public WiFi DNS capture. In Android’s case, the sandboxing is strict enough to kill Checkout.

Stripe Elements inside the MacOS Captive Network Assistant

This problem was easily solved by moving to Stripe Elements, and building a simple custom payments form utilising existing Nike styling.

Layer 1

Deploying physical network infrastructure on Lord Howe Island presents a few challenges. First, power is scarce. Second, regulatory approvals for any sort of island-modifying work are very difficult to obtain.

The property that serves as the nexus for Nike, Thornleigh Farm, is hidden inside a shield of palm forest. It is not possible to broadcast any meaningful signal out of the property, though we do offer the Public Internet network across the farm for the use of farm customers.

Fortunately, the property includes a glorious old boat-shed sitting on Lagoon Beach. Even more fortunately, an old copper conduit runs under the forest between farm and boat-shed. This enabled the installation of an optical fibre. The shed then acts as the southernmost network node.

Ubiquiti NanoBeam AC Gen2 radios provide multiple radio links in the Nike Layer 1 network

A 5GHz link then penetrates a treeline to the north, linking to another island business, with whom we have joined forces, and who serve as the northernmost node.

All in all, a mixture of Cat6 copper, 5GHz point-to-point radios, and optical fibre connects the satellite dishes with our server room and then on to the boat sheds on the beach.

Access Control

The Thornleigh Farm network is mostly built from Ubiquiti Unifi equipment. The WiFi networks, including the Nike ‘Public Internet’ network, are controlled by the proprietary Unifi Controller (UC), running on a local virtual machine.

The UC has a publicly documented API that ostensibly allows fairly fine-grained manipulation of client network access. In practice, the documentation is not developer-friendly, and interacting with the UC was the most difficult part of the project outside construction of the physical network.

For a while, I flirted with deploying a fully custom system utilising open-source RADIUS and ChilliSpot software. This path did not bear fruit, and I settled back on bashing through the UC API.

An example of some of the calculations that occur while authorising a client

Nike functions as a Python application that interfaces with the UC whenever it needs to authorise, de-authorise, or check usage by a client. Data usage tracking is handled by custom code and stored in our local Postgres database.

The custom implementation allows us to do some fun stuff, like offer refunds of partial usage, and allow customers to stack multiple data packs on top of each other. Nike continuously updates the UC whenever a client’s remaining quota changes, and the UC then internally handles disconnecting the client when they exceed their quota.
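To make the quota logic concrete, here is a rough sketch in simplified Python. The DataPack shape and the UnifiController wrapper are invented stand-ins for illustration; Nike’s real classes, and the UC’s real API calls, look different.

from dataclasses import dataclass
from typing import List

@dataclass
class DataPack:
    purchased_bytes: int   # size of the pack as sold
    refunded_bytes: int    # portion refunded back to the customer, if any

def remaining_bytes(packs: List[DataPack], used_bytes: int) -> int:
    """Total entitlement across all stacked packs, less refunds and usage."""
    entitlement = sum(p.purchased_bytes - p.refunded_bytes for p in packs)
    return max(entitlement - used_bytes, 0)

class UnifiController:
    """Thin stand-in for whatever client code actually talks to the UC."""
    def authorise(self, mac: str, byte_limit: int) -> None: ...
    def deauthorise(self, mac: str) -> None: ...

def sync_client(uc: UnifiController, mac: str,
                packs: List[DataPack], used_bytes: int) -> None:
    quota = remaining_bytes(packs, used_bytes)
    if quota > 0:
        # Re-authorise with the updated limit; the UC itself cuts the
        # client off once the limit is exceeded.
        uc.authorise(mac, byte_limit=quota)
    else:
        uc.deauthorise(mac)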

Final Thoughts

Isolation, latency, and high operating costs make Lord Howe Island a difficult environment in which to deploy public internet. The internet is, however, an increasingly crucial element of modern life. Participation in modern economic activity requires a reliable connection, and I hope that in the long term Nike can provide a valuable service to residents and guests of Lord Howe Island.

If you’d like to discuss the project, hit me up on Twitter.

It’s Time To Change Section 44

Ludlam, Waters, Canavan, Roberts, Joyce – The Section 44 pain train is rolling through an ever-growing list of representatives and senators. It is time for this absurdity to stop. Section 44, specifically subsection (i), reflects an outmoded, irrelevant view of what it means to be an Australian citizen. It is actively harmful to our ability to grow and prosper as a nation.

Any person who –

Is under any acknowledgement of allegiance, obedience, or adherence to a foreign power, or is a subject or a citizen or entitled to the rights or privileges of a subject or citizen of a foreign power

[…]

shall be incapable of being chosen or of sitting as a senator or a member of the House of Representatives.

The members embroiled in the dual-citizenship fiasco hail from across the political spectrum. Regardless of our persuasions, we can surely agree that all these members are patriots, acting for the best interests of the Australian people, even if they disagree about how those interests are best served.

Australia is a diverse nation of immigrants. This is our great strength, the secret sauce that has propelled us to great wealth, peace, and prosperity. We should want our parliament to reflect our diversity, to contain bridges to the world’s peoples. Such bridges, manifesting as dual citizenships, are tools that allow our parliament to better act in our collective interest.

If senator Canavan’s mother signed him up to be an Italian citizen without his knowledge, if senator Waters immigrated here from Canada as an infant, we should savour and welcome and support their links to these foreign lands. We should welcome senators Canavan, Waters, and other dual citizens as a strength, a representation in the legislature of our collective, diverse, immigrant selves.

The alternative, the status quo, is to admit a great insecurity about our way of life. We suggest that a dual citizenship, however tenuous, is corruption enough to make a member likely to act against Australia. Forget foreign spies, bribery, infiltration, inducements. No: it is enough that an infant was born in a Canadian hospital for us to brand that infant a likely traitor!

We grow stronger by embracing the ties that bind us to our fellow creatures around this Earth. We have grown wealthy, safe, strong, and prosperous through such embraces, while more insular, inward-looking nations struggle with the limitations such insulation imposes.

Trade, treaties, the movement of people, the flow of finance, the exchange of ideas: An Australia more tightly bound to Canada, to Italy, or to any other nation is a stronger Australia. Members of parliament with ties to these and other lands are a source of strength, not weakness. It’s time to amend the constitution, re-write section 44(i), and embrace our own strength.

Bigint vs Numeric Performance in PostgreSQL 9.6


Suppose we have an application that stores two big lists of numbers in a relational database. These numbers represent money: both the number of money units in each transaction, and the price of that money.

In response to a client query, we must multiply the big list of transaction values by the big list of prices, and sum a running total of those multiples. We assume we can’t pre-calculate and store the running values, and that we must supply the client with strings, perhaps because they are communicating with us via a web API. Suppose clients expect responses in real time.

Storing money requires absolute precision. We can’t use floats, for obvious reasons. So we would probably use an arbitrary precision type. If we were using PostgreSQL 9.6, as this article assumes, then we might choose to use a numeric type to store both values and prices.

Numerics give us arbitrary precision, but come with some baggage. Numerics are binary coded decimals, and CPUs can’t interact with them directly using hardware instructions. Whereas two 64-bit integers will fit in a CPU register and be happily summed, two numerics need to go through a bunch of software shuffling to perform a calculation.

Therefore, if we need to perform lots of calculations quickly, numerics might slow us down when compared to using a 64-bit integer. In PostgreSQL, a 64-bit integer is called a bigint.

Hypothesis: Performing all calculations on integers of type bigint, rather than binary coded decimals of type numeric, will vastly reduce computation times when responding to our client’s queries. Let’s assume the following:

  • A price is stored as a bigint with an implicit precision of four decimal places. For example, a price of £4.4352 is stored as 44352 (NB the absence of the decimal point).
  • An amount is stored as a bigint with an implicit precision of two decimal places. This suits currencies with two decimal places of minor unit precision, such as the US Dollar (USD). For example, an amount of $12,045.42 is stored as 1204542.

The value of any transaction is therefore the price of the money (44352) times the amount of money units (1204542) divided by the extra orders of magnitude introduced by the implicit decimal places.

In our example, the price has introduced four decimal places, and the amount has introduced two decimal places, for a total of six. Therefore, the calculation is as follows:

44352 * 1204542 / 10^6

This yields 53423.846784, which may be nicely formatted for client consumption as $53,423.85.

So, to get to our true value with explicit decimal places, we need to perform division. In PostgreSQL we can’t divide two values of type bigint without losing precision. For example, select (10::bigint / 3::bigint) yields 3. Because we are dealing with money, that is not acceptable.

Numerics, as arbitrary precision types, can retain precision on division. We will cast to a numeric, and divide by another numeric of appropriate size for our implied decimal places.

Here’s the example above written in Postgres 9.6 SQL:

SELECT 
    to_char(value::numeric(15,4) / 1000000::numeric(15,4), 'FM999,999,999,999.99') AS str_representation
FROM (
    SELECT
        amount * price AS value
    FROM (
        SELECT
            1204542::bigint AS amount,
            44352::bigint AS price
    ) AS random_data
) AS calculation

Is this really faster than using binary coded decimals? To answer that question, we’ll create a big set of random data that we can operate on.

We will assume a largish organisation that processes 250 transactions per day every day for 10 years, or 912,500 transactions, and round it up to 1,000,000 for luck. We will assume these transaction amounts are in USD, with two decimal places of implied precision, and are sized somewhere between $0 and $200. Each amount is processed at a price to convert the USD to some Pounds Sterling (GBP). Let’s assume (somewhat absurdly) that each GBP price is somewhere between £0 and £10 with four decimal places of implied precision.

SELECT
    generate_series (1, 1000000) as row_id,
    (random() * 20000)::bigint AS amount,
    (random() * 100000)::bigint AS price

My machine features an i7-5557U CPU, and it takes ~670ms to generate these data, which look like this:

row_id   | amount | price  
---------+--------+--------
       1 |  14179 |  48998
       2 |  16948 |  56369
       3 |  12760 |  16965
       4 |  17177 |    977
       5 |  11632 |  38872
       6 |  18370 |  44416
       7 |  14370 |  89625

Now, let’s perform the necessary calculations. For each of the million rows, we need to multiply the amount by the price, and then sum those multiples into a running balance column.

SELECT 
    value, 
    SUM(value) OVER (ORDER BY row_id) AS balance
FROM (
    SELECT 
        row_id, amount * price AS value
    FROM (
        SELECT 
            generate_series (1,1000000) as row_id,
            (random()*20000)::bigint AS amount, 
            (random()*100000)::bigint AS price
    ) AS random_data
) AS raw_values

This takes ~1,550ms on my i7-5557U. Less the ~670ms to generate the data yields a calculation time of ~880ms. Computers are so cool. That’s just 880 nanoseconds per row! Here’s some sample output:

   value   |    balance     
-----------+----------------
  15987240 |       15987240
   5282332 |       21269572
  32625180 |       53894752
 125860125 |      179754877
   3753301 |      183508178
  79953750 |      263461928
  74762297 |      338224225

These huge integers, with their implicit decimal places, are not very useful. They are certainly not human readable. We need to convert them into decimal numbers with explicit decimal places. To do so, we will cast them to type numeric. Finally, recall that our clients require string output, so we will convert the numeric into a nicely formatted string using the to_char() function.

SELECT
    to_char(value::numeric / 1000000::numeric, 'FM999,999,999,990.90') as str_value,
    to_char(balance::numeric / 1000000::numeric, 'FM999,999,999,990.90') as str_balance
FROM (
    SELECT 
        value, 
        SUM(value) OVER (ORDER BY row_id) AS balance
    FROM (
        SELECT 
            row_id, amount * price AS value
        FROM (
            SELECT
                generate_series (1, 1000000) as row_id,
                (random() * 20000)::bigint AS amount, 
                (random() * 100000)::bigint AS price
        ) AS random_data
    ) AS raw_values
) AS balances

Bam! Approximately 2,700ms. Less generation (~670ms) and calculation (~880ms), that’s ~1,150ms of string processing. All that bigint::numeric type casting and numeric / numeric division adds a good chunk of change. Here is some sample output:

 str_value    | str_balance 
--------------+----------------
 1,029.03     | 1,029.03
 85.97        | 1,115.00
 291.78       | 1,406.78
 670.64       | 2,077.42
 62.33        | 2,139.75
 142.92       | 2,282.68
 2.41         | 2,285.09
 31.01        | 2,316.10

Alright, let’s try the same thing using numeric the whole way through. Then we will compare the difference. First, we’ll generate the random data, sizing the numeric columns generously so they comfortably cover the ranges we used in the bigint case.

SELECT
    generate_series (1, 1000000) as row_id,
    (random() * 200)::numeric(17,2) AS amount, 
    (random() * 1)::numeric(15,4) AS price

Generating the random data takes ~2,200ms, and looks as follows:

 row_id  | amount | price  
---------+--------+--------
       1 |  84.33 | 0.8137
       2 | 178.67 | 0.0793
       3 | 115.89 | 0.1935
       4 |   5.61 | 0.9763
       5 |  32.45 | 0.8218
       6 | 154.49 | 0.4451
       7 | 127.43 | 0.3493

Now the basic balance calculation, wherein we multiply amount by price, and then sum a running total of those multiples:

SELECT 
    value, 
    SUM(value) OVER (ORDER BY row_id) AS balance
FROM (
    SELECT 
        row_id, amount * price AS value
    FROM (
        SELECT
            generate_series (1, 1000000) as row_id,
            (random() * 200)::numeric(17,2) AS amount, 
            (random() * 1)::numeric(15,4) AS price
    ) AS random_data
) AS raw_values

That takes ~3,550ms. Less generation time (~2,200ms), that yields a calculation time of ~1,350ms. The output looks like this:

   value    |     balance     
------------+-----------------
  32.893335 |       32.893335
   9.424890 |       42.318225
  46.160686 |       88.478911
 134.282533 |      222.761444
  19.844650 |      242.606094
   8.255778 |      250.861872
   2.859764 |      253.721636

Finally, the complete query. This time, we already have a list of values of type numeric, so we don’t need to perform the initial typecasting step we did in the bigint version. Instead, we just use to_char() to create pretty strings.

SELECT
    to_char(value, 'FM999,999,999,990.90') as str_value,
    to_char(balance, 'FM999,999,999,990.90') as str_balance
FROM (
    SELECT 
        value, 
        SUM(value) OVER (ORDER BY row_id) AS balance
    FROM (
        SELECT 
            row_id, amount * price AS value
        FROM (
            SELECT
                generate_series (1, 1000000) as row_id,
                (random() * 200)::numeric(17,2) AS amount, 
                (random() * 1)::numeric(15,4) AS price
        ) AS random_data
    ) AS raw_values
) AS balances

The full query, in numeric form, takes ~4,200ms. Less generation (2,200ms) and calculation (~1,350ms) time, the string processing takes ~650ms.

As expected, the natively numeric query is much faster at generating strings: ~650ms vs ~1,150ms, or about 56% of the bigint query’s string-processing time.

However, the bigint query wins in the calculation stakes: ~880ms vs ~1,350ms, or 65% of the numeric time. That means that, excluding the random data generation time, the two queries run pretty much equal: 2,030ms for the bigint, and 2,000ms for the numeric.

Of course, we must consider the elephant in the room: maximum precision. Using the bigint technique, we give up a large chunk of our maximum storable value every time we perform a multiplication. With an amount featuring two implied decimals, and a price featuring four, we lose six digits.

A PostgreSQL bigint can store a maximum positive value of 9,223,372,036,854,775,807, or several quintillion units. Less the six digits we used for implied decimals, that falls to roughly nine trillion units.

That puts the bigint calculation in a bind. Say we have a query that requires a further calculation step: perhaps price is calculated as the multiple of two numbers with four decimal digits, e.g. 2.4561 * 1.3258. That is a common case for currencies, and would cause our maximum balance value to fall to around nine hundred million.
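For a quick back-of-the-envelope check of those ceilings:

9,223,372,036,854,775,807 / 10^6 ≈ 9.2 trillion
9,223,372,036,854,775,807 / 10^10 ≈ 922 million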

Based on the above analysis, we might reasonably assume that such a calculation-heavy query denominated in bigint might pull away from a numeric version. However, the loss of maximum balance value might preclude its use for clients with big balance numbers.

As is the way of these things, I’ve now got more questions than I did when I started. What I want to know now is:

  • Does changing the numeric precision definition, e.g. numeric(9,4) instead of numeric(15,4), affect PostgreSQL calculation performance?
  • We ignored retrieval speed in this analysis. When the data is coming off the disk, do numeric and bigint cause materially different read times?
  • Does adding an extra calculation step cause the bigint method to pull away from numeric?
  • Could bigint and numeric be substituted depending on client needs, allowing small clients to benefit from speed while big ones benefit from arbitrary precision?

Questions for another day.

Chat to me about this article anytime on Twitter.