Today I work as an SRE, surrounded by dozens of complex systems designed to streamline the process
of taking the code we write and exposing it to customers. It’s easy to forget that software deployment
itself is a problem that many developers have not yet solved.
Today I’d like to run you through a simple process I recently implemented for Git Tool
to enable automated updates with minimal fuss. It’s easy to implement and works
without any fancy tooling.
So you’re sitting in front of your computer, wondering why your unit tests won’t
show up in Visual Studio’s Live Test Window. They appear fine in the normal Tests
Window and run without problems; you haven’t done anything weird, and all you
want is to be able to see whether your code works.
Have you ever run into a situation where Git just refused to obey your commands? No, I’m
not talking about that time you “typo-ed” git commit and ended up git reset --hard-ing
your repository back to the dawn of the universe; I’m talking about Git really, truly
refusing to do what you ask.
I have, so let me tell you a story about what happened and how I fixed it, so that you can
avoid future hair-loss and stop questioning the nature of your reality.
As an engineer, I like to think that I help fix problems. That’s what I’ve tried to do most of
my life and career and I love doing so to this day. It struck me, though, that there was one
problem which has followed me around for years without due attention: the state of my development
directories.
That’s not to say that they are disorganized; I’ve spent hours deliberating over the best way
to arrange them such that I can always find what I need, yet I often end up having to resort to
some dark incantation involving find to locate the project I was certain sat under my Work
directory.
No more: I’ve drawn the line and decided that if I can’t fix the problem, automation damn well
better be able to!
I’d like to introduce you to my new, standardized (and automated), development directory structure
and the tooling I use to maintain it. With any luck, you’ll find it useful and it will enable you
to save time, avoid code duplication and more easily transition between machines.
If you’ve built a production API before, you’ll know that they tend to
evolve over time. This evolution is not only unavoidable, it is a natural
state that any active system will exist in until it is deprecated.
Realizing and designing to support this kind of evolution in a proactive
way is one of the aspects that differentiates a mature API from the thousands
that litter the Wall of Shame.
At the same time, it is important that your API remains easy to use and
intuitive, maximizing the productivity of developers who will make use of it.
It’s a poorly hidden fact that I love Kubernetes. After spending months running everything from
Marathon (DC/OS) and CoreOS to Rancher and Docker Swarm in production, Kubernetes is the only
container orchestration platform that has struck me as truly “production ready”, and I
have been running it for the past year as a result.
While its functionality was somewhat patchy and uninteresting when I first started using it (v1.4),
some of the more recent updates have made sizeable strides towards addressing the operations
challenges we face on a daily basis.
With v1.8, Kubernetes has promoted the CronJob controller to batch/v1beta1, making it
broadly available for people to play with. It sounds like the perfect time to show you how we
use CronJobs to manage automated, scheduled backups within our environments.
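As a rough sketch of what that looks like (the names, image and schedule below are placeholders, not our actual configuration), a CronJob for a nightly backup might be declared as:

```yaml
# Hypothetical sketch: a nightly backup job. The image, arguments and
# schedule are placeholders — substitute your own backup tooling.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: example/backup-tool:latest
              args: ["--target", "s3://example-bucket/backups"]
          restartPolicy: OnFailure
```

The controller then spawns a Job (and its Pod) from `jobTemplate` each time the cron schedule fires.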
One of the most interesting discussions to have with people, notably those
with traditional database experience, is that of the relationship between
an off-the-shelf RDBMS and some modern NoSQL document stores.
What makes this discussion so interesting is that there’s invariably a lot
of opinion, driven by often very valid experience, one way or another.
The truth is that there simply isn’t a silver-bullet database solution and
that by better understanding the benefits and limitations of each, one can
make vastly better decisions on their adoption.
Docker has become an incredibly prevalent tool in the development and
operations realms in recent months. Its combination of developer friendly
configuration and simple operational management make it a very attractive
prospect for companies and teams looking to adopt CI and CD practices.
In most cases, you’ll see Docker used to deploy applications in much the
same way as a zip file or virtual machine image. This is certainly the
most common use case for Docker, but by no means the extent of its capabilities.
In this post I’m going to discuss some of the more interesting problems
we’ve used Docker to solve and why it serves as a great solution to them.
Aurelia is a modern web application framework in the spirit of Angular,
with an exceptionally concise and accessible developer experience and
standards compliant implementation. It is hands down my favorite web
framework right now and one I’d strongly recommend for most projects.
One of Aurelia’s greatest claims to fame is the incredible productivity
you can achieve, enabling you to build a full web application in just
days, if not hours.
When building the application becomes that fast, spending a day putting
together your deployment pipelines to roll out your application becomes
incredibly wasteful, so how can we avoid that?
Well, Docker offers us a great way to deploy and manage the life-cycle
of production applications. It enables us to deploy almost anywhere, with
minimal additional effort and in a highly reproducible fashion.
In this post I’ll go over the process of Dockerizing an existing Aurelia
web application built with WebPack; however, the same process applies to
those built using SystemJS.
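As a rough preview (the base images, file names and npm script here are assumptions rather than a canonical recipe), a two-stage Dockerfile for a WebPack-built app generally looks like this:

```dockerfile
# Hypothetical sketch: build the Aurelia app in one stage, then serve
# the static output with nginx in a much smaller final image.
FROM node:8 AS build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build          # assumes a WebPack build script emitting ./dist

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

The final image contains only nginx and the compiled assets, keeping deployments small and reproducible.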
With the increasing popularity of Git as a tool for open source collaboration,
not to mention distribution of code for tools like Go, being able
to verify that the author of a piece of code is indeed who they claim to be
has become absolutely critical.
This requirement extends beyond simply ensuring that malicious actors cannot
modify the code we’ve published, something GitHub and its kin
(usually) do a very good job of preventing.
The simple fact is that by adopting code someone else has written, you are
entrusting your clients’ security to them; you’d best be certain that trust
is wisely placed.
Using Git’s built-in support for PGP signing and pairing it with
Keybase provides you with a great framework on which to build and
verify that trust. In this post I’ll go over how one sets up their development
environment to support this workflow.
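As a taste of where that ends up (the key ID below is a placeholder for your own), the relevant Git configuration boils down to a couple of entries:

```ini
# ~/.gitconfig — tell Git which PGP key to sign with, and to sign
# every commit by default. Replace the key ID with your own
# (gpg --list-secret-keys will show it).
[user]
    signingkey = DEADBEEFDEADBEEF
[commit]
    gpgsign = true
```

With `commit.gpgsign` enabled, every `git commit` is signed automatically, and anyone with your public key can verify it with `git log --show-signature`.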
Anybody who has worked in the development world for a significant portion of
time will have built up a vast repertoire of abbreviations to describe how
they solve problems. Everything from TDD to DDD and, my favourites, FDD
and HDD. There are so many in fact that you’ll find a
website dedicated to naming and shaming them.
I’m not one to add another standard to the mix… Oh who am I kidding, let me
introduce you to Chance Driven Development.
Bash’s ability to automatically provide suggested completions to a command
by pressing the Tab key is one of its most useful features. It
makes navigating complex command lines trivially simple; however, it’s generally
not something we see that often.
Bash CLI was designed with the intention of making it as easy as possible to
build a command line tool with a great user experience. Giving our users the
ability to use autocompletion would be great, but we don’t want to make it
any more difficult for developers to build their command lines.
Thankfully, Bash CLI’s architecture makes adding basic autocomplete possible
without changing our developer-facing API (always a good thing).
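To make the general mechanism concrete, here’s a minimal, self-contained sketch of how Bash completion registration works (the `bcli` command and its subcommands are made up for illustration; this is not Bash CLI’s actual implementation):

```shell
# Completion function for a hypothetical "bcli" command. Bash calls it
# when the user presses Tab, with the partial word in COMP_WORDS.
_bcli_completions() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    # A real tool would discover its commands dynamically; we use a
    # static list here for illustration.
    local commands="deploy status help"
    # compgen filters the word list down to those matching the prefix.
    COMPREPLY=( $(compgen -W "${commands}" -- "${cur}") )
}

# Register the function so Bash uses it to complete "bcli" arguments.
complete -F _bcli_completions bcli
```

Typing `bcli de<Tab>` would then complete to `bcli deploy`, with no change needed in the tool itself.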
If you’re just looking to hop straight to the final project, you’ll want
to check out SierraSoftworks/bash-cli on GitHub.
Anybody who has worked in the ops space has probably built up a veritable
library of scripts which they use to manage everything from deployments
to brewing you coffee.
Unfortunately, this tends to make finding the script you’re after
and its usage information a pain: you’ll either end up grep-ing
a README file, or praying that the script has a help feature built in.
Neither approach is conducive to a productive workflow for you or
those who will (inevitably) replace you. Even if you do end up adding
help functionality to all your scripts, it’s probably a rather significant
chunk of your script code that is dedicated to docs…
After a project I was working on started reaching that point, I decided
to put together a tool which should help minimize both the development
workload around building well documented scripts, as well as the usage
complexity related to them.
Sierra Softworks has a brand new website, rebuilt from the ground up using the
brilliant Hexo project. A lot of emphasis was placed on making it as
easy as possible for us to publish new content here while minimizing the rate
at which content becomes outdated (something our previous website suffered
from rather badly).
As a result, we’ve tried to move all the project pages to their GitHub repositories
and provide a dynamically generated list of them here. Unfortunately,
not every project we had previously is on GitHub, so we’re busy migrating some
of the older content across to this website.
Traefik is an application load balancer written in Go and designed to simplify
the task of serving HTTP(S) services whose configuration changes on the fly. Traefik v1.1.0
was recently released with support for Docker Swarm and it works excellently.
In this post, we’ll go through how one sets up their Swarm cluster to automatically expose
its services through Traefik.
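To give a flavour of what’s involved (the service name and hostname here are placeholders), exposing a Swarm service to Traefik v1.1 is largely a matter of attaching labels for it to discover:

```yaml
# docker-compose.yml fragment for `docker stack deploy` — names and
# hostnames are placeholders. Traefik watches the Swarm API and
# routes traffic based on these labels.
version: "3"
services:
  whoami:
    image: emilevauge/whoami
    deploy:
      labels:
        traefik.port: "80"
        traefik.frontend.rule: "Host:whoami.example.com"
```

Because the configuration lives on the service itself, Traefik picks up new services and scaling changes on the fly, with no reload required.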
Docker Swarm is one of those interesting new technologies which has succeeded in shaking up
people’s preconceptions around what it means to run a scalable cluster. In an environment
where everyone seems to be building a cluster orchestrator, including some big names like
Google’s Kubernetes, HashiCorp’s Nomad and Mesosphere’s Marathon,
Swarm has managed to burst through as one of the most attractive orchestration frameworks out there.
As a result of all this hype, it can be difficult to make a decision around whether Swarm is
the right tool to use. As someone who has had extensive experience with running Swarm,
Kubernetes, DC/OS (Marathon) and Rancher in production
environments, I’ll try to give you an unbiased view on the reasons you’d choose Swarm
and some of the gotchas to be aware of.
Until now, all of my work on websites has been done in HTML. Write HTML for this page,
write HTML for that project and so on. HTML is one of those languages which anyone who considers
themselves good with computers should know, but it also leaves a lot to be desired. In the latest version of our website,
I decided to move to Markdown as our primary markup language for documents. Markdown is one of those languages which
continues to grow more popular, especially on very tech-centric sites like StackOverflow and GitHub
and yet if you talk to most people who are merely “good” with computers, they have never heard of it. Somewhat strange given
that Markdown is designed to be an easier-to-use, easier-to-read shorthand version of HTML for writing documents; but I guess
that’s just the way of things.
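To give a sense of that shorthand, a snippet like the following replaces a fair pile of HTML tags:

```markdown
# A Heading

Some **bold** text, a [link](https://example.com) and a list:

- readable as plain text
- far terser than the equivalent HTML
```

The same content in raw HTML would need `<h1>`, `<p>`, `<strong>`, `<a>` and `<ul>`/`<li>` tags, which is exactly the verbosity Markdown sets out to avoid.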
Code Highlighting is one of those things which doesn’t seem like a big deal, until you see what a difference it can make. The issue is that source code is inherently difficult to read due to the vast number of keywords and punctuation used by compilers to understand what we are trying to tell them to do. In an effort to combat this difficulty, we rely on two different tools.
The first, formatting, is probably the most important; it is the process of making code easier to read through added whitespace, often this whitespace makes no difference for a compiler but by adding newlines and tabs, humans are able to read it considerably more easily.
The second, highlighting, is the automated (or manual, if you’re a masochist) process of colouring different parts of the source code to make it easier for humans to read. This involves colouring specific keywords one colour, perhaps variable names another, and so on.
Static websites are synonymous with the dawn of the internet, before database servers became mainstream, before the advent of the CMS and long before the dawn of the web application. Over the years we’ve seen the advent of web development frameworks like Ruby on Rails, Express.js and MVC to name but a few. These frameworks include support for advanced templating engines, database-backed page generation and custom routing, but is it really necessary to use such a framework when a static website might address all the same problems at a fraction of the cost?