Currently working to keep one of the world's biggest entertainment franchises available.
Site Reliability Engineer
First off, I'm not a graphic designer by profession and haven't received any kind of training in the field - so don't take this as a tutorial on how to create your company's logo because in all likelihood I haven't got the faintest clue what I'm talking about. I am, however, a huge fan of learning to do new things; and in my case that generally involves mashing together a bunch of Google searches until I find some information that gets me on the way.
Static websites are synonymous with the dawn of the internet: before database servers became mainstream, before the advent of the CMS, and long before web applications. Over the years we've seen the rise of web development frameworks like Ruby on Rails, Express.js and MVC, to name but a few. These frameworks include support for advanced templating engines, database-backed page generation and custom routing, but is it really necessary to use such a framework when a static website might address all the same problems at a fraction of the cost?
Code Highlighting is one of those things which doesn't seem like a big deal, until you see what a difference it can make. The issue is that source code is inherently difficult to read due to the vast number of keywords and punctuation used by compilers to understand what we are trying to tell them to do. In an effort to combat this difficulty, we rely on two different tools.
The first, formatting, is probably the more important of the two: it is the process of making code easier to read by adding whitespace. That whitespace often makes no difference to a compiler, but by adding newlines and indentation we make the code considerably easier for humans to read.
The second, highlighting, is the automated (or manual, if you're a masochist) process of colouring different parts of the source code to make it easier for humans to read. This involves rendering specific keywords in certain colours, variable names in another, and so on.
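The formatting half is easy to demonstrate with a contrived shell sketch (the `greet` functions here are purely illustrative). Both definitions are identical as far as the shell is concerned, but the whitespace in the second makes the branching obvious at a glance:

```shell
# Unformatted: a single dense line.
greet(){ if [ -n "$1" ]; then echo "Hello, $1"; else echo "Hello, world"; fi; }

# Formatted: the newlines and indentation carry no meaning for the
# shell, but make the structure far easier for a human to scan.
greet_formatted() {
    if [ -n "$1" ]; then
        echo "Hello, $1"
    else
        echo "Hello, world"
    fi
}

greet "Bob"        # prints: Hello, Bob
greet_formatted    # prints: Hello, world
```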
Until now, all of my work on websites has been done in HTML. Write HTML for this page, write HTML for that project and so on. HTML is one of those languages which anyone who considers themselves good with computers should know, but it also leaves a lot to be desired. In the latest version of our website, I decided to move to Markdown as our primary markup language for documents. Markdown continues to grow more popular, especially on tech-centric sites like StackOverflow and GitHub, and yet if you talk to most people who are merely "good" with computers, they've never heard of it. That's somewhat strange given that Markdown is designed to be an easier-to-use, easier-to-read shorthand for HTML when writing documents; but I guess that's just the way of things.
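A small sample makes the comparison concrete (the snippet below is invented for illustration, not taken from our site):

```markdown
# A Heading

Some **bold** text, a [link](https://example.com) and a list:

- First item
- Second item
```

The equivalent HTML needs `<h1>`, `<strong>`, `<a href="…">` and `<ul>`/`<li>` tags, with all the opening and closing brackets that entails; the Markdown version stays readable even before it's rendered.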
Docker Swarm is one of those interesting new technologies which has succeeded in shaking up people's preconceptions about what it means to run a scalable cluster. In an environment where everyone seems to be building a cluster orchestrator, including some big names like Google's Kubernetes, HashiCorp's Nomad and Mesosphere's Marathon, Swarm has managed to burst through as one of the most attractive orchestration frameworks out there.
As a result of all this hype, it can be difficult to decide whether Swarm is the right tool to use. As someone who has had extensive experience running Swarm, Kubernetes, DC/OS (Marathon) and Rancher in production environments, I'll try to give you an unbiased view of the reasons you'd choose Swarm and some of the gotchas to be aware of.
Traefik is an application load balancer written in Go and designed to simplify the task of serving HTTP(S) services whose configuration changes on the fly. Traefik v1.1.0 was recently released with support for Docker Swarm and it works excellently.
In this post, we'll go through how one sets up their Swarm cluster to automatically expose its services through Traefik.
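As a rough sketch of what that setup involves (the network name, domain and `whoami` image below are placeholder assumptions, not a definitive configuration), the broad strokes with Traefik v1.1's Swarm support look something like this:

```shell
# Create an overlay network shared by Traefik and the services it exposes.
docker network create --driver=overlay traefik-net

# Run Traefik on a manager node so it can query the Swarm API,
# telling it to watch Swarm services via the Docker socket.
docker service create \
    --name traefik \
    --constraint=node.role==manager \
    --publish 80:80 --publish 8080:8080 \
    --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
    --network traefik-net \
    traefik:v1.1.0 \
    --docker \
    --docker.swarmmode \
    --docker.domain=example.com \
    --docker.watch \
    --web

# Expose a service by attaching it to the same network and labelling
# it with the port Traefik should route traffic to.
docker service create \
    --name whoami \
    --network traefik-net \
    --label traefik.port=80 \
    emilevauge/whoami
```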
Sierra Softworks has a brand new website, rebuilt from the ground up using the brilliant Hexo project. A lot of emphasis was placed on making it as easy as possible for us to publish new content here while minimizing the rate at which content becomes outdated (something our previous website suffered from rather badly).
As a result, we've tried to move all the project pages to their GitHub repositories and provide a dynamically generated list of them here. Unfortunately, not every project we had previously is on GitHub, so we're busy migrating some of the older content across to this website.
If you can't find one of our older projects here, please send us an email.
If you're just looking to hop straight to the final project, you'll want to check out SierraSoftworks/bash-cli on GitHub.
Anybody who has worked in the ops space has probably built up a veritable library of scripts which they use to manage everything from deployments to brewing your coffee.
Unfortunately, this tends to make finding the script you're after, and its usage information, a pain: you'll either end up digging through a README file, or praying that the script has a help feature built in.
Neither approach is conducive to a productive workflow for you or those who will (inevitably) replace you. Even if you do end up adding help functionality to all your scripts, it's probably a rather significant chunk of your script code that is dedicated to docs...
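To illustrate that documentation overhead, here's the sort of hand-rolled help block that tends to accumulate in every script (the `deploy.sh` name and its arguments are invented for the example):

```shell
#!/usr/bin/env bash

# The obligatory usage text, re-written by hand in every script.
usage() {
    cat <<'EOF'
Usage: deploy.sh <environment> [version]

Deploys the application to the given environment.

Arguments:
  environment   One of: staging, production
  version       The version to deploy (defaults to latest)
EOF
}

# And the obligatory flag handling to go with it.
main() {
    if [ "$1" = "-h" ] || [ "$1" = "--help" ] || [ -z "$1" ]; then
        usage
        return 1
    fi
    echo "Deploying ${2:-latest} to $1..."
}
```

Multiply that by a few dozen scripts and the boilerplate quickly outweighs the logic you actually care about.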
After a project I was working on started reaching that point, I decided to put together a tool which should help minimize both the development workload around building well documented scripts, as well as the usage complexity related to them.
Bash's ability to automatically suggest completions for a command when you press the Tab key is one of its most useful features. It makes navigating complex command lines trivially simple; however, it's generally not something we see used that often.
Bash CLI was designed with the intention of making it as easy as possible to build a command line tool with a great user experience. Giving our users the ability to use autocompletion would be great, but we don't want to make it any more difficult for developers to build their command lines.
Thankfully, Bash CLI's architecture makes adding basic autocomplete possible without changing our developer-facing API (always a good thing).
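To sketch the Bash mechanism involved (this is a generic illustration rather than Bash CLI's actual implementation; the `mycli` command and its subcommand list are assumptions):

```shell
# Completion function Bash calls when you press Tab after "mycli".
_mycli_complete() {
    # The word currently being completed.
    local current="${COMP_WORDS[COMP_CWORD]}"

    # In a Bash CLI-style tool, commands are just files in a directory,
    # so a real implementation would list that directory instead of
    # hard-coding the options here.
    local commands="deploy status help"

    # Offer every command that matches what's been typed so far.
    COMPREPLY=($(compgen -W "${commands}" -- "${current}"))
}

# Register the function as the completion handler for "mycli".
complete -F _mycli_complete mycli
```

Because the function only inspects the word under the cursor and a list of commands, it can be generated once by the framework and never touched by the developer building the CLI.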
Inki is a small proof-of-concept project I've been working on, designed to manage transient, single-use SSH keys for an automated remediation tool our team is in the process of building.
In this blog post I'll go over some of the design decisions motivating a tool like Inki, some of its interesting implementation details and the questions we're hoping it will allow us to answer.