refactor by partial evaluation

Some time ago I read an article on proggit that drew an analogy between compression and clean design: if you see a lot of repetition, you factor out that structure and reuse it, the same way a compression algorithm extracts common patterns and reuses them to represent some blob of data more compactly. Sometimes, though, it is not the repetition or the common patterns that make things hard to understand, but too much generality and indirection. So how do you solve that problem? Continue reading
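To make the idea concrete, here is a minimal Ruby sketch of what refactoring by partial evaluation can look like; the render function and its option names are hypothetical, invented purely for illustration:

    require 'json'

    # Before: one over-general entry point whose flexibility no caller uses.
    def render(data, format:, pretty:, encoding:)
      text =
        case format
        when :json    then pretty ? JSON.pretty_generate(data) : JSON.generate(data)
        when :inspect then data.inspect
        else raise ArgumentError, "unknown format: #{format}"
        end
      text.encode(encoding)
    end

    # After: every call site passed format: :json, pretty: false,
    # encoding: "UTF-8", so we evaluate those branches away by hand
    # and keep only the specialized residue.
    def render_json(data)
      JSON.generate(data).encode("UTF-8")
    end

    puts render({ a: 1 }, format: :json, pretty: false, encoding: "UTF-8")
    puts render_json({ a: 1 }) # same output, none of the indirection

The specialized version is the residual program a partial evaluator would produce given the constant arguments, and the indirection disappears with it.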

software and its ecosystem (or lack thereof)

Introduction

All software, at the end of the day, is written for some purpose. In order to fulfil that purpose the software must be deployed into an environment that can sustain it. Unfortunately this aspect of software development falls into an uncanny valley surrounded by development, infrastructure, and operations. The devops movement was motivated partly by the need to shine light into that uncanny valley. Initially the goal was just to reveal all the dark corners and flush out all the monsters that dwelt there, and once the shock had worn off people started to think of ways to keep the monsters at bay. Many practices developed for taming the complexity of software projects were brought into the uncanny valley, and today the aspiring infrastructure or operations engineer has no shortage of tools to choose from for keeping a consistent fleet. In many ways things have gotten better, but in many other ways things are still pretty bad, and there is no tool out there that will save you.

Continue reading

continuous integration for infrastructure

If you are using the cloud for your infrastructure then you need to version it the same way you version your code and deployment artifacts. Lately I’ve been using packer to generate AMIs and I couldn’t be happier with how things are working out. I now have a consistent environment whenever I want to experiment with something, and when I discover something worthwhile that should be basic functionality in every environment, I check it into the repository that contains the packer templates and regenerate all the AMIs. This means I can go back to any point in time and spin up an environment exactly as it existed then. Infrastructure is no longer something special. It is just another artifact in a software development pipeline and you should treat it that way.
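For reference, a minimal packer template for building an AMI looks roughly like the following; every concrete value here (region, source AMI, instance type, script name) is a placeholder rather than anything from my actual setup:

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "base-environment-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "script": "provision.sh"
      }]
    }

Check the template and provision.sh into version control, and a git SHA plus packer build is enough to reproduce any AMI you have ever generated.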

infrastructure 2.0

So I’ve been doing this long enough, and have played with enough tools, to give you some pointers on how exactly you should set up your infrastructure if you want things to be as smooth as possible. There are two quotes I go back to every time I’m trying to figure out how to approach an infrastructure problem:

The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. – Alan Kay

When you do things right, people won’t be sure you’ve done anything at all. – God Entity (Futurama, Godfellas)

To live up to the above two quotes you need a bit of help. That help comes in the form of some really neat software:
Continue reading

frameworks are terrible

Yes, frameworks are terrible. At least that’s the case in the two domains I’m most familiar with: web and infrastructure. In the web camp you have ember.js, angular.js, ext.js, etc., which butcher JavaScript and then put it back together in a way that is completely unrecognizable. In the infrastructure camp you have ansible, chef, puppet, etc., and all of those are equally terrible, especially puppet, which takes Ruby and layers some kind of pseudo graph model on top of it using its own cute little DSL. Some people claim great productivity boosts with frameworks, but if you look carefully it turns out that for every productivity claim there is a counterclaim about all sorts of headaches and lost hours. Continue reading

simple in-memory store with consistent reads

Problem Statement

Suppose you want to write a simple in-memory JSON store with an equally simple socket-based protocol. You want this in-memory store to support parallel and consistent reads. By “parallel reads” I mean that if 10 clients request to read data from the store, then no client should block any other client. By “consistent reads” I mean that when a client requests some data from the store, there is absolutely no way that client gets half of the data from before a write and half of it from after, and that reads and writes observe some kind of ordering. In other words, if we have an array “[1,2,3,4]” stored under the key “ints” in our JSON store, then the following sequence of events is impossible: Continue reading
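One way to get both properties, sketched below in Ruby, is to treat the store as an immutable snapshot and publish a new snapshot under a write lock: readers never take a lock, so they run in parallel, and each reader works off a single frozen snapshot, so it can never observe a half-applied write. The class and method names are hypothetical, and the original post may well take a different approach:

    require 'json'

    # Minimal sketch, not the post's actual implementation. Note: the
    # bare reference swap is safe to publish under MRI's global lock;
    # other runtimes would want an explicit atomic reference.
    class SnapshotStore
      def initialize
        @root = {}.freeze        # current immutable snapshot
        @write_lock = Mutex.new  # serializes writers, imposing an order
      end

      # Consistent read: capture the snapshot once, then work off it.
      def read(key)
        snapshot = @root
        JSON.generate(snapshot[key])
      end

      # Copy, modify, freeze, then atomically publish the new root.
      def write(key, value)
        @write_lock.synchronize do
          next_root = @root.dup
          next_root[key] = value
          @root = next_root.freeze
        end
      end
    end

    store = SnapshotStore.new
    store.write("ints", [1, 2, 3, 4])
    puts store.read("ints") # => [1,2,3,4]

The mutex gives writes a total order, and because no reader ever sees a snapshot mid-mutation, every read is consistent with exactly one point in that order.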

job description boilerplate

During my job search I came across a lot of job descriptions that were just a waste of space. The same number of words could have been used to describe the taste of packaged tuna, and if you could have cloned yourself, the clone that read about the taste of packaged tuna would have come out ahead in terms of being a better person. But I’m never one to just criticize, so here’s a concrete example of what a technology company’s job description should look like:

At our core we are a technology company. We move fast but we do not break things. Continue reading

enforcing invariants with singleton classes and method redefinitions

If you play around with Ruby long enough you start to notice that Ruby programmers overall tend to prefer small, domain-specific libraries, Ruby on Rails notwithstanding. There are many good reasons for this kind of approach from a software engineering perspective, but the biggest one is that Ruby makes it extremely easy by providing the right kind of metaprogramming facilities. Continue reading
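As a small taste of the technique in the title, here is a hedged sketch (the Connection example is mine, not the post’s): once a method has established an invariant, redefine that method on the object’s singleton class so the invariant cannot be violated again:

    class Connection
      def open
        # ... acquire the underlying resource here ...
        # Redefine +open+ on this instance's singleton class so a
        # second call cannot re-acquire or corrupt the state.
        define_singleton_method(:open) do
          raise "connection already open"
        end
        self
      end
    end

    conn = Connection.new
    conn.open        # establishes the invariant
    begin
      conn.open      # the redefined version fires instead
    rescue => e
      puts e.message # => connection already open
    end

Because the redefinition lives in conn’s singleton class, every other Connection instance is unaffected.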