Given that you can now deploy a ruby interpreter that won’t make experienced system administrators cry, the next logical step is to do the same for your ruby applications. Fortunately, bundler takes us almost all the way there with its ability to vendor dependencies. All you need to do is automate the process with a few rake tasks and you are pretty much good to go. Continue reading
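As a rough sketch, the automation can be as little as two rake tasks wrapping bundler (the task names are my own invention; `bundle package --all` and `bundle install --local` are the bundler features doing the real work). The `require`/`include` lines make it runnable as a plain script; inside a real Rakefile they are unnecessary:

```ruby
# Rakefile sketch for automating dependency vendoring with bundler.
require 'rake'
include Rake::DSL

desc 'Cache every gem (including git- and path-sourced ones) into vendor/cache'
task :vendor do
  sh 'bundle package --all'
end

desc 'Install strictly from the local cache, never touching the network'
task :install do
  sh 'bundle install --local --deployment'
end
```

Run `rake vendor` once on a machine with network access, commit `vendor/cache`, and the deploy target only ever runs `rake install` against the local gem files.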
rvm, chruby, rbenv, etc. do not belong in a production environment. Even if you are deploying and co-hosting applications that require different versions of ruby, those tools still do not belong there. All of them are strictly for dev environments; binary shims and other hacks have no place in production. Ideally you have one user per application, with a profile that sets up PATH to point to the right version of ruby, which has been compiled and deployed wholesale ahead of time. This is actually quite simple and is in fact a one-time operation if you do it right and package the binary bits with an RPM or Debian package. Heck, even a tar file would work if you’re willing to have some extra deployment logic, and these days you can use any number of devops tools like chef and ansible to codify the initial production environment setup as well. Continue reading
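To make the per-user profile idea concrete: it amounts to a single line in the application user's profile. The version number and install prefix below are hypothetical; the only assumption is that the interpreter was compiled once and shipped to that prefix as a package.

```shell
# ~/.profile for the "myapp" user (version and prefix are illustrative).
# /opt/ruby-2.1.5 was compiled ahead of time and deployed as an RPM/deb.
export PATH="/opt/ruby-2.1.5/bin:$PATH"
```

No shims, no rehashing: the user logs in, `ruby` resolves to the packaged interpreter, and a different application user can point at a different prefix.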
Let’s say you are designing some kind of metric gathering and alerting API. How would you go about it? What would be the first thing you anchor the API around? The sources of those metrics and alerts, no? Well, you’d think that if you were a sane person, but somehow the folks at circonus managed to turn the whole thing on its head. Looking at their API you’d think they are actively trying to be hostile towards anyone trying to build automated tools around it. I’d forgive the monstrosity that is their API if their web UI was any good, but that damn thing is just as convoluted as the API. Continue reading
I ported one of my Ruby projects (pegrb) to Dart (pegdart) over the last few weekends and it was easier than I expected. The language overall is pleasant and the optional typing is ok, not great. I prefer TypeScript’s approach because it is easier to understand: when you compile with ‘--noImplicitAny’ you either get errors or you don’t. With Dart I have to hunt stuff down within the IDE, and if no types are assigned then by default the variable is ‘dynamic’, which makes it tricky to figure out how the types are flowing. I also like how easy it is to publish packages to the official repository, and in general the IDE makes it easy to navigate around the code. That’s one of the things I miss from my days working with Visual Studio. As far as server-side development goes, though, you’d be crazy to use Node.js when there’s Dart. Julia and Elixir are next on the list.
Too often I see software codebases with horribly convoluted architectures. Sometimes the choices are justified by legitimate business edge cases or backwards compatibility issues, but other times they simply reflect a lack of intelligence and discipline.
In algebraic topology there is a subfield known as obstruction theory. Obstruction theory is concerned with justifying why certain constructions are impossible by showing the existence of some other object that gets in the way. Software development needs such a theory. If there are no obstructions then there is no excuse for writing horrible software.
I wanted to get a taste of Node.js development on the server side by playing around with a very simple screen scraper. Writing vanilla JS is no fun, so I also wanted to combine it with TypeScript to see if it would be as nice for server-side development as it is for client-side development. The prognosis is quite dismal. Most of the Node.js APIs are almost actively resistant to being typed and you are forced to use ‘any’ all over the place. I like to use ‘tsc’ with ‘--noImplicitAny’, but try as I might I couldn’t get the types to flow through properly without major surgery on ‘.d.ts’ files, and instead of expending the effort I just fell back on ‘any’. The code for my little experiment can be found at davidk01/node-typescript-amazon-price-scraper.
Some time ago I read an article on proggit that made an analogy between compression and clean design. Basically, if you see a lot of repetition then you factor out that structure and re-use it, the same way a compression algorithm takes out common patterns and re-uses them to represent some blob of data more compactly. Sometimes, though, it is not the structure or the common patterns that make things hard to understand; instead it is too much generality and indirection. So how do you solve that problem? Continue reading
All software at the end of the day is written for some purpose. In order to fulfil its purpose the software must be deployed into an environment that can sustain it. Unfortunately this aspect of software development falls into the uncanny valley surrounded by development, infrastructure, and operations. The devops movement was motivated partly by the need to shine light into that uncanny valley. Initially the goal was just to reveal all the dark corners and flush out all the monsters that dwelt there, and once the shock had worn off people started to think of ways to keep the monsters at bay. Many practices developed for taming the complexity of software projects were brought into the uncanny valley, and currently the aspiring infrastructure and operations engineer has no shortage of tools to choose from to help keep a consistent fleet. In many ways things have gotten better, but in many other ways things are still pretty bad, and there is no tool out there that will save you.
If you are using the cloud for your infrastructure then you need to version it the same way you version your code and deployment artifacts. Lately I’ve been using packer to generate AMIs and I couldn’t be happier with how things are working out. I now have a consistent environment whenever I want to experiment with something, and when I discover something worthwhile that should be basic functionality in every environment, I check it into the repository that contains the packer templates and regenerate all the AMIs. This means I can go back to any point in time and spin up an environment exactly as it existed then. Infrastructure is no longer something special. It is now just another artifact in a software development pipeline and you should treat it that way.
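For illustration, a stripped-down packer template along these lines is what gets checked into the repository; the region, source AMI id, and provisioning commands below are placeholders, not my actual setup. Regenerating the image is just `packer build base.json`:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "base-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y build-essential"
    ]
  }]
}
```

Because the template is plain text under version control, every AMI is traceable back to the exact commit that produced it.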