Most software shops these days use GitHub or some other git-as-a-service variation and integrate with any number of CI-as-a-service pipelines for testing and general sanity checking. This is fine, except most places do it at the wrong level of granularity by using pull requests. Pull requests can work fine, but they don’t provide enough structure. For example, you can have two outstanding pull requests that have no conflicts and pass all CI checks in isolation, yet one breaks as soon as the other is merged. That may sound like a theoretical concern, but it can happen whenever the entire branching/merging workflow is based around pull requests. Over the years I’ve seen a few branching models in various source control systems. Most either overconstrain the workflow and add too many annoyances, or they don’t constrain enough, so there is no structure to exploit and build tools around. I think this happens because the technical people don’t think about what they want out of their source control policies, or they let non-technical people design them, and it turns into a bureaucracy because most people think of bureaucracies as structured systems. Continue reading
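The two-green-pull-requests failure mode is easiest to see with a contrived Ruby sketch (all names here are invented for illustration): one branch renames a method and updates every call site it can see, the other adds a brand-new call site against the old name. Neither change conflicts textually and each passes CI on its own, but the merged tree is broken:

```ruby
# master originally had the helper both branches were built on:
#
#   def render(user)
#     "Hello, #{user}"
#   end

# PR 1 (green on its own): renames render to render_greeting and fixes
# every existing call site it knows about.
def render_greeting(user)
  "Hello, #{user}"
end

# PR 2 (also green on its own, written against master): adds a brand-new
# call site that still uses the old name.
def welcome_banner(user)
  render(user).upcase
end

# Merged: no textual conflict, both CI runs were green, yet `render` no
# longer exists, so welcome_banner raises NoMethodError at runtime.
```

Nothing short of running CI against the *combination* of outstanding changes — which a pull-request-centric workflow doesn’t require — catches this.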
Do you have multiple server processes on the same host? Do you need to aggregate the log data from those processes? Do you need to rotate the aggregated log? If you answered yes to those questions then you need to stop logging to files. Instead of logging to a file you should be logging to a Unix domain socket and letting whatever is on the other end handle all the aggregation and rotation issues. I started looking around for examples of this, but these days everything related to logging is built for the enterprise. The actual skeleton of what all those enterprise systems are doing is quite simple. In fact it is so simple that you can do it in less than 30 lines of code in most high-level languages. Here’s the skeleton for a logging server in Ruby: Continue reading
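The post’s own skeleton is behind the link; in its spirit, here is a minimal sketch of what such a server might look like (socket and log paths are assumptions): accept connections on a Unix domain socket and append every incoming line, timestamped, to a single aggregated sink.

```ruby
require 'socket'
require 'time'

# Skeleton logging server: each client process connects to the Unix domain
# socket and writes plain lines; the server timestamps and appends them to
# one aggregated sink, so rotation is a single-file problem handled here.
def run_log_server(socket_path, sink)
  File.unlink(socket_path) if File.exist?(socket_path)
  server = UNIXServer.new(socket_path)
  loop do
    Thread.new(server.accept) do |client|
      while (line = client.gets)
        sink.puts("#{Time.now.utc.iso8601} #{line.chomp}")
      end
      client.close
    end
  end
end

# Production use would look something like (paths are made up):
#   run_log_server('/var/run/myapp/log.sock',
#                  File.open('/var/log/myapp/aggregated.log', 'a'))
```

Each application then just opens `UNIXSocket.new(socket_path)` and writes lines instead of opening its own log file.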
Given that you can now deploy a Ruby interpreter that won’t make experienced system administrators cry, the next logical step is to do the same for your Ruby applications. Fortunately Bundler takes us almost all the way there with its ability to vendor dependencies. All you need to do is automate the process with a few rake tasks and you are pretty much good to go. Continue reading
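The actual rake tasks are behind the link; a sketch of what they might look like follows. Task names, the app name, and the tarball layout are all assumptions; the Bundler commands (`bundle package --all`, `bundle install --deployment --local`) are the era-appropriate way to cache gems into `vendor/cache` and install strictly from that cache.

```ruby
require 'rake'
extend Rake::DSL # so this sketch also loads as a plain Ruby script

APP_NAME = 'myapp' # assumed application name

desc 'Vendor every gem dependency into vendor/cache and vendor/bundle'
task :vendor do
  sh 'bundle package --all'                # cache .gem files under vendor/cache
  sh 'bundle install --deployment --local' # install from the local cache only
end

desc 'Build a deployable tarball of the app plus its vendored gems'
task :package => :vendor do
  sh "tar czf #{APP_NAME}.tar.gz --exclude=#{APP_NAME}.tar.gz ."
end
```

With something like this, `rake package` on a build box produces an artifact that needs nothing from rubygems.org at deploy time.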
rvm, chruby, rbenv, etc. do not belong in a production environment. Even if you are deploying and co-hosting applications that require different versions of Ruby, those tools still do not belong in a production environment. All those tools are strictly for dev environments; binary shims and other hacks have no place in production. Ideally you have one user per application, with the proper profile for setting up PATH to point to the right version of Ruby, which has been compiled and deployed wholesale ahead of time. This is actually quite simple, and in fact a one-time operation if you do it right and package the binary bits with an RPM or Debian package. Heck, even a tar file would work if you’re willing to have some extra deployment logic, and these days you can use any number of DevOps tools like Chef and Ansible to codify the initial production environment setup as well. Continue reading
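The per-user profile the post describes might look something like this (the Ruby version, install prefix, and gem path are all made up for illustration):

```shell
# Hypothetical ~/.profile for a dedicated per-application user.
# The Ruby under /opt was compiled ahead of time and deployed wholesale
# (RPM, Debian package, or a plain tarball) — no shims, no version switcher.
export PATH="/opt/ruby-2.1.5/bin:$PATH"
export GEM_HOME="$HOME/.gems"
export PATH="$GEM_HOME/bin:$PATH"
```

Each co-hosted application gets its own user with its own profile pointing at its own prebuilt Ruby, so "multiple Ruby versions on one host" reduces to plain directory layout.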
Let’s say you are designing some kind of metric-gathering and alerting API. How would you go about that? What would be the first thing you anchor your API around? Maybe the sources of those metrics and alerts, no? Well, you’d think that if you were a sane person, but somehow the folks at Circonus managed to turn the whole thing on its head. Looking at their API you’d think they are actively trying to be hostile towards anyone trying to create automated tools around it. I’d forgive the monstrosity that is their API if their web UI were any good, but that damn thing is just as convoluted as the API. Continue reading
I ported one of my Ruby projects (pegrb) to Dart (pegdart) over the last few weekends and it was easier than I expected. The language overall is pleasant and the optional typing is OK, not great. I prefer TypeScript’s approach because it is easier to understand: when you compile with ‘--noImplicitAny’ you either get errors or you don’t. With Dart I have to hunt things down within the IDE, and if no types are assigned then the variable defaults to ‘dynamic’, which makes it tricky to figure out how the types are flowing. I also like how easy it is to publish stuff to the official package repository, and in general the IDE makes it easy to navigate around the code. That’s one of the things I miss from my days working with Visual Studio. As far as server-side development goes, though, you’d be crazy to use Node.js when there’s Dart. Julia and Elixir are next on the list.
Too often I see software codebases with horribly convoluted architectures. Sometimes the choices are justified by legitimate business edge cases or backwards-compatibility issues, but other times they come down to a plain lack of intelligence and discipline.
In algebraic topology there is a subfield known as obstruction theory. Obstruction theory is concerned with justifying why certain constructions are impossible by showing the existence of some other object that gets in the way. Software development needs such a theory. If there are no obstructions then there is no excuse for writing horrible software.
I wanted to get a taste of Node.js development on the server side by playing around with a very simple screen scraper. Writing vanilla JS is no fun, so I also wanted to combine it with TypeScript to see if it would be as nice for server-side development as it is for client-side development. The prognosis is quite dismal. Most of the Node.js APIs are almost actively resistant to being typed, and you are forced to use ‘any’ all over the place. I like to use ‘tsc’ with ‘--noImplicitAny’, but try as I might I couldn’t get the types to flow through properly without major surgery on ‘.d.ts’ files, so instead of expending the effort I just fell back on ‘any’. The code for my little experiment can be found at davidk01/node-typescript-amazon-price-scraper.
Some time ago I read an article on proggit that drew an analogy between compression and clean design. Basically, if you see a lot of repetition then you factor out that structure and re-use it, the same way a compression algorithm takes out common patterns and re-uses them to represent some blob of data more compactly. Sometimes, though, it is not the structure or the common patterns that make things hard to understand; it is too much generality and indirection. So how do you solve that problem? Continue reading
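The compression half of the analogy can be made concrete with a toy Ruby example (the field names are invented): the same strip/validate/downcase pattern repeated verbatim is the "uncompressed" form, and factoring it out once is what a compressor does when it extracts a common pattern. The harder question the post is actually after — when such factoring *adds* confusing indirection — is behind the link.

```ruby
# "Uncompressed": two near-identical methods repeating the same pattern.
def clean_email(raw)
  value = raw.strip.downcase
  raise ArgumentError, 'blank email' if value.empty?
  value
end

def clean_username(raw)
  value = raw.strip.downcase
  raise ArgumentError, 'blank username' if value.empty?
  value
end

# "Compressed": the common pattern factored out once and re-used.
def clean_field(raw, label)
  value = raw.strip.downcase
  raise ArgumentError, "blank #{label}" if value.empty?
  value
end
```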