The cost/benefit tradeoff just can’t be solved for formal methods. There are far cheaper ways (in time, effort, and money) to write code with a low enough incidence of bugs, and they are more practical in almost all cases. Not to mention that if you are manually implementing the code, someone can still introduce bugs; if you’re automatically generating the code, there can be bugs in the code generator; and if you’re using an automated verifier, there could be bugs in that. At some point a human had to write some code that can’t be proven to be “correct”.
Formal methods seem great for mathematicians and lousy for everyone else who wants to build something productive in a reasonable time frame.
I call this a “goal post” argument: the strategy is to draw an arbitrary boundary that something must be beyond to be valid. In this case the boundary is about “trust”, but if you break the argument down you notice it makes absolutely no sense. The author of that comment clearly “trusts” the hardware/CPU running their software. But if they trust the CPU, then they implicitly trust whatever process created it, and that process was itself more people and software. So the whole argument collapses and they must trust the software. It’s circular reasoning in another guise.
Now that I’m one of the organizers of the formal methods meetup, it has become a bit clearer why most programmers are not familiar with the general landscape of formal methods. It’s mostly a lack of awareness about what is available out there, so to help fill that gap this post is meant to serve as a brief introduction to some of the vocabulary and high-level concepts of formal methods.
This post consists of three main sections. The first covers the general concepts and high-level classification(s) of formal methods. The second explains why we should care about formal methods in software engineering. The third covers concrete applications of formal methods and some pointers for how to apply them in day-to-day software engineering.
Here are rules for a new game. Here is how it compares to some old games and some of the benefits and drawbacks. Here is why you should play and other similar games that you might like.
If you try to change the rules then existing players will treat you with extreme prejudice and strongly (potentially violently) recommend you play some other game.
All games are inherently mutable and meta-games are what keep them “stable”. Somewhat unsurprisingly, no one understands the meta-game logic and all players follow meta-game rules implicitly. Meta-game rules are only ever broken by accident. Cosmic bit flips, if you will, are inevitable.
There is strong suspicion there is more than one meta-game.
There is also strong suspicion that meta-game rules are immutable, so changing meta-games requires simultaneously changing all the rules, which makes it a much bigger, discontinuous change than changing a single rule in a regular game.
In the presence of rigid meta-games it’s easy to imagine why regular games would be an obvious way to experiment with potential meta-game swaps. Meta-players would want reasonable assurance their game changes were “beneficial” to them according to some metric.
Most reasonable game players eventually conclude meta-games and meta-players must exist but it’s hard to know for sure because some reasonable players are willing to admit meta-games without meta-players.
Inspired by https://drewdevault.com/2019/02/18/Generics-arent-ready-for-Go.html.
In the distance, a gradual roar begins to grow in volume. A dust cloud is visible over the horizon. As it nears, the shouts of the oncoming angry mob can be heard. Suddenly, it stops, and a brief silence ensues. Then the air is filled with the clacking of hundreds of keyboards, angrily typing their owners’ opinions about functions, calling conventions, and Assembly. The clans of Algol, Cobol, Fortran, C, Forth, Lisp, and more – usually mortal enemies – have combined forces to fight in what may become one of the greatest flamewars of our time. And none of them read more than the title of this article before writing their comment.
This post has a few goals. One is to outline how to think about a simple imperative process in a declarative language/system like Alloy. Another is to make a case for why this kind of formalization is a worthwhile exercise for software engineers.
Disclaimer: To get the most out of this post you need some familiarity with Alloy syntax; otherwise, most of it will probably not make much sense. I’ll try to explain things as I go along but promise no mind-blowing revelations. As some old person used to say, “There are no royal roads”.
This is mostly advice I wish someone had given me when I was starting out. It probably won’t make much sense until you have dealt with enough code to be sick of it. The usual disclaimers about ranting, warranties, and being a charitable audience stand.
One really cool trick I’ve discovered over the years is gamifying the learning process. Instead of doing a linear pass through some learning material, I approach it in a non-linear fashion and take various detours by asking dumb questions and following them to their logical conclusion. The downside is that it’s somewhat more time-intensive than a linear pass, but the upside is that it’s more fun and I seem to retain the information better.
If you look at a build process abstractly, it is basically a function that takes some files as inputs and produces some files as outputs. We can peek into this input/output process with strace by invoking the build script under strace and asking it to log all file operations. Once we have recovered the inputs and outputs, we can retrofit a caching mechanism on top of the build process by hashing the inputs and using that hash as a key for saving the outputs.
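The caching half of that idea can be sketched in a few lines of Python. This is a minimal illustration, assuming the input and output file lists have already been recovered (e.g. from an strace log); the `cached_build` helper and the cache directory layout are made up for this example:

```python
import hashlib
import os
import shutil


def inputs_key(input_paths):
    """Hash the contents of all input files into a single cache key."""
    h = hashlib.sha256()
    for path in sorted(input_paths):  # sort so the key is deterministic
        h.update(path.encode())
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()


def cached_build(input_paths, output_paths, build, cache_dir="cache"):
    """Run `build()` only when the inputs' hash has not been seen before."""
    key = inputs_key(input_paths)
    entry = os.path.join(cache_dir, key)
    if os.path.isdir(entry):
        # Cache hit: restore the previously saved outputs instead of building.
        for out in output_paths:
            shutil.copy(os.path.join(entry, os.path.basename(out)), out)
        return
    # Cache miss: run the real build, then save its outputs under the key.
    build()
    os.makedirs(entry, exist_ok=True)
    for out in output_paths:
        shutil.copy(out, os.path.join(entry, os.path.basename(out)))
```

As long as the recovered input list is complete, an unchanged set of inputs hashes to the same key and the build function is skipped entirely.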
To make things more concrete, I’m going to use a simple script as a stand-in for a build process.
Most people agree that code is a liability. The more code you have, the worse things get, because more code means more entropy and more problems from emergent and unanticipated behavior. So most people agree that less code is better. In this sense TDD is doing the wrong thing, because TDD is just more code. Formal methods, on the other hand, are not code: they de-risk the code portfolio by offloading parts that would otherwise be code into something that is not code and hence not a liability.
So really what I’m saying is: if you want to reduce the risk associated with code, then invest some time in learning formal methods and how to use them, instead of code, to specify and validate systems.
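As a tiny illustration of what “offloading into something that is not code” can look like, here is a sketch in Lean 4 (chosen only as one example of a proof assistant; the same idea applies to Alloy, TLA+, and friends). Instead of writing a pile of unit tests that reverse a few sample lists, you state the property once and discharge it for all lists:

```lean
-- Property: reversing a list twice yields the original list.
-- One theorem covers every possible input, where unit tests cover a handful.
theorem rev_rev (α : Type) (l : List α) : l.reverse.reverse = l :=
  List.reverse_reverse l
```

The specification lives outside the shipped code, so it adds no runtime entropy, yet it rules out a whole class of bugs that tests could only sample.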