Let me know if this sounds familiar. You work at a software company that uses a 3/3.5-tier architecture: there are frontend servers, backend servers, some batch work servers, and a database or two. If that doesn't sound familiar then maybe you have never worked with web applications, because it is the predominant architecture at most web shops. In fact, even with all the microservice/devops/container/serverless hype, no one has managed to displace the 3/3.5-tier architecture. There is always a frontend, there is always a backend, there is always a database. The only differences are in the domain logic and implementation details, like which language is used for the backend components and which database holds the data. Let's now get into some more concrete but still abstract details, because I promise there is a point at the end of this.
Let's say you have 10 frontend servers and 10 backend servers (we'll ignore the database and any other details for the time being). Let's also suppose that, like at every other place, the traffic patterns are periodic, so those 20 servers are sized for peak load instead of average load. Say the average load across the servers is 50% (even this is generous; the real number is probably closer to 20%, because that's the average I've seen at most places). This means half of the capacity is essentially idling, so on average you really need only 5 frontend servers and 5 backend servers. In this imaginary but all-too-real configuration we are paying for 10 extra servers 24 hours a day, 7 days a week. That's a concrete number that any CTO and engineer should be able to understand. That's literal real money being wasted every day because, for whatever reason, the architecture precludes automatically scaling things up and down.
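The arithmetic above fits in a few lines of code. Here is a minimal sketch using the same numbers; the per-server hourly rate is a hypothetical placeholder, not a real cloud price:

```python
# Back-of-envelope cost of overprovisioning, using the scenario's numbers.
frontend_servers = 10
backend_servers = 10
average_utilization = 0.5  # 50% average load across the fleet

total_servers = frontend_servers + backend_servers
# At 50% average load, roughly half the fleet's capacity sits idle.
effectively_needed = total_servers * average_utilization
wasted_servers = total_servers - effectively_needed  # 10 idle servers

hourly_rate = 0.10  # hypothetical $/server-hour, swap in your own number
wasted_per_year = wasted_servers * hourly_rate * 24 * 365
print(f"Idle capacity: {wasted_servers:.0f} servers")
print(f"Wasted spend per year: ${wasted_per_year:,.2f}")
```

Plug in your own fleet size, utilization, and rates; the point is that the waste is a number you can compute, not a vague feeling.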
The sensible engineering solution in the above scenario is to figure out what is blocking a reduction in server count and work on removing those obstructions, but the reality is that most engineering organizations will not bother. They won't fix the architecture because changing the status quo is painful. It isn't painful in terms of engineering hours; it is painful in terms of the retooling and rethinking required to reimagine the architecture, so most engineering organizations just continuously eat the cost of those 10 extra servers.
My perspective is not unique, but there doesn't seem to be much critical analysis of this issue. The only thing I could find was "Why bad software architecture is easy to monetize", which uses housing as an analogy and outlines why one side of the market has an inherent interest in producing bad software and why the other side keeps falling for the trick.