Software Engineering: The Monolith Argument

Once in a while, there is a new fad. In everything. Software Engineering included. For a while, these new things appear to be the way to go. However, more often than not, the only reason the new grass appears greener is that we have grown bored of looking at the old pasture and want solutions to the moles and mountains on our side of the fence. We refuse to believe that those problems exist because of us, not because of problems with the philosophy of engineering.

My cudgels are up today against the family of “piece-meal engineering” philosophies. These are known by several names, the most common being “Service-oriented Architecture” (SOA). This blog post is directly triggered by this post on LinkedIn by a good friend and long-time colleague, where he talks about treating the components of software as bricks in a wall or the parts of a truck. He posits that this sort of thinking will lead to better experiences when smaller components of a larger system fail, because consumers do not have to wait for fixes to the whole thing.

I disagree. Vehemently. The answer to the “why?” is the crux of this post. I always say go back into history to understand why something exists. Be it an absurd tradition or a piece of science. The reason it exists is somewhere in the past. The wheel was invented when people got tired of lugging around heavy blocks of stone on their backs, not because you needed a better way to drive your luxury car.

“SOA” was first conceived in the early 1990s. At the time, all software was monolithic. But then, that software was singularly purposeful. The reason its monolithic nature had become a problem was the abrupt rise of three things:

  1. The Internet
  2. GUI operating systems, led by Windows 3.x and then Windows 95
  3. Programming languages

The 90s were also a period of intensive bug fixing. Yes, for the “Y2K problem”. Programmers and project managers felt that things were starting to take too long to do because of the humongous amount of code, logic and data to sift through and revalidate before changes were applied or fixes approved. This led to the clamor to make software components smaller and modular. The envisaged advantages were:

  • Shorter release cycles
  • Increased manageability
  • Ability to enhance functionality better

(Keep that last bullet in mind; I am coming to it in a moment.)

This led directly to the “SOA” principle. Giant books, thousands of hours of tutorial content and compendiums of interview questions were created around it in a matter of a couple of years. Good money for a whole bunch of people. Everywhere you went in the late 90s or early 2000s, people talked SOA. Even those that had no clue what it was. From accountants sitting among dusty ledgers to CXOs. Yup, 90% of them had no clue what SOA meant in reality. Yet, it was a mandate. Bills would not be paid unless the consultant had implemented the solution using “SOA”. People did not get hired if they did not know “SOA”. It was a magic potion.

So-called project and product architects at even large companies embraced this idea. They happily broke up otherwise excellent pieces of software into chunks and dreamed up spurious “interfaces” between them. This led to an even weirder and more dangerous way of writing code: the “interface pattern”. People now swore by how quickly they could write up these disconnected parts of code.

Question. Did it give us any of those benefits?

Yes, it drastically reduced engineering cycle time. When I only need to worry about 600 lines of code rather than 60K, I can release stuff to you faster. It also increased efficiency FOR A WHILE because people needed to be accountable for smaller bits of functionality and they did a great job of doing it well FOR A WHILE.

The advantages stopped there.

The problem began when people tried to embrace other trends and philosophies that started appearing in the late 2000s:

  • Virtualization
  • Web services
  • Distributed data & computing
  • Relocating business logic

Teams quickly ran up against concrete walls. Let’s examine a few.

Not easy to virtualize or move servers

When you have a monolith and you want to move servers or virtualize its host, it is easy to do. There is only ONE piece to worry about. When your application exists as a thousand pieces, you need to worry about every single one of them. As applications started to span the globe, you also needed to consider where on Earth the app was running from. And when that happened, you needed to worry about the location of the pieces it talks to… and the pieces they talk to… and the pieces those talk to… and… you get the idea! The result was that people tried to sweep the problem under the carpet by force-placing multiple “services” [that were otherwise meant to live independently] on the same hosts.

Not easy to enhance

When you need to change business logic in a software component that is used in its discrete/service form, you NEED to worry about its consumers not making the changes in lock-step with you. In fact, most of them will never want your changes [or may even be mad at you for making such changes at your end]. It is also a well-established FACT of software engineering that when a bug exists for a long time, other developers learn to live with it and write code to “support” that bug in their component’s workflows. When you try to remove it, you will end up breaking more things than you fix.

This is why people who write things like web services tend to create “versions” of code: you typically need to tell the web service which version of the service you are trying to call. This might result in completely different data, changes in data format, changes in the fields returned, or any number of other behavioral differences.

What happens when you create a version is that all your previous versions are still running out there. THEIR developers and ops people need to watch for all those versions as well and keep them up and alive. It becomes a nightmare for everyone that has to live with it after you are done and long gone.
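To make the versioning treadmill concrete, here is a minimal sketch of two versions of a “flights” endpoint living side by side. It uses Flask purely for illustration, and the routes and field names are hypothetical, not taken from any real API:

```python
# Minimal sketch of a versioned web service (illustrative only).
from flask import Flask, jsonify

app = Flask(__name__)

# Pretend this is the shared business logic both versions sit on top of.
FLIGHTS = [{"origin": "BLR", "destination": "SIN", "fare_inr": 18500}]

@app.route("/api/v1/flights")
def flights_v1():
    # v1 promised a field called "fare"; its consumers still expect it.
    return jsonify([{"origin": f["origin"],
                     "destination": f["destination"],
                     "fare": f["fare_inr"]} for f in FLIGHTS])

@app.route("/api/v2/flights")
def flights_v2():
    # v2 renamed the field and added a currency. v1 cannot be retired,
    # because nobody upgrades in lock-step.
    return jsonify([{"origin": f["origin"],
                     "destination": f["destination"],
                     "fare_amount": f["fare_inr"],
                     "currency": "INR"} for f in FLIGHTS])

if __name__ == "__main__":
    app.run(port=8080)
```

Neither route can ever be deleted without breaking somebody, which is exactly the maintenance burden described above.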

So are there no use cases?

There are. Definitely. If you do one thing and one thing only, and do it well, then that is a great case for a web service. But don’t be fooled. Think of something like an OAuth service. There are tons of forward-compatibility, backward-compatibility, standard, non-standard, quirky, and other scenarios that the service will need to support. Over a period of time, a web service that started out as 100 lines of code will become a behemoth needing its own sub-services!

When to use Microservices architecture?

I prefer to call “SOA” “Microservices architecture” (“mSA”), because that helps define its purpose to me: break up applications into component “services”. Unlike the other, more common name, “mSA” does not arm-twist me into evolving the whole application around services. Instead, it lets me write “chunky” applications where some parts are rambling monoliths and others follow the service/consumer model.

Don’t lock yourself into writing applications that are a collection of services and service consumers. It is incredibly hard to architect that correctly, implement it with precision and manage it in the long term.

I suppose that gave you an idea already. I use mSA when I need to have parts of my application accessible as services, or when I need to perform things that are genuinely asynchronous and distinct from the rest of the application. Typically, these services will also be reusable and reused among the other applications I build. Here are some examples (there is a small sketch of one after this list):

  • Authentication workflow
  • On-boarding users and customers workflow
  • Data storage and retrieval
  • Payment processing workflow
  • Data requests – like when you want to download your historical profile from a site
  • Email / notification sending
  • Automated incoming email processing

and so on…
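To illustrate the “asynchronous and distinct” kind of service, here is a minimal sketch of a notification sender built on the Python standard library. The endpoint, payload fields and the delivery stub are hypothetical; a real one would hand off to an SMTP relay or a push gateway:

```python
# Minimal sketch of a small, stable, asynchronous notification service.
import json
import queue
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

OUTBOX: "queue.Queue[dict]" = queue.Queue()

def delivery_worker() -> None:
    # Runs on the service's own time, independent of any consumer.
    while True:
        message = OUTBOX.get()
        print(f"delivering to {message['to']}: {message['subject']}")
        OUTBOX.task_done()

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self) -> None:
        if self.path != "/notify":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        OUTBOX.put({"to": payload["to"], "subject": payload["subject"]})
        # Accept and return immediately; delivery happens asynchronously.
        self.send_response(202)
        self.end_headers()

if __name__ == "__main__":
    threading.Thread(target=delivery_worker, daemon=True).start()
    HTTPServer(("", 8081), NotificationHandler).serve_forever()
```

The point is the shape: a tiny, stable contract (POST to /notify, get a 202 back), with everything slow or failure-prone happening on the service’s own schedule.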

You will notice that each of the above is a fairly stable function. That is, barring bug fixes, they are going to remain unchanged for long periods of time. In fact, there is a micro-service of mine that has been running untouched for 15 years now! Now you know what I mean when I say that microservices are really services.

Microservices Consumption Gotchas

Once you refactor a piece of functionality to live outside your main application, you will immediately jump into a barrel of issues. These are:

  1. Validation. You need to validate all your input and ensure that your output is validated as well. Even silly things like fixing a typo in an output data field can get nightmarish if you don’t catch it before its first client app is written. There is a heavily used ticketing API out there [not mine!] that has been spelling “outbound flights” as “otubound” for over 10 years now. They caught the typo a long time ago, but fixing it now means fixing hundreds of thousands of travel sites.
  2. Sanitize messages. When you throw things like exceptions around inside your main app, you can write whatever you want in your exception messages. Some internal exception messages may read downright ugly and unparliamentary! But the minute you start sending those texts outside of your app, you need to make them not only tone-neutral but also restrict the amount of internal state information you give away [for example, you can no longer dump your internal variables into the exception].
  3. Create your own client libraries to talk to your own services. Otherwise, you will end up repeating [on average] 500 lines of code per consumer of your service. A short sketch covering all three of these points follows this list.
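Here is a minimal sketch of what such a client library might look like for the hypothetical notification service sketched earlier. It validates input before anything leaves the process, checks the service’s response, and raises only sanitized errors; all names are illustrative:

```python
# Minimal sketch of a hand-rolled client library covering the three gotchas.
import json
import urllib.error
import urllib.request

class NotificationError(Exception):
    """Sanitized, consumer-safe error raised by the client."""

class NotificationClient:
    def __init__(self, base_url: str = "http://localhost:8081"):
        self.base_url = base_url.rstrip("/")

    def notify(self, to: str, subject: str) -> None:
        # 1. Validate input before it ever leaves this process.
        if "@" not in to:
            raise NotificationError("recipient must be an email address")
        if not subject.strip():
            raise NotificationError("subject must not be empty")

        body = json.dumps({"to": to, "subject": subject}).encode()
        request = urllib.request.Request(
            f"{self.base_url}/notify", data=body,
            headers={"Content-Type": "application/json"}, method="POST")
        try:
            with urllib.request.urlopen(request, timeout=5) as response:
                # 1 (again). Validate the service's output as well.
                if response.status != 202:
                    raise NotificationError("notification was not accepted")
        except (urllib.error.URLError, OSError):
            # 2. Sanitized message: no stack traces, hostnames or internal
            #    state leak to the consumer.
            raise NotificationError("notification service is unreachable") from None

# 3. Every consumer reuses this class instead of repeating the same
#    request/validation/error-handling plumbing.
# NotificationClient().notify("user@example.com", "Welcome aboard")
```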

The bottom line

Don’t get sucked into over-using something because it seems to be “cool” to do, or because your manager insists you need to do it that way. Chances are, the moment your engineering team is done with it, it becomes a nightmare for the teams that have to maintain it and keep it alive.
