Kubernetes: The Next COBOL?
GS Low
Sun Jun 20 2021
Recently I read this article discussing why COBOL is still around and facing a talent shortage today. The author suggests that COBOL is so hard to replace because it is infrastructure, and goes on to ask whether today’s trend towards microservices and infrastructure-as-code will lead to a shortage of Kubernetes talent in the future, after Kubernetes has been replaced by something else. He then pushes the question further: what about CI/CD tools? What about even tools like git?
Every technology choice we make today is sure to become legacy in the future. The question is whether we can replace it, how, and at what cost. I’ve seen many customer requests over the years to replace old systems. Usually the apps are rewritten from scratch, but with much difficulty, because the documentation is lost, or outdated and incorrect. Even if we can get access to the old source code, it could be written in an older paradigm, the “state of the art” in those days, and if you don’t have seniors on the team (more than five years in) who used to code “the old way”, understanding that code will be a problem. In any case, nobody wants to read legacy code, because it tends to be badly written: misleading comments, badly named functions and variables, convoluted paths leading nowhere, and bugs that have long since melded into the “business logic”. To raise the challenge further, the team will also need to fit the application into a new architecture or paradigm. Everyone is asking to transform perceived monoliths into microservices, without a clear justification for why they want microservices in the first place.
Let’s go back to the original hypothesis: that infrastructure is hard to replace. Yes, today we have Kubernetes, which everyone is trying to deploy, even for small systems that don’t need it. Cloud vendors are already providing something simpler, basically still Kubernetes under the hood, locking you further into their cloud offerings (not that I’m dismissing their value).
I don’t actually see that as a problem. In theory, infrastructure should be easy to re-create, if the architecture documentation is clear. The problem is the documentation, and the popular trend today is “the code is the documentation”. So will anyone be able to read and understand the infrastructure-as-code scripts 20 years into the future, assuming that Kubernetes is still with (some of) us? To clarify, I’m not suggesting that we maintain a separate set of documentation; we’ve all seen how that approach turns out. And I would argue that there will be someone who can read the Kubernetes scripts, as long as you are still deploying new applications to Kubernetes. The problem comes when you have a working Kubernetes cluster that doesn’t really need to change, while the whole world races past you to embrace some other new infrastructure technology.
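To make that concrete, here is a minimal sketch of the kind of “documentation” I mean: a Kubernetes Deployment manifest for a hypothetical legacy-app (the names and image are made up, not from any real system). It reads fluently today, but only because everyone is still writing these:

```yaml
# Hypothetical manifest: run three replicas of a made-up "legacy-app"
# container and expose its HTTP port. This *is* the documentation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      containers:
        - name: legacy-app
          image: registry.example.com/legacy-app:1.0.0
          ports:
            - containerPort: 8080
```

Twenty years from now, will “apiVersion: apps/v1” mean anything to the engineer who inherits this file?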
Well, since infrastructure is such a big problem, let’s make it disappear; then the problem is solved, right? Let’s go for “functions” or “serverless” and let the cloud handle it for us. Sure, if that’s what you want. You can design your app as little fragments floating around and linked together in a pool (“nano” services?) built on someone else’s opaque underlying infrastructure, and hope that this someone doesn’t one day regret to inform you that they’ve decided to retire the service (again, not dismissing their value).
Going back to the article, the author also mentions CI/CD pipelines and git. Just like code and tests, CI/CD pipelines require people to maintain them. The current trend is to hire DevOps engineers to manage them centrally, but I’m already seeing a shift towards a more decentralised approach that puts the power (and responsibility) back into the application teams. Whoever maintains your CI/CD pipelines today, know that those pipelines become legacy the moment your team stops maintaining them.
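As an illustration, here is a hedged sketch of what such a pipeline commonly looks like today, a hypothetical GitHub Actions workflow (the repo, toolchain, and versions are my assumptions, not from the article):

```yaml
# .github/workflows/build.yml (hypothetical)
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest   # someone must track runner images...
    steps:
      - uses: actions/checkout@v2    # ...and action versions...
      - uses: actions/setup-node@v2  # ...and toolchains...
        with:
          node-version: '14'
      - run: npm ci    # install dependencies from the lockfile
      - run: npm test  # run the test suite
```

A dozen lines, yet every pinned version in it will rot unless someone keeps tending to it.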
As for SCMs like git, that’s the least of my worries. Git is the most popular SCM now, and for good reasons. The “legacy” ones, such as SVN and CVS, are still around, and there are also competing commercial products. You can usually migrate your commit history from one SCM to another (I’ve done SVN to git before), though you might lose some information along the way. And yes, you will also need to change your CI/CD pipelines, and perhaps your team workflows as well.
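For what it’s worth, a minimal sketch of such a migration using git-svn (the URL, layout, and authors file below are placeholders, not from any real project):

```sh
# Hypothetical one-way import of an SVN repository into git.
# --stdlayout assumes the conventional trunk/branches/tags layout;
# --authors-file maps SVN usernames to git author identities.
git svn clone https://svn.example.com/myproject \
    --stdlayout \
    --authors-file=authors.txt \
    myproject-git
```

Things like SVN revision properties and per-directory permissions don’t map cleanly onto git, which is the kind of information loss I mean.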
COBOL is still around because the COBOL apps work, and we shouldn’t fix things that are working well, right? But we have also been oblivious to the fact that the folks maintaining them are getting older (and will one day pass on), and that the young folks coming in gain less competency (because there isn’t much to maintain when nothing is broken). Sometimes we have to change even when things are not broken.
At the end of the day, it is all about risk and change management. Your technology choices today will become legacy in the future, and the future arrives faster nowadays. You must plan to replace them. You must have people who understand the technologies, and the risks and costs of changing (or not changing). (You also need these same people to not chase after everything that is trendy.)
Also, change the way your procurement department functions (they are bad at understanding change), and listen to your developers on the ground.