In a perfect world, the contents of this section would belong at the end of the article, as part of a conclusion. But a key theme of this article is that there is a lot of unintentional imperfection in the world, and one of those imperfections is a tendency for some to draw conclusions early, so I will start with the end and see if we can meet in the middle.
There will be people who strongly disagree with this article. There will be others who share the sense of epiphany I experienced formulating the outline, and probably more than a few who came to the same conclusion before this article was written.
For everyone else, I ask that you look at your own enterprise and decide for yourself if the architectural decisions that drive your IT solution are based on corporate culture more than the best way of providing business value.
The most commonly stated reasons to migrate
Skim the thousands of recent articles and community postings about enterprises adopting a new architecture or process (microservices and DevOps are the buzzwords at the time of this writing, and I expect those will change several times before this article is no longer relevant) and the driver behind the move will generally translate with ease to one of the following:
- Improved operational efficiency
- Higher reliability
- Faster time to market
- Better support of business needs (arguably redundant to the first three items)
All those are excellent reasons to change how things are done. Moving from the current way of doing things to the new way of doing things will definitely yield those benefits in many (though assuredly not all) enterprises. I’ve been in this industry for more than 25 years and here are some of the shifts that I have seen made for the exact same reasons:
- Single-tier to two-tier architecture
- Two-tier to n-tier architecture
- Fat client to thin client
- Single server to redundant services
- Redundant services to remote procedure calls (RPC)
- RPC to web services
- Thin client to app
Every one of the above-mentioned shifts resulted in some level of success. And each, except for the last (which I include because irony fascinates me), reflects a cultural shift toward distribution of overall responsibility, isolation of specific responsibility, and increased specialization. I can already hear the exclamations of “There is an increase in demand for full-stack developers, which refutes this observation!” I agree that more companies are looking for and hiring full-stack developers. I have also observed, with some delightful exceptions, that once those people are hired they are pushed into some type of specialization within a couple of years (often less).
The most frequent real reasons a change is needed
There was a behavioral study done almost 100 years ago that resulted in a concept known as the Hawthorne effect, in which changes in worker conditions increased productivity because of the expectation of improvement rather than the change itself (my spin on the conclusions, many of which are still being debated). When an enterprise architecture or IT process is changed, the result is similar.
There are many common examples of why a change is needed to achieve improvements, regardless of what that change is. Here are some that I have seen from working with dozens of different enterprises in several different industries.
The person who wrote that doesn’t work here anymore
My first few IT-related roles were as an FTE and a consultant for companies that were small enough that I was the sole IT resource involved. While I’m proud of the fact that some of my earliest applications are still in use more than two decades later, it has dawned on me while writing this article that it may be simply because I had not yet learned to properly document applications and no one has been able to make any changes for fear of putting the company out of business.
I learned about the value of good documentation when I did my first project for a large multinational manufacturing company, still as an independent consultant. I knew that I would be leaving these folks on their own with the application once my part was done, and that people hired after the project was complete would inherit the code and functionality without the benefit of any knowledge transfer meetings. At that time, I was not unique in providing this service as part of my work. What I have learned since is that, like myself when I first started, many full-time employees either see no need to document their work or don’t know how to.
In later years, many consultants either reduced or completely stopped providing documentation as a way to ensure more work or (to be fair) decrease costs in an increasingly competitive market.
The string that broke the camel’s backup
Even when best practices for simplicity and reuse are followed in the first release of an application, by the nth release/enhancement/bug fix the application can reach a state where any but the most minor modifications result in something else breaking. Did the team’s skill atrophy, or is this the result of a less-capable team owning maintenance? No.
Fragility creeps into solutions over time because technical debt piles up. If “technical debt” is a new term for you, I strongly suggest reading up a bit on it. In short, like credit card debt, if it isn’t dealt with early and often it will grow until more effort is allocated to dealing with the problems than to the solutions that caused them.
A culture of identifying, documenting, and correcting potential issues and enhancements throughout the life cycle of projects will extend the longevity of an application’s value and reduce IT costs by minimizing the frequency of technology refreshes driven by failing systems rather than by the addition of business value.
String theory is an antipattern
Another heading could be “Spaghetti and hairballs.” This driver to move is similar to the previously described scenario except it occurs at a lower level. The architecture may still resemble something that is comprehensible and even sensible, but some of the implementation code and configuration has become unmaintainable. Frequent causes of unmaintainable code are:
- Changes in personnel with little, poor, or no documentation to reference upon inheritance.
- Changes in personnel with plenty of documentation and no time allotted in the project plan to review it before diving into the next set of “enhancements.”
- No change in personnel and no time allotted for code reviews.
- No change in personnel and no time allotted to address technical debt.
The common theme here is that haste makes waste. The irony is that the haste is always driven by a desire to reduce waste (or perceived waste, in the form of costs associated with the very activity that would have prevented the waste).
Growing pains
Earlier I mentioned some of the transitions that I have experienced firsthand. Here is the list again for context:
- Single-tier to two-tier architecture
- Two-tier to n-tier architecture
- Fat client to thin client
- Single server to redundant services
- Redundant services to remote procedure calls (RPC)
- RPC to web services
- Thin client to app
A side effect of each of these is that they tended to increase the number of teams necessary to build and maintain solutions. By itself, the sharing of responsibility is a good thing. Efficiencies can be realized by having teams focused on specific areas, as long as both technical and human interfaces are aligned to support the same goals. Unfortunately, cultures of competition and departmental isolation can also result from the same growth, leading to a focus on improving efficiency at the expense of the original goal.
How to delay the changes until they are needed
The phrase “Wherever you go, there you are” applies just as aptly to migrating from one IT solution set to another as it does to trying to leave your troubles behind by relocating. If all of the bad patterns come along for the ride, the new will surely resemble what was just left behind sooner or later.
To be both fair and clear, most (if not all) of the common issues that drive enterprises to move to a new platform did not crop up because someone deliberately sabotaged the processes. They came about because the intention behind a move in the right direction was at some point forgotten, and only the motion was left.
Documentation started falling by the wayside, driven by two trends. The first was more intuitive user interfaces that required minimal or no documentation. This was a great idea with the best of intentions. However, some results of this trend are not so great, usually with end users being the ones to suffer. Many open source projects ditched documentation by initially simplifying their interfaces. As those projects became popular, books, paid consulting, and ad-supported blogs became much more lucrative than documenting the more complex versions. Since people were used to the software not having documentation (because it originally didn’t need any), this became acceptable.
Within the enterprise, the adoption of agile practices and the philosophy that documentation should be no more than necessary eventually devolved into little or no documentation, both because the skills to document properly atrophied and because budget-pressured management convinced themselves it was no longer needed. While I am probably the most vocal about documentation problems resulting from practices that only fractionally resemble agile (frAgile, for short), many long-standing agile proponents have recently been calling BS on how enterprises claim to have adopted agile while actually destroying it by calling what they are doing agile (or extreme programming, scrum, etc.). Two example posts are The Failure of Agile and Dark Scrum.
Ideally, make it part of your project process to capture opportunities for improvement and to document any technical debt knowingly incurred. Additionally, make it part of your SDLC to review the backlog of technical debt and technical enhancement recommendations at the start of project planning, and make it mandatory to budget for reducing some level of debt and/or including some improvement.
Alternatively, what I have done for most of my consulting career is to keep a running catalog of such items throughout the project. Toward the end, I assemble my notes into a single document (occasionally happily checking off items that were addressed before project completion) as a handoff to management at the completion of each project. Later, I re-circulate the document prior to any follow-on projects.
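As a minimal sketch of what such a running catalog might look like in code form (the schema and field names here are my own illustration, not any standard), each item can be captured as a small record and rolled up into a handoff summary at project end:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema for a technical-debt catalog entry;
# the field names are illustrative, not from any standard.
@dataclass
class DebtItem:
    summary: str
    incurred: date
    effort_days: int          # rough estimate to pay the item down
    resolved: bool = False

def handoff_report(items):
    """Split the catalog into open debt and items already addressed,
    mirroring an end-of-project handoff document."""
    open_items = [i for i in items if not i.resolved]
    done_items = [i for i in items if i.resolved]
    lines = ["Technical debt handoff"]
    lines += [f"  OPEN  ({i.effort_days}d) {i.summary}" for i in open_items]
    lines += [f"  DONE  {i.summary}" for i in done_items]
    return "\n".join(lines)

# Example catalog entries (invented for illustration)
catalog = [
    DebtItem("Hard-coded DB credentials in build script", date(2017, 3, 1), 2),
    DebtItem("Duplicate validation logic in UI and API", date(2017, 4, 12), 5),
    DebtItem("Missing docs for nightly batch job", date(2017, 5, 2), 1, resolved=True),
]

print(handoff_report(catalog))
```

Whether the catalog lives in a script, a spreadsheet, or an issue tracker matters far less than the discipline of reviewing it before each follow-on project.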
I’m optimistic enough to expect that there will eventually come a time when this article is no longer relevant, and cynical enough to doubt that it will happen in my lifetime. The way I cope with this is to do things as best I can with the resources I can muster and continue to write articles like this to remind people that technology was supposed to make things simpler and easier so that we could spend time focusing on more interesting problems. Please share your coping mechanisms with me on social media.
This article is published as part of the IDG Contributor Network.