Welcome to my multi-part series about Unix principles applied to infrastructure as code. Throughout my career, I have seen tools rise and fall much like the benevolent kings turned tyrants of old. How many times do you hear the following at your job?
"Hey, What does this tool do? What features does this tool have?".
In my opinion, this is a backward approach to solving problems: we get caught up in marketing and the feel-good "hello world" experience that a tool's documentation provides.
Sometimes, we receive a directive to use a tool and the organization wants to see it implemented. We read the docs and implement the tool to the best of our ability, but best practice is followed only so far. Rarely does the organization bring in leaders in that space to help with implementation and proper engineering of the tool. As more consumers of the tool come on board, with no one leading best practice and deadlines to meet, people simply engineer playbooks, modules, scripts, and so on to hit their dates, never seeing the technical debt that is created every day.
These scenarios lead to a hammer in search of a nail: a tool without purpose, searching for meaning. Maybe your tool grew organically without any oversight, or was put into production with grandiose plans developed in an ivory tower, without ever talking to the consumers of the tool to learn their processes and pain points. Perhaps you did not have the backing you needed to push and enforce best practices as the tool evolved.
Whatever the case may be, what usually happens is that a tool reaches a certain level of maturity within the organization but develops problems. Maybe the problems stem from load, from the way the tool was implemented, or from plain incorrect use. The tool gets a bad name within the organization, someone finds a new tool, and the cycle repeats. The organization is forever caught in a vortex of tool churn and half-baked implementations.
What Can Be Done?
It's my personal belief that many of the new DevOps concepts are not new at all. They have been the creed of Unix administrators for decades; perhaps we have not socialized them in the right way, or people dismissed them as the old ways of Unix longhairs stuck in a terminal, unwilling to change.
From my point of view, the DevOps movement, if you want to call it that, is simply catching up to Unix principles that were so far ahead of their time they were taken for granted. Yes, I even used to take them for granted, until I started seeing the same patterns, or rather anti-patterns, repeatedly emerge in implementations built with so-called DevOps tools and automation.
So we're going to explore some of these principles and how they can be applied to automation and infrastructure as code. While some may seem like simple common sense, I urge you to comb through your organization's playbooks, modules, recipes, and scripts. You'll be surprised at how many of them are broken, and simple adherence to these principles can prevent a lot of despair.
Table of Contents for Series
Over the next few weeks, I will introduce you to various principles, providing both a high-level overview and in-depth coverage of each. I won't cover every single principle, but rather the key principles I feel need to be understood.
- Rule of Modularity (Part I, Part II)
- Rule of Clarity (Part I)
- Rule of Composition
- Rule of Separation
- Rule of Simplicity
- Rule of Parsimony
- Rule of Transparency
- Rule of Least Surprise
- Rule of Silence
- Rule of Repair
- Rule of Extensibility