The largest cost of any project is man-hours. For example, consider the common vehicle repair of replacing a head gasket. A quick Google search shows that the average head gasket repair runs into the thousands of dollars, and yet the gasket itself can cost as little as $10. This is because a head gasket repair not only takes hours to complete; the mechanic must know what they are doing, have the right tools, and be able to address additional issues that arise along the way. Now, if several people need a gasket repair at the same time, these complications mean they will likely be waiting a long while to get their cars back.
When it comes to software development, labor rates are also typically the largest cost. While developers do solve tasks as complicated yet well-defined as changing a head gasket, they often work on equally complicated tasks with no predefined approach. Like the mechanic, software developers may be able to speed up the work by bringing in another person to help. However, this can backfire: some tasks can only be worked on by one person at a time, and sometimes these tasks are sequential. Furthermore, a 2021 industry survey found that the number of interview requests sent to software engineers had doubled. All of this establishes developer time, a specific type of man-hour, as not only the costliest but also the most precious resource in software development.
With all of this in mind, surveys show that the average developer spends only 32% of their available time writing code and 35% of their time managing it, and our experience with clients at Insight supports similar results. The goal, then, should be to find ways to increase the amount of time developers can spend writing code. Thankfully, the tasks involved in managing code are typically highly repeatable, whereas writing code requires brainstorming and troubleshooting. The benefit of highly repeatable tasks is that they can be standardized, and if they can be standardized, they can be automated.
The first step in a code management process is to document the value stream and workflow. A value stream is the set of actions that happen to capture a request, realize it, and then deliver that request to the customer. Or, in the words of John Willis, “From aha to ka-ching.” The workflow is the series of activities required for a team to process a request, similar to a Standard Operating Procedure (SOP) in lean manufacturing. These two documents enable the evaluation of current steps to identify areas of waste, optimize the delivery process, and ultimately initiate the standardization of work. Depending on your organization, there can be huge returns from this activity alone.
Once the work has been standardized, the highly repeatable tasks (the actions that will be performed the same every single time) can be automated with the appropriate tools. The tools will vary depending on the tasks you have standardized and the technology you are leveraging. At the highest level, a scripting tool (Azure DevOps, Jenkins, Ansible, etc.) will be leveraged to link all your other tools together in what is called a toolchain or Continuous Integration/Continuous Delivery (CI/CD) pipeline.
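At its core, a CI/CD pipeline is an ordered series of automated steps that halts at the first failure. As a minimal, tool-agnostic sketch of that orchestration logic (the stage names and commands below are illustrative assumptions, not any particular product's configuration):

```python
import subprocess

# Ordered pipeline stages: each pairs a stage name with the shell
# command that performs it. These commands are placeholders; a real
# toolchain would call your linter, test suite and packager here.
STAGES = [
    ("lint", "echo running linter"),
    ("test", "echo running unit tests"),
    ("build", "echo packaging artifact"),
]

def run_pipeline(stages):
    """Run each stage in order; stop at the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None).
    """
    completed = []
    for name, cmd in stages:
        result = subprocess.run(cmd, shell=True, capture_output=True)
        if result.returncode != 0:
            return completed, name  # halt on first failing stage
        completed.append(name)
    return completed, None

completed, failed = run_pipeline(STAGES)
```

A dedicated scripting tool such as Azure DevOps, Jenkins or Ansible plays exactly this role, with the added benefits of triggers, logging and shared configuration.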
Automated toolchains can perform tasks in a fraction of the time humans need, and with higher accuracy, because they are incapable of skipping steps or overlooking details. This also reduces the cognitive load on your developers and shortens onboarding for new teammates, not only by providing a documented process but by removing the need to keep track of these operational steps. A toolchain can be leveraged by every developer on the team, and scripts can even be shared across teams.
Research led by Nicole Forsgren, Ph.D. (documented in her book “Accelerate” and DORA’s State of DevOps Reports), has identified quality as a key enabler of speed to market. This is because high-quality code is easier to work in, requires less troubleshooting to successfully deploy a change, and increases the time developers can spend on new requests because they are not constantly fixing production issues. The Navy SEALs have a phrase, “Slow is smooth, smooth is fast,” which is very much the case in software development. Taking the time to ensure quality in your codebase enables smooth sailing for the development of new features and functionality.
The best-known way to increase quality is to test more often. Test cases are highly repeatable and therefore perfect for automation; they are also a necessary part of any CI/CD pipeline. Automated testing is enabled by tools such as version control repositories, testing suites and code smell scanners; by practices including unit testing, Test-Driven Development (TDD) and Behavior-Driven Development (BDD); and by the rule of thumb “check in early and often.” Put simply, the automated script relies on a trigger to run, typically a developer committing or checking in code changes to version control. Therefore, the more often a developer commits, the more often these quality checks run. Smaller changes also mean less troubleshooting if something fails the automated test script. Plus, the sooner the toolchain can inform the developer of an identified issue, the more relevant that information is and the easier it is to respond to. What is easier to remember: a decision you made an hour ago or a month ago?
While developers can learn to check in frequently on a traditional tollgate project, it is easier when the work is designed to be delivered incrementally. It can also be impossible to automate every step if the team does not have access to its entire Systems Development Life Cycle (SDLC). This is where run-of-the-mill Agile practices can reduce overhead on developers by introducing quick feedback cycles into the requirements model and the team's operational structure. Not only can this enable feedback about quality, but it can also encourage feedback from actual users about the value of the functionality developers are building. This quick feedback, enabled by smaller work requests and automated operations, frees organizations to run riskier experiments, knowing a change can be rolled back or fixed if customers are not fans. It also allows leadership to reassess priorities so that developers do not waste weeks, months or years on something that is not valuable to end users. Overall, it decreases time to market, enabling organizations to respond to their users and their competitors.
It is crucial to understand that software architecture can be a huge factor when it comes to effectively leveraging a CI/CD pipeline or working in an agile fashion. If an application was developed in a tollgate era with a monolithic structure, then it may be incredibly difficult to introduce incremental changes or automated toolchains. This is a common issue our clients experience when trying to implement these practices and/or move to the cloud, and it is why we at Insight offer an app modernization service. Organizations do not have to pause all development to modernize their portfolio, though. An intentional approach can be applied to new functionality to refactor the code as the development team continues to add value. And honestly, if an organization has applications that do not change very often, there may not be a reasonable return on this sort of investment.
To summarize, your organization can significantly increase its available developer man-hours by leveraging the right tools. There are several steps required to realize this benefit, but these steps can be advantageous on their own. Furthermore, if you calculate a third of your current development costs and compare it to the cost of implementing these tools and practices, I will bet that the math works out in your favor. At Insight, we help organizations through this process, and recently helped a CX department at a Fortune 500 company to not only realize that third in efficiency but also to reduce deployment costs by more than 90%. It is a worthy journey, and we would be honored to help you accelerate your capabilities.
Lastly, if your organization has multiple development teams, then as they standardize their work and adopt tools to reduce the load of repetitive tasks, you will likely find that they are solving the same problems with the same technology. This is where organizational standardization pays off: it promotes a shared quality expectation across your portfolio, captures any savings from bundling licenses, and further reduces the overhead of teams reinventing the wheel. Once you reach this point, there is also overhead on your teams that can be drastically reduced by creating a dedicated platform engineering team to build and run an Internal Developer Platform (IDP). In my next writing, I will explore how your organization can save even more developer man-hours by effectively leveraging these platforms at scale.