Software dependencies and keeping them up-to-date

Software updates are a pain, but an aggressive update strategy offers benefits

Software teams nowadays are often confronted with a situation where updates for third-party components arrive almost every day. Delaying updates and bundling them may initially look like a viable option, but the risks and costs of this approach should not be underestimated. Read the expert article by Netcetera's CTO Corsin Decurtins, published on inside-it.ch.

If inside-it.ch ever holds a competition for the least popular word in IT, then my nomination would be "software update". I think I would have a strong chance of winning. Updates are basically a really good thing: a new, prettier user interface, new functionality, bug fixes, better performance, fixed security vulnerabilities and so on. That should get users excited. But let's be honest: in most cases updates are a real pain. Installation takes time, large updates can clog up the Internet connection and processors, updates can introduce bugs, and after installation nothing works quite like it used to. The new user interface may look nice, but people can no longer find what they are looking for.

Software projects are sadly no exception to these issues. Applications today use many third-party components such as libraries, frameworks, platforms, microservices, APIs and other runtime dependencies. Whether they are commercial components or open source does not really matter. For most technology stacks it is perfectly normal for even a simple little software project to have dependencies on dozens of components. Even if the project itself relies on only a few direct dependencies, these in turn have their own dependencies that need to be integrated into the parent project. Microservices, continuous delivery and other trends in modern software development also mean that dependencies are being fragmented ever more finely into smaller independent components that are constantly being updated and improved.
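This fan-out is easy to observe for yourself. As a minimal sketch, the following Python snippet (standard library only) lists every package installed in an environment together with the number of dependencies each of them declares in turn; even small environments usually reveal a surprising amount of transitive baggage.

    # Minimal sketch: list the packages installed in a Python environment
    # and count the dependencies each of them declares in turn.
    # Standard library only (importlib.metadata, Python 3.8+).
    from importlib.metadata import distributions

    for dist in distributions():
        name = dist.metadata["Name"]
        declared = dist.requires or []  # requirement strings, e.g. "urllib3>=1.21"
        print(f"{name}: {len(declared)} declared dependencies")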

Software teams nowadays are often confronted with a situation where updates for third-party components keep arriving almost every day. This is true regardless of whether the software project is still being actively developed, is in maintenance mode, or is about to be decommissioned.

How do software teams manage updates for third-party components?

For software teams and managers, the question is how to manage this situation. Updates to new versions of components are not free. New versions of components mean new releases of the application, which in turn mean tests, builds, deployments and so on. Often the code or configuration needs at least minor adjustments before the software runs with the new versions of the components. All of this costs time and money and involves risks.

Why would you put yourself through all of this, if these are not critical security updates? Especially if everything is working just fine with the old version of the software and the users are happy? As they say: "If it ain't broke, don't fix it."

As little as possible, as much as necessary ...

A very common strategy is therefore to install updates for components only when it becomes absolutely necessary: for security patches, urgent performance improvements, or long-awaited functionality. Otherwise, integrating new versions of components brings no direct benefit to the users or managers of the software.

This strategy ensures that the available working hours of developers, testers and operations people are used as efficiently and effectively as possible. Instead of spending their time installing updates, teams can focus on new, innovative projects that add value. Many updates can be leapfrogged entirely, saving time and money.

In addition, there is less risk of downtime if as few changes as possible are applied to a software system that is running perfectly. And finally, the users are grateful when they do not receive a new update for their software every few days.

As mentioned above, this strategy is very common. It is more or less the standard for software that is no longer in an active development or extension phase. For software in maintenance mode, people try to minimize the cost in time and money: they apply as few changes as possible, or at least bundle them into a few maintenance windows every few months or years. But this strategy also entails risks.

Standing still means going backwards

Software ages differently from physical systems. There is no abrasion, friction or wear-and-tear. Programs that were written in the 1960s, for example, basically run exactly the same way today as they did back then. But there are comparable symptoms of aging in software. For example, does the hardware for your program from the 1960s even exist any longer? Or the software may still run, but no changes can be made because the program is written in a language that today's programmers simply do not know how to work with.

Physical systems age because they are subject to processes of abrasion or decay. Software ages because the world around the software moves on. Software teams move on to other projects, developers move to other companies or retire, companies fail, new companies are created, hardware is replaced, there are new architectures, programming languages, libraries, platforms and APIs.

For software systems, standing still means going backwards. Software that is not kept continuously up-to-date in terms of its dependencies accumulates technical debt. And that can cost those affected very dearly.

A Gordian knot of updates hangs over our heads

Installing the new version of a dependency in a software system that is otherwise up-to-date is in most cases a relatively minor exercise. Only one dependency needs to be changed. But if the application has fallen behind on several updates, things can quickly become complicated. The new version of library A may, for example, require the latest version of runtime environment B. To install the update for A, you also need to update B. But that in turn requires an update of framework C, because your version is incompatible with the latest version of B. In addition, library D is not yet available for the latest version of B. But D is no longer maintained, so you need to replace all the D components in their entirety with E ...

Suddenly a minor update from version 1.5.2 to 1.5.3 of a library has turned into a giant Gordian knot of updates and dependencies that will keep the team busy for days, if not weeks. Any savings from skipping and postponing software updates are rapidly eroded by situations like this.
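The conflict at the heart of such a knot can be written down in a few lines. The sketch below uses Python's packaging library to model the hypothetical scenario above; the component names and version ranges are invented for illustration, but the constraint check is exactly what dependency resolvers perform.

    # Hypothetical model of the scenario above, using the "packaging" library
    # (pip install packaging). Names and version ranges are invented.
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    constraint_from_a = SpecifierSet(">=5.0")  # new library A needs runtime B >= 5.0
    constraint_from_c = SpecifierSet("<5.0")   # old framework C pins runtime B < 5.0

    available_b = [Version(v) for v in ("4.8", "4.9", "5.0", "5.1")]

    compatible = [v for v in available_b
                  if v in constraint_from_a and v in constraint_from_c]
    print(compatible)  # [] -- no version of B satisfies both; the knot is tied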

The real problems start when the updates become urgent and can no longer be delayed. Imagine you have a security vulnerability in one of your dependencies and you need to patch it straight away. Or your old hardware breaks down, you can no longer find the parts to fix it, and you have to migrate to a new hardware generation.

In this case, there are not just opportunity costs because your team is kept busy unravelling the Gordian update knot. There are also direct costs, plus damage to your reputation, because your software is offline for days or weeks.

The Gordian update knot can become a problem even if no updates are urgently needed. The cost of changes to the software rises massively. Even the smallest change can be very expensive and involve major risks. Perhaps the existing software was neglected for so long that it is now in such poor condition that renovation is barely worth it and it would be cheaper to build something new.

Even if that makes us, the software developers, happy, it is not a sustainable and efficient IT strategy. Besides, we would much prefer to work on new, innovative projects.

Is a more aggressive update strategy the solution?

Extreme Programming (XP) has a mantra: "If it hurts, do it more often." That also seems to be a good strategy for our update problem, and more and more software companies and users are backing it. With this strategy, new versions of dependencies are integrated continuously and without delay. This does take investment, but it also has many benefits.

A strategy like this, however, necessarily requires a high level of automation if the costs are not to grow exponentially. Several updates per week are absolutely normal for software projects with a large number of dependencies. This pace cannot be managed without automated builds, integration, quality assurance, testing, releases and deployment, especially for software projects that are in maintenance mode and are not otherwise being actively worked on.

Modern software development processes with their automated pipelines (continuous delivery / continuous deployment) mesh well with this kind of strategy. Updates to dependencies such as libraries, runtime environments or services can be automated and integrated directly into the software delivery pipeline. The build environment continuously checks whether new versions of dependencies are available. If so, the updates are automatically integrated and the software delivery pipeline is kicked into action with build, testing, deployment and so on. Only when there are errors do software developers have to intervene manually and adjust the code as necessary.
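What such a check looks like depends on the ecosystem. As a minimal sketch for a Python project, the snippet below compares pinned versions against the latest releases on PyPI via its public JSON API; the package names and pins are placeholders, and a real pipeline would read them from a lock file and trigger the build, test and deployment stages instead of printing.

    # Sketch of a "check for new versions" pipeline step for a Python project.
    # Pins are placeholders; a real pipeline would read a lock file and then
    # kick off the delivery pipeline when an update is found.
    import json
    from urllib.request import urlopen

    from packaging.version import Version

    pinned = {"requests": "2.31.0", "flask": "3.0.0"}  # example pins

    for name, current in pinned.items():
        with urlopen(f"https://pypi.org/pypi/{name}/json") as response:
            latest = json.load(response)["info"]["version"]
        if Version(latest) > Version(current):
            print(f"{name}: {current} -> {latest} (update available)")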

This approach guarantees that the software is always up-to-date as far as its dependencies are concerned. Urgent and security-critical updates can be integrated and deployed rapidly and efficiently. The risk from code changes is minimized, because each individual change is very small and automated tests safeguard software quality.

Third-party components are an integral part of modern software development. Today's applications would be inconceivable without powerful libraries, runtime environments, services and APIs, and would certainly cost unacceptably more to build. Third-party components and dependencies do, however, need continual maintenance and need to be kept up-to-date. Delaying updates and bundling them may initially look like a viable option, but the risks and costs of this option should not be underestimated.

Automating updates to third-party components and integrating them into continuous software delivery pipelines is a much preferable solution. Taking a long-term view, this option has fewer risks and is more economical and more sustainable.


Source: inside-it.ch (in German only)

