Over time, all software eventually proves to have security flaws, leaving it open to attackers determined to compromise it.

This isn’t because software changes or decays; it’s because software is complex, and, given enough time, someone will find its vulnerabilities and exploit them.

A great example of this was the Heartbleed bug, discovered in 2014. The bug was introduced in 2012 and affected OpenSSL, a library used (then, and probably still now) by about two thirds of the world's web servers. And yet it took two years for someone to discover that a malformed "heartbeat" request could trick it into handing back the contents of the server's private memory.
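To see the class of bug at work, here's a toy Python sketch, a simulation of the flaw's shape rather than OpenSSL's actual C code: the handler answers with as many bytes as the request *claims* to contain, not as many as it actually sent.

```python
# A toy simulation of the Heartbleed class of bug (not OpenSSL's real code):
# the handler trusts the attacker-supplied length field rather than the
# actual payload length, so it echoes back adjacent "process memory" too.

# Pretend process memory: a heartbeat buffer followed by unrelated secrets.
memory = bytearray(64)
memory[16:] = b"secret-session-key=hunter2".ljust(48, b".")

def handle_heartbeat(payload: bytes, claimed_len: int) -> bytes:
    memory[: len(payload)] = payload        # copy the payload in
    return bytes(memory[:claimed_len])      # BUG: trusts claimed_len blindly

print(handle_heartbeat(b"hello", 5))   # well-formed request: echoes b'hello'
print(handle_heartbeat(b"hello", 60))  # lying about the length leaks secrets
```

The actual fix was, in essence, a bounds check: discard the request if the claimed length exceeds the payload actually received.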

It’s not just software

The Spectre and Meltdown vulnerabilities, which came to light in December last year, were called ‘catastrophic’ by security experts.

A fundamental processor design feature, speculative execution, which had been in use for decades, suddenly became vulnerable simply because some people found a way to compromise it. The vulnerability, of course, had been there all along; it just took that long for someone to find it. And find it they did.

This time the flaw was in hardware rather than software, further highlighting the issues of complexity and the fact that time leads to vulnerability.

New might not be the answer

New software (or hardware) doesn't necessarily have fewer flaws; adding new functionality can even increase their number. So each time you update a piece of software you're likely to be more secure, because you'll get fixes for the most recently discovered bugs; but if the update contains new features, you may also be receiving new security flaws that will only be discovered in the future.

This isn't necessarily a zero-sum game, though. Firstly, even if an update introduces as many bugs as it fixes, attackers have had far less time to discover the new ones. Secondly, as a piece of software matures it tends to gain fewer new features, so updates become less likely to introduce new flaws. It depends on the software. Either way, sitting idle on updates is unlikely to serve you well; you need to keep updating.

Security is a process, not a product

Put simply: moving targets are hard to hit. Check for flaws frequently, regardless of the software's age. Ideally, automate this by signing up to receive vulnerability announcements for the libraries you use, or by running a tool that scans the libraries installed in your codebase and checks them against vulnerability announcements for you.
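As an illustration, here's a minimal Python sketch of that scanning approach, checking each installed package against the OSV.dev vulnerability database (assuming Python 3.9+ and network access); in practice you'd more likely run a maintained tool such as pip-audit, but the principle is the same.

```python
# Minimal sketch of automated dependency checking: query the OSV.dev
# vulnerability database for each installed Python package and report
# any known advisories. A maintained tool like pip-audit does this better.
import json
import urllib.request
from importlib.metadata import distributions

OSV_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str) -> list[dict]:
    query = {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}
    req = urllib.request.Request(
        OSV_URL,
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

for dist in distributions():
    name, version = dist.metadata["Name"], dist.version
    for vuln in known_vulns(name, version):
        print(f"{name} {version}: {vuln['id']} - {vuln.get('summary', '')}")
```

Run on a schedule, as a nightly CI job say, this turns "someone published an advisory" into an alert you actually see.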

If you plan to continue using a piece of software, you need to plan regular security upgrades and maintenance for it. Installing something and then leaving it, believing it to be a watertight "finished" product that doesn't need updating, is foolish in the extreme. Seldom is any software a "finished" product. Build this into your timelines, your budgets and your mindset.

Think about the future

Another danger of legacy infrastructure is that the longer you leave it before upgrading, the more difficult, and probably more expensive, the upgrade will be when the time finally comes (likely with some urgency).

The NHS famously had to pay Microsoft a large sum to keep providing Windows XP security updates, simply because the NHS hadn't upgraded away from the then-unsupported operating system.

Paying for bespoke security updates was, perhaps, cheaper than licensing each new version of Windows. But when the NHS fell victim to the WannaCry ransomware last year, it rather suggested the saving wasn't worth it.

The stakes are high

In the case of WannaCry, the data involved was patient healthcare data, which is, after all, extremely sensitive. Weak security in other healthcare systems, including life-support machines, could be deadly. Financial systems too, you'd hope, prioritise security before anything else.

Every industry has its own security priorities, and the consequences of a breach vary accordingly.

Don't rely on being given updates

Finding out that your application is vulnerable via an email alert about a library you're using is one thing; finding out by seeing your brand name in the news headlines is quite another. You shouldn't rely on updates to third-party software as your sole means of staying safe; you need to test things yourself as well. There are two reasons for this. Firstly, you want to discover flaws yourself, if possible, before other people do; and secondly, no one else is going to provide updates for the parts of the software you've written in-house.

There are various testing tools you can use to scan applications for common types of vulnerabilities. These tools periodically run attempted attacks against your application (or a replica of it), so that if you introduce a new flaw, whether through a third-party library or through your own code, the scanner may detect it and alert you. But these tools only detect common classes of vulnerability; they won't pick up everything. So there's no substitute for instilling a mindset of security thinking in your development team.
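To make the idea concrete, here is a heavily simplified Python sketch of what such a scanner does under the hood; the target URL and the probes are placeholders, and real tools such as OWASP ZAP run hundreds of far more sophisticated checks.

```python
# A heavily simplified sketch of a vulnerability scanner: periodically send
# a few canned attack payloads at a *replica* of your application and flag
# suspicious responses. The target URL here is a hypothetical placeholder.
import urllib.parse
import urllib.request

TARGET = "https://staging.example.com/search?q={payload}"  # replica, not prod

PROBES = [
    ("reflected XSS", "<script>alert(1)</script>",
     lambda body: "<script>alert(1)</script>" in body),
    ("SQL error leak", "' OR '1'='1",
     lambda body: "syntax error" in body.lower() or "sql" in body.lower()),
]

def run_scan() -> None:
    for name, payload, looks_vulnerable in PROBES:
        url = TARGET.format(payload=urllib.parse.quote(payload))
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode(errors="replace")
        except Exception as exc:  # unexpected failures are worth a look too
            print(f"[?] {name}: request failed ({exc})")
            continue
        if looks_vulnerable(body):
            print(f"[!] possible {name} at {url}")

if __name__ == "__main__":
    run_scan()  # e.g. schedule nightly via cron or a CI job
```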

Ultimately, keeping applications secure is a process and a culture. You cannot simply buy it, and you certainly cannot achieve it by standing still.