It’s hard to miss the increasingly frequent and ferocious cyber-attacks on our infrastructure. To save me compiling yet another run-down of recent events, here’s the opening of a recent CNN piece on the topic:
The CNN article goes on:
To put things into perspective, consider February’s breach at a Florida water treatment plant, where an attacker attempted to poison the water supply of a town of nearly 15,000 residents. This account in a PCMag article is sobering:
While it’s possible to conclude that underfunded municipal water supplies are especially vulnerable to this kind of attack (see this excellent piece by Krebs on Security), it’s hard to feel that they are true outliers.
In an increasingly connected world, where fewer and fewer parts of our physical reality are untouched by the Internet, this is clearly a problem.
And while the prevailing narrative of “update all your software” sounds straightforward in principle, in a world where connected infrastructure is barely even inventoried, even that may prove a tall order. Add to that the set of actual measures needed for real-world security – including end-to-end encryption, multi-factor authentication and a Zero Trust service architecture – and the picture starts to look borderline hopeless.
To move forward to a more secure future, infrastructure operators need to embrace Software-as-a-Service (SaaS) throughout their operations. In other words, they ought to outsource more of their software and IT to external providers – ones who specialize in providing solutions for their sector; who do so securely and at scale.
At first glance this may seem counterintuitive. It (a) means that more parties are involved in any given operation, (b) involves more interfaces between parties, and (c) involves more distributed IT infrastructure.
In reality, such a setup is likely to lead to more secure and robust outcomes. Firstly, putting “more eyes” on a problem – especially the eyes of IT specialists attuned to spotting security vulnerabilities – almost always leads to better outcomes. Secondly, communications on the Internet are already underpinned by countless secure interfaces between parties, often in the form of well-designed APIs. These have a long track record of success in sectors like finance. As to the third point: it seems fair to say that the myth that IT infrastructure needs to live in a physical silo in order to be secure has by now been thoroughly debunked, most recently by the US military’s move to the cloud.
Yet the key point is this: Software-as-a-Service providers enter into long-term partnerships with their customers. And in these partnerships, SaaS offerings evolve not just to address changes in customers’ functional needs, but also in response to evolving cyber-threats and security best practices. Ensuring cyber-security is no longer a passive but an active undertaking.
With SaaS, each party can focus on what it does best: water utilities run the plants that keep our drinking water clean, while software providers build and maintain secure control systems for those utilities.
The above may also sound too simple a prescription for what is fundamentally a complex problem. For one, infrastructure operators run on timelines that span decades – and may be justifiably hesitant about outsourcing to providers who are not guaranteed to be around for that long. Also, the inherent security of SaaS may only go so far in a sector with just as much hardware as software. Thankfully, recent trends mean that issues like these are increasingly surmountable; more on this in a follow-up post.
In the meantime, we need to take all the steps we can to secure our critical infrastructure. Because malicious actors are just starting to scratch the surface of what’s really possible.