The global economy is powered by business innovation, with small and large organisations alike inventing the future for us all. The rapid rate of change brings both opportunities and threats, with recent cyber events acting as a wake-up call. Rather than being afraid, we should take them as a reminder that we need to design businesses to operate, and even thrive, in unexpected circumstances.
In the 1970s, companies like Toyota revolutionised manufacturing with “just in time” supply chains. Nothing ever comes for free: for every dollar of stock taken out of the system, a dollar of contingency and slack is also removed. When everything works well there isn’t a problem; when it doesn’t, the flow-on effects can create a supply chain whiplash, or bullwhip effect. The best manufacturers in the world solved this by putting enormous pressure on quality to avoid exactly these sorts of disruptions.
These are lessons we need to learn as we roll out more sophisticated systems across our society, such as connected infrastructure and transport, and even make the move to autonomous systems. In a few short years, our society is likely to be orders of magnitude more connected through complex networks and supply chains.
Computing generally follows real-world models in its first iterations, and its mirroring of best-practice supply chains is no exception. Moving from a world where each system was independent to one where they are tightly coupled across corporate boundaries has produced a data supply chain that borrows heavily from manufacturing. The addition of cloud computing means that almost every process involves at least two, and often more, players linked together through a multitude of interdependencies.
This trend is as prevalent in our digitally enabled infrastructure (such as support for our rail networks and energy grids) as it is in digital-only systems (such as banking, telecommunications and government systems). The tighter those linkages, the more functions can be added and the lower the overall cost.
As amazing as the capabilities of our world of technology are, the integration leaves us with almost no room for error, and little ability to flex in an environment of disruption. For example, our energy grids seem to be becoming more brittle as interconnections rise, and regular travellers know the impact when an airline operating without slack hits a problem.
Like the manufacturing supply chains of the last half century, the key to keeping this technology running is quality, with CIOs aiming to keep systems up 24/7. Even small outages, though, have a flow-on effect that is harder to predict and further reaching than the equivalent disruption in a manufacturing process. That’s because the complexity of these system interdependencies has grown exponentially.
The brittle and inflexible nature of complex systems has been one of the reasons that retail has struggled to adjust to the juggernaut of online shopping, and that manufacturers are still trying to win back control of their supply chains. Recent cyber-attacks, leaving major companies offline, have brought this into stark focus. The attacks have typically encrypted or hijacked just one or two systems in the network, yet brought a brittle environment to breaking point.
The architects of systems and processes tend to design for today’s business. Defensive computing, by contrast, is a paradigm for boxing in components so that they keep working regardless of what happens around them. It is a mindset that goes beyond testing the scenarios outlined by stakeholders and moves to safe failovers in the event of anything unexpected.
Defensive measures include having systems work while offline or while counterpart systems are unavailable and when reference data is corrupted or hijacked. If technologists adopt a more defensive mindset, the testing burden is dramatically reduced and the uses of their systems can be extended far beyond the context of their initial design.
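To make the idea concrete, here is a minimal Python sketch of one such measure: keeping a last-known-good copy of reference data so a system can continue operating when its counterpart is offline or returns corrupted data. The service, data format and function names are assumptions for illustration, not a prescription.

```python
# Hypothetical sketch: a service holds a last-known-good copy of its
# reference data so it can keep working when the upstream source is
# offline or compromised. The data format is invented for illustration.
LAST_KNOWN_GOOD = {"rates": {"USD": 1.0}}  # seeded fallback copy

def validate_rates(payload):
    """Accept only data matching the expected shape and value ranges."""
    if not isinstance(payload, dict) or "rates" not in payload:
        return False
    rates = payload["rates"]
    return (isinstance(rates, dict)
            and all(isinstance(v, (int, float)) and v > 0
                    for v in rates.values()))

def load_reference_data(fetch):
    """Try the upstream source, but fail over to the cached copy."""
    global LAST_KNOWN_GOOD
    try:
        payload = fetch()  # may raise if the counterpart is offline
        if validate_rates(payload):
            LAST_KNOWN_GOOD = payload  # refresh the safe copy
            return payload
    except Exception:
        pass  # network error, timeout, malformed response, ...
    return LAST_KNOWN_GOOD  # degrade gracefully instead of failing
```

The key design choice is that a failed or invalid fetch never propagates an error outward; the caller always receives usable, previously validated data.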
Where tightly-coupled systems are brittle, those that have been defensively architected are like flexible buildings that can withstand the buffeting winds of cyber-attacks and the shifting sands of changing business models.
Defensive design requires more expansive thinking about the worst-case scenario for every module. Data should be backed up incrementally and then thoroughly validated. Connected systems should be assumed to provide completely unexpected and illegitimate responses. Users can be expected to approach every interaction with an almost destructive mindset.
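Validating a backup can be as simple as proving that a trial restore reproduces the original bytes. The following Python sketch, with names invented for illustration rather than drawn from any particular backup tool, shows the principle:

```python
import hashlib

# Hypothetical sketch: a backup only counts as valid once a trial
# restore reproduces the original content exactly.

def checksum(data: bytes) -> str:
    """Content fingerprint used to compare original and restored copies."""
    return hashlib.sha256(data).hexdigest()

def backup_is_valid(original: bytes, restored: bytes) -> bool:
    """Compare fingerprints rather than assuming the copy succeeded."""
    return checksum(original) == checksum(restored)
```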
Every part of a system should be independently robust, proactively testing that every interaction is valid rather than only checking for known invalid responses. The more modular and API-driven such a solution is, the more likely it is to be flexible and robust enough to survive a cyber-attack, and to absorb business disruption by combining with new applications.
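The difference between checking that an interaction is valid and checking for known invalid responses is the difference between an allowlist and a blocklist. A small Python sketch, with an account-ID format assumed purely for illustration:

```python
import re

# Hypothetical sketch of positive (allowlist) validation: rather than
# rejecting known-bad inputs, accept only inputs that prove they match
# the expected format. The account-ID format is an assumption here.
ACCOUNT_ID = re.compile(r"[A-Z]{2}\d{8}")

def is_valid_account_id(value) -> bool:
    """The input must demonstrate it is well-formed; all else is rejected."""
    return isinstance(value, str) and ACCOUNT_ID.fullmatch(value) is not None
```

A blocklist has to anticipate every possible attack; the allowlist above rejects anything unanticipated by default, which is exactly the defensive posture the article argues for.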
Our infrastructure is never going to be impregnable. Even the strongest perimeter barriers can be breached by one innocent user clicking on the wrong link. Similarly, our business models aren’t invulnerable. The answer is to have each component of the information supply chain designed in a defensive way such that it assumes the worst of even trusted systems, users and competitors.
Businesses building for the worst case, planning to run even when seriously compromised, will find that they more easily weather cyber issues and competitive disruption. Neither should ever come as a surprise.