Less is more

Voyager 1 has ventured farther than any human-made machine, now over 25 billion kilometres from Earth, drifting through the vast emptiness of interstellar space.  Since its launch in September 1977, it has continued to transmit signals back to Earth, sending scientific data from the freezing void beyond our solar system.  Yet, astonishingly, on board is a computer with less processing power than a pocket calculator and less than 70 kilobytes of memory, not even enough to hold the web page or email you are reading now.  Through ingenuity and relentless reprogramming, NASA’s mission control has eked out every bit of capability, repurposing the same limited hardware to deliver decades of discovery, with Voyager 1 still whispering its messages back to Earth.

Despite our ability to do more with less, the software side of the technology industry is doing the exact opposite.  Software has become increasingly inefficient, especially in enterprise and cloud environments.  Developers now rely on abundant processing power and memory, often trading optimisation for speed of development and new features.

This unfortunate trend is captured by “Wirth’s Law”, which observes that “software speed is decreasing more quickly than hardware speed is increasing”.  The resulting “software bloat”, applications growing larger and slower without commensurate benefits, has real consequences.  It drives up costs (from wasted cloud spend to IT maintenance) and expands cyber risk by increasing complexity.

Consider how far system requirements have drifted from necessity.  In 1995, Windows 95 ran capably on a 25 MHz processor with just 4 MB of RAM.  More than 25 years later, Windows 11 demands a dual-core 1+ GHz CPU and 4 GB of RAM, a thousandfold increase in memory just to run a basic operating system.  While hardware has improved, much of this demand stems not from richer functionality but from layers of abstraction, background processes, and bloated code.  The gains in user experience have been incremental, but the resource costs have skyrocketed.

Why has efficiency taken a back seat?  The answer often lies in incentives.  In enterprise software, the drive to deliver visible, customer-facing features usually outweighs the quieter work of refining performance or reducing footprint.  Programmers may want to write quality code, but the market doesn’t care.  In a commercial environment where value is measured by functionality rather than elegance, efficiency is seldom rewarded.

The tech industry isn’t uniformly guilty of resource mismanagement.  Engineers have long debated the balance between capability and efficiency, often advocating for streamlined solutions.  This debate isn’t merely academic; it’s influencing the future of processor design.  The industry is increasingly gravitating toward Reduced Instruction Set Computing (RISC), exemplified by modern ARM processors.  By emphasising a concise set of swift, low-power instructions, RISC achieves greater efficiency, making it ideal for mobile devices, embedded systems, and emerging cloud workloads.  Conversely, Complex Instruction Set Computing (CISC), which underpins x86 processors from Intel and AMD, employs more intricate instructions that simplify software optimisation but require increased power and hardware complexity.  While CISC continues to dominate traditional PCs and servers, the growing momentum behind RISC indicates a future where efficiency prevails over sheer complexity.

Now, along comes AI, and with it, another fork in the road.  One path is deeply inefficient: using AI as a live interpreter, embedded within applications, constantly consuming compute resources to do work that could be done once, ahead of time.  But there’s a better alternative.  Used wisely, AI is a tool for code generation, performance tuning, and automation, serving as a force multiplier rather than a drain.  In that role, it has the potential to reverse some of the bloat it might otherwise accelerate.  Like any tool, it’s not the technology itself but how we choose to apply it that determines whether we compound the problem or help solve it.
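The “do it once, ahead of time” principle is simple to see in code.  Here is a minimal Python sketch (the lookup-table example and all names are my own illustration, not drawn from any particular product): one version rebuilds the same derived data on every request, while the other computes it once at startup and merely reuses it afterwards.

```python
# Wasteful: rebuild the same derived table on every single call,
# paying the full construction cost each time a request arrives.
def label_wasteful(code: int) -> str:
    table = {i: f"status-{i}" for i in range(1000)}  # rebuilt per call
    return table[code]

# Lean: do the expensive work once, ahead of time, then reuse it.
# The per-call cost collapses to a single dictionary lookup.
STATUS_TABLE = {i: f"status-{i}" for i in range(1000)}

def label_lean(code: int) -> str:
    return STATUS_TABLE[code]

# Both return identical results; only the resource profile differs.
print(label_wasteful(42))  # status-42
print(label_lean(42))      # status-42
```

The same trade-off scales up: an AI model invoked live inside every transaction is the wasteful version, while AI used once, at build time, to generate or tune efficient code is the lean one.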

The power of constraint isn’t limited to machines. It’s a principle that extends to how we think and communicate.  In 2001, linguist Sonja Lang created Toki Pona, an experimental minimalist language built around just 120 to 137 root words.  Like Voyager 1’s limited memory, this tiny vocabulary forces efficiency, requiring speakers to combine words creatively to express nuance and complexity.  Inspired by Taoist thought and the idea that simplicity reveals truth, Toki Pona offers a linguistic parallel to lean computing.  Where bloated systems obscure intent, constraints, whether in language or code, can sharpen it.  In both cases, working with less doesn’t limit possibility; it encourages ingenuity.

I wrote some years ago about one of last century’s tech industry heavyweights, Philippe Kahn, a founder of Borland, and his pivot toward lightweight, efficient software by founding Starfish.  His foray into small, lean applications now lives on in the background of most mobile devices, quietly powering data synchronisation.  Just as it was then, each new wave of technology brings fresh opportunities to simplify, refine, and do more with less.  Those opportunities are emerging again, and this time, we’d be wise not to waste them.
