We are no closer to the singularity
There is no doubt that 2023 has been the year that AI, through Generative AI, has truly captured the public imagination. As we wrap up the year, I thought it useful to look back at a piece I wrote nearly a decade ago, one that few current followers of my posts are likely to have read.
Although written back in December 2014, my comments are as relevant now as they were at the time, which suggests we really are no closer to the so-called singularity. Personally, I think that’s a good thing!
Over the millennia we have been warned that the end of the world is nigh. While that will no doubt be true one day, warnings by Stephen Hawking in a piece he co-authored on artificial intelligence don’t fill me with fear (see Transcending Complacency on Superintelligent Machines). I disagree with the commentators warning that machines will outsmart us by the 2030s and that a Terminator-style race between us and them could follow.
Hawking and his co-authors argue that “[s]uccess in creating AI would be the biggest event in human history. Unfortunately, it might also be our last unless we learn how to avoid the risks.” They go on to compare artificial intelligence (AI) to an alien life form of superior intelligence who would owe us no comfort or future on this planet.
These comments relate to the so-called “singularity”, a term popularised by the writer Vernor Vinge for the point, somewhere in the vicinity of the 2030s, at which AI can outthink humans.
I have previously written about the limits of current AI research (see Your insight might protect your job). Although current techniques (which I argue are the second generation of AI) cannot scale to provide the cognitive leaps that are necessary for real insight, it would be wrong to assume that a third generation isn’t on the horizon.
Despite the potential for a third (yet to be imagined) generation of technology for AI, there are three reasons why I disagree that such machines will take over the world or even outsmart us.
1. It’s all about the user interface
Simply applying Big Data analytics to the content of the internet will not create a machine that is smarter than us. If a machine is to be of our world, it needs to be able to interact with it through a user interface.
The human brain only works well in conjunction with opposable thumbs, something few other intelligent animals can compete with us on. Regardless of how intelligent a dolphin is, the lack of a good interface means it can’t manipulate the world around it and, in turn, learn from those interactions.
Like previous generations of computing, it is all about the user interface. Robotics is likely to overcome these constraints, but current predictions of the internet coming alive due to its complexity are fanciful. Far from running out of our control, we are re-architecting our technology to remove the risk of runaway complexity by segmenting the systems that touch our physical world. This segmentation is like cutting off the closest thing that the internet has to opposable thumbs.
2. We will become the machine
Information, knowledge and intelligence directly equate to power. Humans never give up power easily and choose political alliances with adversaries rather than cede control.
Any competition between humans and machines is likely to follow the same lines. Rather than cede to machines, we will join with them. I’ve previously written about what might become the first direct neural interfaces (see Will the bionic eye solve information overload?). It is inconceivable that we won’t choose to augment our own brains with the internet in the coming decades.
Such a future virtually guarantees the supremacy of our species against any machine competition, but it is perhaps an uncomfortable one from our vantage point today.
3. We aren’t their competitors
Despite what you might read, we live the majority of our lives in the physical world. We eat food, enjoy socialising in person and pursue our hobbies in three dimensions.
Our computers live almost entirely in the memory of the machines we have made. They are creatures of the internet. While we visualise the internet through our browsers, apps and other tools, we are visitors to this space. We twist its contents to represent metaphors of the physical world (for example, paper for writing on and rooms for meeting in).
Some scientists argue that the virtual world is an entirely valid reality. Nick Bostrom has even gone so far as to wonder whether our own universe is the virtual world of some supercomputer experiment being run by an alien life form (see Are you living in a computer simulation?). If that is the case, we need to be very afraid of the alien “off” switch!
Regardless of the simulation argument, any virtual reality of the internet where AI may take shape is not our reality. It is as if we are of different universes, but like the multiverse that is regaining popularity in theoretical physics, we do have an increasingly symbiotic relationship.
> Simply applying Big Data analytics to the content of the internet will not create a machine that is smarter than us.
That’s not the only approach we’re working on. There’s a huge amount of ongoing work around creating architectures of cognitive diversity. Already, we’ve shown success in multi-modal approaches, which mix text and images (both analysis and generation). There’s no reason to believe we’ve run out of ideas or hit a dead end. We’ll add spatial, temporal, bodily, psychological, and social knowledge so our systems can switch between representations when they get stuck, just as we do.
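To make that concrete, here is a minimal sketch of the kind of text-and-image mixing I mean, using the openly available CLIP model through the Hugging Face transformers library. The checkpoint name, the local photo.jpg file and the candidate captions are illustrative assumptions for this post, not anything from the original piece.

```python
# A minimal sketch of multi-modal (text + image) analysis, assuming the
# Hugging Face transformers library and the public CLIP checkpoint below.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image; an illustrative stand-in
captions = ["a dolphin swimming", "a robot arm", "a page of handwriting"]

# Encode the image and the captions into a shared embedding space,
# then score how well each caption matches the image.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)

for caption, prob in zip(captions, probs[0].tolist()):
    print(f"{caption}: {prob:.2f}")
```

The point isn’t the specific model; it’s that a single system already reasons across two representations, pixels and words, which is the kind of cognitive diversity I expect the next generation to extend.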
There is no reason to believe that computers are incapable of doing everything people can do; no known law of physics or limit of computation indicates otherwise.