
When AI writes itself
We are not just fascinated by artificial intelligence itself, but by the idea of AI that can adapt and evolve on its own. This self-driven exponential evolution is the stuff of science fiction, from “The Terminator”—
“Skynet began to learn at a geometric rate. It became self-aware at 2:14 a.m…”
to Daniel Suarez’s “Daemon”—
“The Daemon was not merely a collection of computer programs; it was a new kind of life. It began to write new software. To change.”
This is bootstrapping: systems that build on themselves using their own code. In the early days of computing, compilers capable of compiling themselves were both mind-boggling and game-changing. Today, Gen AI is starting to breathe new life into this idea, transforming how software is built.
However, most AI coding tools today are handy assistants rather than self-generating, creating snippets of code on demand while leaving the assembly and control in our human hands. While useful, this piecemeal approach limits AI’s ability to adapt and evolve alongside the software it helps create. A bootstrapping environment takes AI beyond simple code generation. By learning iteratively and refining its outputs based on continuous feedback, AI becomes a dynamic and self-motivated partner!
While the latest trend, “Agentic AI”, makes great strides in workflow orchestration, it still falls short of true self-direction. So, I began asking myself: What would a bootstrapped AI programming environment look like? And how difficult would it be to build?
To find out, I built a simple proof of concept based on a small core of functionality that allowed the program to build on itself iteratively and expand its capabilities over time. The challenge I set myself was to write as little code as possible by hand, identifying the smallest set of functions needed before the AI could begin taking over.
First, I started with a command-line interface connected to an AI API. I chose Python and OpenAI’s “gpt-4o” model, writing a simple loop that passed my input to the model, printed its response, and repeated. After confirming this worked by asking some simple questions, I was ready to give the AI some control to modify itself.
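Stripped to its essentials, that starting point is only a few lines. The sketch below is not Oppy’s actual source, just an illustration of the kind of loop I mean, assuming the openai Python package (v1.x) and an OPENAI_API_KEY environment variable:

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

messages = []
while True:
    user_input = input("> ")
    if user_input.strip().lower() in ("quit", "exit"):
        break
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```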
To understand what modifying itself meant, my AI needed some context, so I created a conversation history initialised with background and instructions. The key things the AI needed were a copy of its own code, which I told it was its “virtual body”; instructions on consistently structuring and formatting code modifications; and the knowledge that its modifications would be run directly, without further edits, so it had to provide complete code rather than snippets for a human to modify. I named my AI “Oppy” after the “Opposable Thumbs” I was giving my AI bootstrap companion to manipulate and shape its virtual world.
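A minimal version of that initial context might look like the sketch below. The wording of the prompt is illustrative rather than Oppy’s real instructions; the essential ingredients are the program reading its own source and the insistence on complete, runnable replies:

```python
from pathlib import Path

SELF_PATH = Path(__file__)  # the program's own source file: its "virtual body"

SYSTEM_PROMPT = (
    "You are Oppy, an AI embedded in the program whose source code follows. "
    "That source is your virtual body. When asked to modify yourself, always reply "
    "with the complete, runnable source of the new version in a single fenced Python "
    "code block, never a partial snippet: whatever you return will be saved and run "
    "directly, with no human editing in between.\n\n"
    "Your current source code:\n\n" + SELF_PATH.read_text()
)

# The conversation history starts with this context and grows with every exchange.
conversation = [{"role": "system", "content": SYSTEM_PROMPT}]
```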
I then added a simple “save” command. This allowed me to take the last piece of code provided by the AI, after our discussions, and save it as a new version of its own program.
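In outline, the save command only has to find the last code block Oppy produced and write it to disk as the next generation of the program. The function below is a hypothetical sketch; the versioned file name is my illustration rather than the scheme Oppy and I actually settled on:

```python
import re
from pathlib import Path

FENCE = "`" * 3  # the code-block delimiter Oppy is asked to use

def save_new_version(conversation, version):
    """Extract the last fenced code block from the most recent assistant reply
    and write it out as the next generation of the program."""
    last_reply = next(
        (msg["content"] for msg in reversed(conversation)
         if msg["role"] == "assistant"),
        "",
    )
    pattern = FENCE + r"(?:python)?\n(.*?)" + FENCE
    blocks = re.findall(pattern, last_reply, re.DOTALL)
    if not blocks:
        raise ValueError("no code block found in the last reply")
    new_path = Path(f"oppy_v{version}.py")
    new_path.write_text(blocks[-1])
    return new_path
```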
However, I needed to implement some version control discipline. After a number of iterations, I found that code changes I had not agreed with Oppy were slipping into the program and propagating through multiple generations before being spotted. Some of these mutations were genuine hallucinations, while others, Oppy claimed, were improvements it had made an executive decision to add!
Despite a few hiccups, I had now moved from working on the program as the programmer to working with the program as a partner. Together, Oppy and I iteratively added functions, including the ability to save the conversation history between sessions, which created a much richer context and a sense that we were gradually increasing Oppy’s understanding of our joint goals. We also added the ability to create and update program files other than Oppy’s own, so that we could build libraries and better-structured code.
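Persisting the history is the simplest of these additions: the message list is serialised to disk at the end of a session and reloaded at the start of the next one. Something along these lines, with an illustrative file name, is all it takes:

```python
import json
from pathlib import Path

HISTORY_PATH = Path("oppy_history.json")  # illustrative file name

def load_history(default_system_prompt):
    """Restore the conversation from a previous session, or start afresh."""
    if HISTORY_PATH.exists():
        return json.loads(HISTORY_PATH.read_text())
    return [{"role": "system", "content": default_system_prompt}]

def save_history(conversation):
    """Persist the full message list so the next session keeps its context."""
    HISTORY_PATH.write_text(json.dumps(conversation, indent=2))
```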
But Oppy would only be useful if it could build capability beyond improving itself. To do this, we worked together to make the program threaded, so that Oppy could continue to self-improve while also working with me to create end-user solutions. The first example we built was a Windows support assistant, a diagnostic tool for our household laptops.
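One simple way to arrange this, and roughly what “threaded” means here, is a background worker that pulls longer jobs off a queue while the main thread stays in the conversation loop. This is a sketch of the pattern rather than Oppy’s exact design:

```python
import threading
import queue

task_queue = queue.Queue()

def background_worker():
    """Run longer jobs (building or testing an end-user tool, say) off the main
    thread so the interactive conversation with Oppy is never blocked."""
    while True:
        task = task_queue.get()
        if task is None:        # sentinel value used to shut the worker down
            break
        task()                  # each job is just a callable in this sketch
        task_queue.task_done()

worker = threading.Thread(target=background_worker, daemon=True)
worker.start()

# Inside the chat loop, long-running work is queued instead of run inline, e.g.:
# task_queue.put(lambda: run_diagnostics())   # run_diagnostics is hypothetical
```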
In theory, Oppy could work with each family member to evolve the support tool installed on their machine with Oppy still embedded. This suggests that, in the future, some sort of network intelligence might help keep our program instances in sync while pooling our collective interactions.
I also taught Oppy to dream. Just as human dreams consolidate our memories from the day, Oppy learnt to consolidate the increasingly extensive conversation history into key insights and remove extraneous details such as early iterations of code that it generated.
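Mechanically, “dreaming” is just another model call: the older messages are summarised, and the history is rebuilt from the original system prompt, the summary, and the most recent exchanges. A rough sketch, in which the function name and the cut-off of ten recent messages are my own choices:

```python
def dream(client, conversation, keep_last=10):
    """Condense older messages into a short summary so the history stays small,
    keeping the system prompt, the summary, and the most recent exchanges."""
    old, recent = conversation[1:-keep_last], conversation[-keep_last:]
    if not old:
        return conversation  # nothing worth consolidating yet
    summary_request = [
        {"role": "system", "content": "Summarise the key insights, decisions and "
         "goals from this conversation. Drop superseded code and dead ends."},
        *old,
    ]
    summary = client.chat.completions.create(
        model="gpt-4o", messages=summary_request
    ).choices[0].message.content
    return [
        conversation[0],  # the original system prompt
        {"role": "system", "content": "Memory of earlier sessions: " + summary},
        *recent,
    ]
```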
Philip K. Dick asked in his novel of the same name, “Do androids dream of electric sheep?” Perhaps understanding the dreams and motivations of our AI partners will make us better collaborators and avoid the common pitfalls found across science fiction!