Agentic elephants

Ask ChatGPT how to boil an egg or write a limerick, and it will usually do a fine job.  Ask any large language model to solve Wordle or invent a new kind of joke, and things start to wobble.  If you’ve ever wondered what is really going on inside our AI tools, you are not alone.  Perhaps they’re not so much like us as like the owner of arguably the most successful brain of the last million years: the elephant.

In their recent book, The Gambling Animal (Profile Books), Glenn Harrison and Don Ross argue that elephants and humans evolved similarly powerful brains that specialised in very different directions.  Rather than imagination, Harrison and Ross put creative risk-taking at the centre of the story.  Elephants developed an enlarged cerebellum, giving them remarkable, perhaps unmatched, memory capabilities.  In contrast, our ancestors’ brains expanded in the frontal cortex, laying the groundwork for abstract thought, imagination and a willingness to take risks.  While we tend to be biased by the runaway success of humans in recent millennia, it’s worth remembering that elephants thrived on their own terms: stabilising food sources, maintaining populations, and adapting to climate swings that nearly drove our ancestors to extinction.

Understanding how different brains work helps us manage today’s AI-powered, agentic workforce.  I once joked that in my next life I’d prefer to lead a team that just followed instructions, but in reality that would be a nightmare.  It’s precisely the willingness to take initiative that allows human teams to innovate and solve complex problems autonomously, even if that autonomy occasionally frustrates us as their leaders.

Neither elephants nor Gen AI is stupid, but neither takes the sorts of risks that lead to diverging from the instructions they’ve been given.  It’s this risk-taking divergence that helps smart teams navigate ambiguity.  The ability to imagine futures that have never existed, and to act toward them, remains beyond today’s AI systems, limiting the role agentic models can play in open-ended scenarios.

Agentic behaviour matters because word prediction alone often leads models into dead ends, producing plausible continuations without a clear path to complete thoughts, coherent paragraphs, or meaningful answers.  Yet large language models show few signs of the wildcard risk-taking we take for granted in humans.  Their strengths are memory-like: much as an elephant’s cerebellum does, they store vast amounts of data and surface patterns with remarkable fluency.  They don’t imagine new worlds; they remix the past into statistically plausible variations.
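
To make “word prediction alone” concrete, here is a minimal sketch of greedy next-word generation, using the small, publicly available GPT-2 model via the Hugging Face transformers library; the model and prompt are illustrative choices, nothing more.

```python
# A minimal sketch of pure next-word prediction with GPT-2
# (illustrative model and prompt, not a recommendation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The elephant never forgets", return_tensors="pt").input_ids

# Greedily append the single most probable next token, twenty times.
# Each step only asks "what word is likely here?" with no plan for
# where the sentence as a whole is heading.
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits           # (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()         # most probable next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Every pass through the loop is blind to any larger goal: the model only ever scores the next token, which is exactly the dead-end behaviour described above.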

Together, these ideas suggest that intelligence is not a single quality to be replicated, but a spectrum shaped by purpose, context and evolution.  Elephants show us the power of deep memory; humans, the unpredictable force of imagination.  Large language models, for all their fluency, mirror only fragments of this spectrum.  They recall and recombine, but they do not yet invent or intuit.  To build machines that move beyond surface-level imitation, we must first accept that intelligence is not just about what a system can output, but about the inner shape of thought itself.

There’s a flicker of something like imagination in these models, but it comes from the sampling machinery around the model rather than from the model itself.  Each response is generated with a built-in element of randomness, often controlled by a setting called “temperature.”  This parameter determines how adventurous the model is in choosing its next word.  Lower temperatures keep responses predictable and conservative (elephant-like), while higher ones allow for more surprising, even seemingly creative, outputs.  But this spark isn’t intrinsic: it’s as if we gave a memory expert a dice roll to decide how creative to be.
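
Here is a minimal sketch of that mechanic in plain Python with NumPy; the four candidate-word scores (logits) are invented for illustration.

```python
# Temperature sampling over raw model scores (logits).
# The logits below are made up purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([4.0, 3.5, 1.0, 0.2])   # hypothetical scores for 4 candidate words

def sample(logits, temperature):
    # Dividing by temperature reshapes the distribution: low values
    # sharpen it toward the top choice (elephant-like), high values
    # flatten it, letting unlikely words through (the dice roll).
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs), probs

for t in (0.2, 1.0, 2.0):
    _, probs = sample(logits, t)
    print(f"temperature {t}: {np.round(probs, 3)}")
```

At temperature 0.2 nearly all the probability piles onto the top word; at 2.0 the also-rans get a real chance, and that is where the seemingly creative outputs come from.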

And that should give us pause.  If we want AI to be more than oversized memory machines, clever tuning won’t be enough.  What we need next may not be faster or larger models, but deeper insight into what’s happening inside them.  Today, we rely on visualisations like attention maps and token attributions, crude but useful, like poking a brain with a stick and calling it neuroscience.  Just as MRI transformed how we understand the human mind, we may need a comparable leap to truly grasp what AI is doing.  We’ve built something powerful, but still poorly understood.  If we want these systems to move beyond elephantine memory and toward something like imagination, we’ll need new tools not just to shape AI, but to see it clearly.
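
For a taste of how crude today’s poking-with-a-stick really is, here is a minimal sketch that pulls attention maps out of GPT-2 with the Hugging Face transformers library; the model and sentence are illustrative choices.

```python
# Extracting attention maps from GPT-2 (illustrative model and input).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

inputs = tokenizer("Elephants remember, humans imagine", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, each of shape
# (batch, heads, seq_len, seq_len).  Averaging over heads in the
# final layer gives a crude map of which tokens attend to which.
attn = outputs.attentions[-1].mean(dim=1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
for i, tok in enumerate(tokens):
    top = attn[i].argmax().item()
    print(f"{tok!r} attends most to {tokens[top]!r}")
```

The result is a table of which tokens look at which: genuinely informative, but a long way from an MRI.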

In the meantime, we should appreciate the power of what we now have.  Gen AI gives us computing resources that are almost infinitely flexible, that include astounding elephantine memory, and that can be directed by almost anyone through simple prompts.  It isn’t a question of humans or AI; it’s humans leveraging AI capabilities to do more than we could ever do on our own.
