Post-truth surprises
Unexpected election results around the world have given the media the chance to talk about their favourite topic: themselves! With their experience running polls, the media are very good at picking the winner between two established parties or candidates, but are periodically blindsided by outsiders or by choices that break with convention. In most cases there were plenty of warnings, but it takes hindsight to make experts of us all.
Surprises are coming as thick and fast in business as they are in politics, and there are just as many people who get them right with perfect hindsight! The same polling and data issues apply to navigating the economy as to predicting electoral trends.
Oxford Dictionaries picked “post-truth” as their 2016 word of the year. The term refers to the selective use of facts to support a particular view of the world or narrative. Many argue that the surprises we are seeing today are unique to the era we live in. In reality, the selective use of data has long been a problem; the information age simply makes it more common than ever before.
For evidence that poor use of data has led to past surprises, it is worth going back to 1936, when a prominent US publication called The Literary Digest invested in arguably the largest poll of the time. The Literary Digest used its huge sample of more than two million voters to predict that the Republican challenger would easily beat the incumbent, President Roosevelt. After Roosevelt won convincingly, The Literary Digest’s demise followed shortly thereafter.
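The Digest’s failure is the textbook example of sampling bias: its sample skewed towards households it could reach, so sheer size could not rescue the prediction. A minimal simulation of the effect (the electorate split and the coverage rates below are hypothetical, chosen purely for illustration):

```python
import random

random.seed(0)

# Hypothetical electorate: 60% support candidate A, 40% candidate B.
population = ["A"] * 600_000 + ["B"] * 400_000
random.shuffle(population)

# Assumed coverage bias: the survey method reaches only 20% of A's
# supporters but 80% of B's (illustrative numbers, not the 1936 figures).
reachable = [v for v in population
             if random.random() < (0.2 if v == "A" else 0.8)]

def predicted_winner(sample):
    # Majority label in the sample.
    return max(set(sample), key=sample.count)

# Huge but biased sample: a "two million voter" poll in miniature.
biased_sample = random.sample(reachable, 100_000)
# Small but representative sample drawn from the whole electorate.
random_sample = random.sample(population, 1_000)

print(predicted_winner(biased_sample))  # the biased poll backs the loser
print(predicted_winner(random_sample))  # the small random poll gets it right
```

The biased sample is a hundred times larger, yet it is the small representative one that predicts the true winner: once the sampling frame is skewed, more data only makes you more confidently wrong.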
As humans, we look for patterns, but we are guilty of spotting patterns first in data that validates what we already know. This is “confirmation bias”, where we overemphasise a select few facts. In the case of political polls, the individuals or questions chosen often reinforce the assumptions of those doing the polling.
This is as true within organisations as it is in the public arena. Information overload means that we have to filter much more than ever before. With Big Data, we are filtering using algorithms that increasingly depend on Artificial Intelligence (AI).
AI needs to be trained (in effect, programming without programmers) on datasets that are chosen by us, leaving open exactly the same confirmation bias issues that have led the media astray. AI can’t make a “cognitive leap” to look beyond the world described by the data it was trained on (see Your insight might protect your job).
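The point about training data can be sketched with a toy model (the data and labels below are invented for illustration). A simple nearest-neighbour classifier confidently labels anything it is shown, even inputs far outside the world its curated training set describes; it never flags that it is out of its depth:

```python
# Curated training set: whoever chose it included nothing between 3 and 7,
# and nothing above 9. The model's "world" is only these four examples.
training = [
    (1.0, "small"), (2.0, "small"),
    (8.0, "large"), (9.0, "large"),
]

def predict(x):
    # Label of the nearest training example (1-nearest-neighbour).
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

print(predict(1.5))    # near the training data: a reasonable answer
print(predict(100.0))  # far outside it: still answers, just as confidently
```

However odd the input, the model projects its narrow training world onto it; the bias sits in the choice of `training`, not in the algorithm, which is exactly why the curation of training data matters.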
This is a huge business opportunity. Far from seeing an explosion of “earn while you sleep” business models, there is more demand than ever for services that include more human intervention. Amazon Mechanical Turk is one such example, where tasks such as categorising photos are farmed out to an army of contractors. Of course, working for the machines in this sort of model is also a path to low-paid work, hardly the future we would hope for the next generation.
The real opportunity in Big Data, even with its automated filtering, is the training and development of a new breed of professionals who will curate the data used to train the AI. Only humans can identify the surprises as they emerge and challenge the choice of data used for analysis.
Information overload is tempting organisations to filter available data, only to be blindsided by sudden moves in sales, inventory or costs. With hindsight, most of these surprises should have been predicted. More and more organisations are challenging the post-truth habits that many professionals have fallen into, broadening the data they look at, changing the business narratives and creating new opportunities as a result.
At the time of writing, automated search engines are under threat of a boycott by advertisers sick of their promotions sitting alongside objectionable content. At the turn of the century, human-curated search lost the battle with automation, but the war may not be over yet. As the might of advertising revenue finds its voice, demanding something better than automated algorithms can provide, earlier models may yet re-emerge.
It is possible that the future is more human curation and less automation.