The answer can be glimpsed through the extraordinary coup that played out at OpenAI, the once non-profit whose proto-robot conversationalist got everyone talking. Its board sensationally defenestrated CEO Sam Altman, only to reinstate him within days.
To make head or tail of this, you have to understand two warring schools of thought. Both agree that AI could usher in a new era of superintelligence – but they take very different stances on whether it’s a good thing or not.
The ‘doom-mongering’ wing asserts that this technology poses an existential risk to humanity. We’re just months away, they claim, from hitting the threshold of computing power beyond which a budding HAL 9000 could exponentially re-engineer itself to sublimity and ultimately close the pod bay doors for all of us.
Preventing this nightmare scenario is the number one priority for the ‘effective altruists’ who think it plausible. What started out as a utilitarian project for more rational philanthropy has become singularly focused on slowing AI development until it can be safely ‘aligned’ to human interests.
OpenAI’s board – apparently of this mindset – concluded that, despite Altman’s public requests for regulation, progress was disconcertingly fast. They showed him the door.
Meanwhile, an opposing movement insists that such fears are founded on sand – in more ways than one. ‘Effective accelerationists’ (styled ‘e/acc’) say regulation will only benefit monopolists and the West’s rivals – far better, they cry, to charge full steam ahead.
The history of knowledge
With Altman back at the helm of OpenAI following a staff revolt, are we now hurtling towards the dawn of omniscient machines?
No – at least not yet. While generative AI is remarkable – with enormous potential for good and ill – these programmes are not about to become digital demigods.
To understand why, we must step back and consider the history of knowledge.
For thousands of years, human life was precarious, often violent, and commonly a drudge. But that all changed with the Enlightenment.
As a ‘clockwork’ understanding of the physical world built more reliable bridges and accurate ballistics, it became hopeless to cleave to ancestral authority. ‘Nullius in verba’ (‘Take nobody’s word for it’) went the motto of the newly formed Royal Society, and the Industrial Revolution was set in motion.
Crucially, it was not natural resources that lit the spark – after all, fossil fuels had been lying dormant the whole time. Instead, it was a new mode of generating knowledge that made all the difference.
But empiricism became, ironically, almost as sacrosanct as received wisdom had been. It is still the prevailing philosophy of science today, despite its defects.
The central flaw in the empirical approach is an unhealthy deference to data. The implicit assumption is that past performance is indeed a guide to future performance – the very opposite of the investment regulator’s refrain.
Self-described rationalists enjoin us to ‘be more Bayesian’ – to ‘update our prior beliefs’ according to new evidence. On the face of it, this sounds eminently sensible. But it is a beguiling folly.
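The mechanics of the update the rationalists recommend can be sketched in a few lines of Python. This is a hypothetical illustration only; the prior, likelihood and alternative-hypothesis figures are invented for the example:

```python
# A minimal Bayesian update: revising a prior belief in light of new
# evidence. All probabilities below are invented for illustration.

def bayes_update(prior, likelihood, evidence_prob):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Prior belief that an observed trend will continue: 50%.
prior = 0.5
# Probability of seeing this month's data if the trend holds: 80%.
likelihood = 0.8
# Total probability of the data: P(E) = P(E|H)P(H) + P(E|~H)P(~H),
# assuming P(E|~H) = 0.4.
evidence_prob = 0.8 * 0.5 + 0.4 * 0.5

posterior = bayes_update(prior, likelihood, evidence_prob)
print(round(posterior, 3))  # 0.667 -- belief duly 'updated' upwards
```

Note that the arithmetic is only as good as the hypotheses fed into it: if the true explanation is not among them, no amount of updating will find it, which is the article's complaint.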
The problem is that a data-driven mindset is prone to extrapolating patterns that may well not hold, by neglecting the importance of explanatory reasoning. Outside the cleanly delineated borders of game theory, people can, and habitually do, find ways to change the rules.
Karl Popper showed that induction is but a seductive fallacy, yet we are still swamped by articles drawing universal conclusions from mere snippets of history. Samples can’t hold a candle to better conjectures – that is, better explanations of how and why things have been the way they have been. Unlike the data, conjectures can stretch beyond the past tense into what could be.
What’s all this got to do with generative AI? Or investments, for that matter?
The crux is that the programmes themselves are empirical. They ‘predict’ the best response to a prompt by maximising its statistical resemblance to training data.
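This statistical mode of ‘prediction’ can be caricatured with a toy bigram model. Real large language models are vastly more sophisticated, but the epistemological point is the same; the miniature corpus here is invented for the example:

```python
from collections import Counter, defaultdict

# Toy bigram model: 'predicts' the next word purely from co-occurrence
# counts in its training data -- it has no explanatory model of the world.
corpus = "the apple falls the apple falls the moon orbits".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict("apple"))  # 'falls' -- the commonest pattern in the data
```

Such a model can only recombine patterns already present in its training data; it cannot leap from a falling apple to a universal law of gravitation.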
They cannot, therefore, make creative leaps beyond those already taken – unlike, say, Newton, who, when ‘prompted’ by a falling apple, inscribed the laws of the heavens.
Furthermore, the effective altruists themselves are dogmatically Bayesian. This is precisely why they fear a silicon overlord so much – they think that ChatGPT and its brethren have the same epistemological software as the human brain.
If that were true, then it really would be only a matter of sufficient number-crunching power to forge an artificial mind. But it is a misconception.
It is the same misconception made by those who think the future is a mere extension of the past and make detailed, otherwise highly intelligent plans which do not survive contact with reality.
And now we return to the world of investments. For it is the same category of error made by those attempting to pick future winners. Some asset managers do enjoy success in this regard, but no more than we would expect through sheer chance. Information not already reflected in market values is extremely hard to come by.
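The ‘sheer chance’ point can be made concrete with a quick simulation. The parameters here — 1,000 managers, a coin-flip chance of beating the market in any given year, a ten-year horizon — are assumptions for illustration, not market data:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Simulate 1,000 fund managers, each with a 50% chance of beating the
# market in any given year, over a 10-year horizon.
managers, years, p_beat = 1000, 10, 0.5

# Count managers who beat the market every single year by luck alone.
streaks = sum(
    all(random.random() < p_beat for _ in range(years))
    for _ in range(managers)
)

# Analytically we expect 1000 * 0.5**10, i.e. about one manager, to
# compile a 'perfect' decade with no skill whatsoever.
print(streaks)
```

In other words, a handful of unblemished track records is exactly what luck alone would produce, so a track record by itself cannot distinguish skill from chance.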
Moreover, natural disasters and human behaviour are both unforeseeable. But as time goes on, only the unpredictable influence of the latter can grow without end – if more and more of the future is determined by choice.
Should that happen, more and more wealth would be generated too – gold spun from knowledge. People find new ways to solve problems and eke out more from less. They do not merely consume.
Ingenuity through explanatory knowledge is the reason why artificial general intelligence (AGI) is still a long way off, why prophecy (including stock-picking) is impossible, and why the case for investing rests emphatically not on a hopeful projection of the past, but on a joint ticket with human progress – on our freedom to err and improve in peace.
It’s also why the better our investments do, the more uncertain they’ll become. If we’re lucky, our futures will become more and more unforeseeable – as we make our own luck.
For more insights on Artificial Intelligence, explore our Creating The Future video collection – hear from experts on how it may shape health, education, work, security and much more.
Investments can go up and down in value and you may not get back the full amount originally invested.