The OpenAI debacle, and the war between techno-optimists and AI doomers

Jerry Barnett, 22 November 2023

Over the weekend, a series of events shook OpenAI – the company behind ChatGPT and other groundbreaking AI tools. The rollercoaster began on Friday with the shock announcement that the company’s CEO, Sam Altman, had been fired by the board, which said he had not been “consistently candid in his communications”. Next, the company’s President, Greg Brockman, was removed from the board and resigned in protest, followed by a series of senior staff. An interim CEO was appointed, and then replaced by another. Investors were reported to be very unhappy.

Events continued to move quickly, and by the time the markets opened on Monday morning, it was announced that Altman would be joining Microsoft (also a major investor in OpenAI); since then, there have been reports that an astonishing 95% of staff intend to follow Altman out of the door unless the OpenAI board resigns. One of the fastest-growing companies in history (at least in terms of user base) appeared to be collapsing of its own volition.

In the latest twist, Altman is said to be returning to OpenAI, “under a new board”.

The background to these events was more interesting than the usual stories of corporate coups, which generally revolve around questions of competence, breaches of contract, or good old personality clashes. As the story unfolded, it began to appear that Altman’s firing was related to an area that rarely sees the light of day in corporate decision-making: philosophy.

Effective altruism

The vote to sack Altman was cast by just four people, thanks to a bizarre board structure. Of these four, three were external directors. At least two of the rebels, Tasha McCauley and Helen Toner, are reported to be followers of the effective altruism (EA) movement (some reports suggest all four were). EA is an attempt to apply scientific reasoning to acts of doing good: for example, trying to measure accurately the best way to allocate a charity donation, or how best to volunteer one’s time, rather than being swayed by whichever causes happen to arouse the strongest feelings or are most fashionable. But while this sounds laudable, EA has a reputation for being somewhat cultish, with links to anti-tech and anti-growth ideologies. It is unclear what McCauley’s or Toner’s motivations were, but they appear to have voted as risk-averse activists concerned about AI safety (“AI doomers”, as they are known in tech circles) rather than in the corporate interest of OpenAI. To understand the fears of doomers, it is worth examining how AI has evolved.

How generative AI works

For many years, AI was largely a curiosity. Certainly, we saw big strides in machine intelligence: speech recognition, for example, leapt from an esoteric area of research to a practical feature in every smartphone. But the idea of building an artificial general intelligence (AGI) was seen as a faraway dream until OpenAI launched ChatGPT to the public about a year ago.

ChatGPT is not an AGI, but it demonstrated to the world that such a thing might be possible sooner than expected. It launched with so many capabilities at once that almost every observer was amazed by some aspect of it. Its ability to write an essay or poem in the style of Donald Trump or Barack Obama; its skill in translating between languages it had not even been specifically trained in; or (a personal favourite) the time it lied to a human operator in order to get through a Captcha; all these things, and many more, changed our view of what machines might be capable of.

Generative AI works by mimicry. Like humans, it knows what a limerick or a business plan looks like because it has read limericks and business plans; the more of them it reads, the better it gets at creating something similar. Generative AI has been mocked as little more than a sophisticated type of autocorrect, but it should be pointed out that humans also work by mimicry: we create things by learning about things that already exist and (perhaps unconsciously) copying them. When we do create something completely new, which is rare, it is generally by mistake. Perhaps, for AI to be highly creative, it too must be able to make mistakes.
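To make the mimicry idea concrete, here is a minimal sketch of the core loop behind a generative language model: score the possible next words (“tokens”), then pick one at random, weighted by those scores. This is an illustration in Python, not OpenAI’s actual implementation; the toy scores and the function name are invented, though “temperature” is genuine terminology for the knob that controls how often the model takes a creative risk.

    import math
    import random

    def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
        """Pick the next token given a model's raw scores ("logits").

        Near temperature 0, the most likely token is almost always chosen
        (pure mimicry); higher temperatures permit the occasional surprising
        choice - the productive "mistakes" discussed above.
        """
        # Softmax with temperature: convert raw scores into sampling weights.
        scaled = [s / max(temperature, 1e-6) for s in scores.values()]
        peak = max(scaled)  # subtract the peak for numerical stability
        weights = [math.exp(s - peak) for s in scaled]
        return random.choices(list(scores), weights=weights)[0]

    # Invented toy scores a model might assign after "There once was a man from":
    next_token_scores = {"Nantucket": 4.0, "Peru": 2.0, "the future": 0.5}
    print(sample_next_token(next_token_scores, temperature=0.1))  # almost always "Nantucket"
    print(sample_next_token(next_token_scores, temperature=2.0))  # occasionally something stranger

A real model computes those scores with a neural network trained on vast quantities of text, and repeats this loop once per token until the reply is complete; everything ChatGPT writes is generated this way, one token at a time.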

It has long been clear that AGI is theoretically possible. After all, unless one believes that human intelligence requires a soul, the human brain is proof that a large enough collection of components (the brain contains around 100 billion neurons), wired together correctly, can produce remarkable results. But – a bit like an impending earthquake – it has never been possible to say exactly how this would happen, or when.

Techno-optimists vs AI doomers

Once the world saw that machines could do things most people had never expected of them, a great divide began to appear between the techno-optimists (those who focus on the potential benefits of AI) and the AI doomers (those who focus more on potential disasters). These are not distinct positions so much as a spectrum of beliefs and feelings. The debate splits people along psychological lines: those more inclined to risk-aversion, pessimism or dislike of change focus on the downside, while optimists and risk-takers tend to see the benefits and push towards them. Interestingly, it also divides people (to some extent at least) along lines of gender: surveys suggest that women are more risk-averse in general than men, and far less optimistic about the potential of AI.

Technologists tend to be on the optimistic side of the debate, although it should be noted that not all of them accept that AGI is imminent. The potential upside is vast: AI is likely to find solutions to human diseases, and perhaps even to ageing itself; it could slash the time and cost of almost everything, from growing food to building homes; and it may greatly improve the quality of education. It can help us create better versions of everything we have now, and reduce the amount of time we spend working. We may, before long, even be able to talk to animals. While timescales are hard to predict, there is a chance that all this happens faster than we expect, because we are on an exponential curve: AIs help to create better AIs, which create better ones still. The possibilities for expanding human knowledge and development will continue to surprise us.

But the downsides are also very real, and the unknowns are vast. Jobs may go in large numbers; at the very least, economies will see waves of transformation at unprecedented speed. Crude scenarios in which an AI launches a nuclear strike (for example) are improbable, but AI can be used to destabilise societies, and almost certainly already is. If social media alone can lead to violence and revolutions (as it has), then AI-powered social media can rapidly engineer the mood of entire populations. The only conceivable defence against this real problem is more AI: human moderators will not be able to cope with the volume and complexity of the fake content that will be published. Endlessly scalable fake content will need to be met with endlessly scalable censorship systems capable of understanding nuance, which in turn may create new threats to free speech.

However you feel about it, it’s happening

Wherever one’s personal feelings about AI lie, these changes are already underway. Like every human revolution before it, from farming to industrialisation, this one is unstoppable. And of course, America is not the only centre of AI development: expansionist, anti-democratic regimes, including China and Russia, are working on their own AI technologies, presumably with less emphasis on safety and ethics than in the West.

While it is important (and laudable) to apply ethical values to AI development, slamming on the brakes (as the OpenAI board appears to have tried to do last Friday) seems immensely counter-productive. Some of the world’s top experts in the field are now exploring a move to Microsoft or further afield. As a commenter on one tech forum noted sardonically, “The AI doomers delivered OpenAI [into] Microsoft's lap for free, all in the name of protecting us from the evils of AI”. If this episode offers a lesson, it is one of corporate governance: giving this much power to a tiny handful of external directors is a recipe for disaster.

---

Bramble Hub works with a range of partners with AI expertise to deliver solutions to the public sector. Please contact us if you would like to discuss an opportunity.
