
Five conclusions after the chaos at OpenAI

Sam Altman is back at OpenAI and more powerful than before, but is that a good thing?

A few days after the palace drama at OpenAI, let's survey the ruins of this company whose mastermind, Chief Scientist Ilya Sutskever, has said that AI could herald the downfall of the world. It is far too often forgotten that Sutskever and his colleague Jan Leike, no slouch himself, published this text on OpenAI's official blog in July:

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction."

Ilya Sutskever and Jan Leike, OpenAI

And oh yes, they have over $10 billion in the bank to figure out whether they are going to save the world or end it. Yet there are few serious attempts to monitor, let alone regulate, OpenAI and its competitors.

Imagine Boeing developing a new plane with a similar PR text: 'this super-fast plane will fly on dirt-cheap organic pea soup and could help humanity make aviation accessible to all, but it could also backfire, crash and blow up the earth.' The chances of getting a license wouldn't be very good. With AI, things are different; the tech bros simply put the website live and see how it goes.

Move along, the turkey was great

What is the media reporting on now at OpenAI? About the Thanksgiving dinner where reinstated CEO Sam Altman sat down with Adam D'Angelo, one of the board members who had fired him just six days earlier. Both tweeted afterwards that they had a great time together. They are so cute.

Despite the media's tendency to quickly pick a hero and a villain when conflicts arise, the much-lauded Sam Altman is increasingly seen as someone who displays odd behavior from time to time. According to the Washington Post, Altman's dismissal had little to do with a disagreement over the safety of AI, as first reported, and mostly with his tendency to tell only part of the truth while trying to line his pockets left and right.

Even if they were Granny Smith green

Meanwhile, then, Altman is back with a new board about which there are many doubts. Christopher Manning, the director of Stanford's AI Lab, noted that none of the board members have knowledge of AI: "The newly formed OpenAI board is presumably still incomplete,” he told TechCrunch. “Nevertheless, the current board membership, lacking anyone with deep knowledge about responsible use of AI in human society and comprising only white males, is not a promising start for such an important and influential AI company."

I don't care what color or gender they are, even if they were Granny Smith green with three types of reproductive organs, but I would prefer that they understand the matter that their own experts say could push humanity over the cliff.

Five conclusions after the chaos

1. The AI war has been won by America.

Look back after a week of craziness and fuss at OpenAI and we see that Microsoft, the old board and the new board are all American. The competitors? Amazon, Google, Meta, Anthropic, you name them: American. The rest of the world watches, holds meetings and gives speeches, but it's a done deal.

2. Good governance is nice, but bad governance is disastrous.

By this I do not mean that the people who fired Altman were right or wrong, because no one knows that yet; the crux of their argument was that Altman had not given full disclosure, and if that is true, it remains a mortal sin.

But the root of the problem lay deeper. The OpenAI board had been appointed to safeguard the mission of the OpenAI Foundation, which, in a nutshell, was to develop AI to create a better world. Not to create maximum shareholder value, as has now become the goal. The problem arose from those conflicting goals.

3. Twitter, or X as it is now called, remains the only relevant social network in a crisis.

Elon Musk went on a rampage again last week, which appears to have cost him $75 million in revenue, but Altman and everyone else involved still chose X as the platform to tell their story. Not Threads or TikTok, although I would have liked to see this mud fight for power portrayed in dance.

4. Microsoft wins.

Even under Bill Gates I thought Microsoft was a funny name, because the company was neither micro nor soft back then either, but in the nearly ten years under CEO Satya Nadella, Microsoft has become a dominant force in all kinds of markets.

While Amazon, Google, Meta and also Apple struggle to develop a coherent AI strategy, Microsoft seems to have found a winning formula: it invests heavily in OpenAI, which runs on the Microsoft Azure cloud, so much of that investment flows straight back to Microsoft. Meanwhile, Microsoft enjoys the capital appreciation of its 49% stake in OpenAI.

5. AI should be tested and probably regulated

Precisely because companies like Microsoft, Google, Meta and Amazon also dominate in the field of AI, the development of AI must be carefully monitored by governments. The years of privacy violations, disinformation and abuse of power taking place through social media, for example, show that these companies cannot regulate themselves.

The tech bros' motto remains unchanged: move fast & break things. But let them do that on their own planet, not the current one. The potential impact of AI on the world is simply too great to let the mostly socially limited minds running tech companies make the choices for society.

An initiative like the AI-Verify Foundation can be a vehicle for achieving responsible adoption of AI applications. I close with the same quote as last week from OpenAI's Chief Scientist, Ilya Sutskever, which shows that the world's AI leaders seemingly can only hope that future AI systems will have compassion for humanity:

"The upshot is, eventually, AI systems will become very, very, very capable and powerful. We will not be able to understand them. They’ll be much smarter than us. By that time, it is absolutely critical that the imprinting is very strong, so they feel toward us the way we feel toward our babies."

By Michiel

At Blue City Solutions and Tracer, I try to develop solutions that are good for the bottom line, the community and the planet.