Categories
AI invest crypto

EU says to invest two hundred billion in AI, but how?

The European Union announced this week at the AI Action Summit in Paris that it will invest two hundred billion euros in the development of AI. Clicking curiously on the link leads straight to a deleted YouTube video: 'Video removed by the uploader'. These brainiacs are going to invest two hundred billion euros of taxpayer money in AI?

One striking aspect of the story, since serious plans are not yet available, is the creation of 'AI Gigafactories': large-scale data centers to serve as the backbone for European AI development. When politicians start talking about "hundreds of billions of investments" and empty phrases like "AI Gigafactories," because data centers are apparently not sexy enough anymore, it is advisable to be vigilant.

Of course, the European rhetoric is a reaction to the ambitious American Stargate project. That too is weighed down by a Boy Scout objective like "to build and develop AI - and specifically AGI - for the benefit of all humanity."

The communique states that priorities include “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all” and “making AI sustainable for people and the planet”.
It is as if Miss World and Buzz Lightyear were handing in a homework assignment together.

The Guardian wrote up a clear summary of the AI summit, with three things standing out: first, the global recognition that AI is having a huge impact on society and the economy; second, that developments in AI are accelerating; and, unfortunately, third, that there is no consensus on how to regulate developments internationally.

The fear among entrepreneurs in Europe is that bureaucrats without substantive expertise will distribute the planned budget, which will result in wasted money and slow implementation.

Smarter European approach: embrace open source AI

A better approach would be not simply to spend these funds on infrastructure or vague programs, but to invest in AI companies working with open-source technologies, inspired by (though not based on) China's DeepSeek. By starting with a fully open-source codebase, including transparent training data, the EU can build an AI ecosystem that is widely accessible to large companies, startups, researchers and hopefully even individual developers.

The most practical approach would be the creation of a fund to invest in AI applications that build on this open-source base. This would ideally be done in partnership with existing investment funds in the market to avoid wasting taxpayer money, rather than a top-down model in which the EU itself tries to drive innovation.

The current trend within AI shows that most investment is going to large language models (LLMs), with companies like Meta and Microsoft spending tens of billions a year on AI development. This means that if Europe is not more strategic with its investment, it risks falling behind.

A focus on open-source AI and a smart investment model, rather than a purely infrastructure-driven approach, could yet help Europe achieve a competitive and sustainable AI ecosystem. But if the strategy is not quickly translated into sharp tactical and operational decisions, this historic opportunity will get bogged down in inefficiency and political rhetoric.

Elon Musk's OpenAI bid not for real

Elon Musk has announced his intention to make a nearly $100 billion bid for OpenAI, but the question is whether this is a serious acquisition proposal or a strategic move to thwart his archenemy Sam Altman. Musk, who co-founded OpenAI but later left acrimoniously, vehemently opposes OpenAI's transition from a nonprofit to a commercial company. A bid of this size would make it more difficult for OpenAI to transfer the shares held by the nonprofit organization to regular commercial shareholders.

A major complication is that Microsoft owns 49% of the shares in OpenAI, meaning Satya Nadella's company has a decisive vote in any acquisition. For Microsoft, a sale would raise nearly $50 billion, but the company also has a strategic stake in OpenAI because most of its AI infrastructure runs on Microsoft Azure. This makes it unlikely that Microsoft will stand and cheer if OpenAI is acquired, unless a deal is struck in which Musk's AI company xAI, along with OpenAI, becomes a major customer of Microsoft.

Remarkably, Sam Altman himself owns no shares in OpenAI, giving him little direct influence over an acquisition. This highlights OpenAI's unusual governance model, with control largely in the hands of the foundation that founded the company. Musk's bid therefore seems less a serious attempt to acquire OpenAI and more a tactical move to disrupt Altman's plans and make OpenAI's future uncertain. In this situation, investors will surely be scratching their heads before they fork over the forty billion Altman is seeking at a three-hundred-billion-dollar valuation.

You need a search engine to make sense of Google Gemini's choices. 

AI UI is horrible

Amid all the fuss, you'd almost forget to take a good look at OpenAI's products. MG Siegler did not hold back about ChatGPT's dispiriting interface:

"Well, now we're up to eight options – six in the main drop-down and still those same two "left-overs" in the sub-menu. And technically it's nine options if you include the "Temporary chat" toggle."

At Google, the user interface (UI) is just as horrible. The makers of the most Spartan, and thus most successful, search engine ever, have managed to turn their ChatGPT competitor Gemini into an incomprehensible AI menu. It is downright woeful, because there are extraordinary capabilities hidden beneath this wretched interface. See, for example, how Google AI Studio phenomenally explains how Photoshop works.

So I asked Google's Gemini 1.5 Pro with Deep Research, what a name, to produce an investment strategy for the European Union based on literature research. A few minutes later, Deep Research produced this Google Doc. Far from perfect, but better than anything the EU has produced so far.

Ethereum under fire

Ethereum, for years the leader in the world of smart contracts and, after Bitcoin, the cryptocurrency with the highest market cap, is at a crossroads. Despite the rising Bitcoin price and optimism in the crypto market, especially since Trump's election victory, Ethereum remains far behind and is even trading lower than a year ago.

Ethereum's price is suffering from the rise of competitors such as Solana and Sui

What are the causes?

  • Lack of major updates: after "The Merge" (the switch from Proof-of-Work to Proof-of-Stake), there has been no new breakthrough.
  • Increasing competition: Solana, Sui and Aptos are gaining ground with faster and cheaper transactions.
  • Negative publicity: Ethereum founder Vitalik Buterin's recent tweet about communism and decentralization was taken out of context and caused unnecessary uproar.

Ethereum is still seen as a fundamentally strong blockchain, but it may lose more and more market share to newer platforms that are more responsive to users' current needs.

Huge livestream error, token price rises?

In the third episode of the NFA Podcast, which Nisheta Sachdev makes with yours truly, she surprised me with the news that NEAR Protocol's token price had risen after a team member accidentally shared the wrong screen of his computer during a livestream, unwittingly treating viewers to carnal intimacy of the eighteen-plus genre.

The crypto world is known for its unpredictable market reactions, but what happened next was exceptional even for crypto: the price of NEAR rose 5.6% to $3.50. While it cannot be proven that the livestream incident directly caused the price increase, it again raises the question of how much influence, if any, "fundamentals" have on the crypto market.

If a blunder like this can drive up the price, it means the market is guided more by hype than by the true value of a project. Even the Tinder Swindler, infamous since the Netflix documentary, is launching his own token. This is leading to increasing frustration among professional developers and investors in the blockchain world.

Nish explains the Near livestream incident

GameStop considers buying crypto

GameStop, the company that was bailed out by retail investors in 2021 during the WallStreetBets revolt, is now considering investing in Bitcoin and other crypto-assets. By the way, the movie about GameStop is particularly worth seeing, with splendid roles by Pete Davidson and Seth Rogen, among others.

San Francisco overrun by startup teenagers

When incubator Y Combinator recently threw a party, the trays went around with glasses of soda instead of alcohol: many startup founders were simply too young to drink legally. San Francisco's startup scene is flooded with very young AI entrepreneurs, many of whom left college to start their own companies.

The cost of a university education in the U.S. has risen so much that, despite the low success rate, entrepreneurship is a legitimate option. Outside the U.S., university often remains the more logical route, because the cost of a degree is much lower and the funding and exit opportunities for startups are not as great as in Silicon Valley.

That and much more in the third episode of the NFA Podcast, in which I also share how my experiment from last February of investing one hundred dollars, exclusively in tech stocks, has fared.

For the hasty viewer and clicker

00:00 Introduction to NFA Podcast and Hosts Nisheta and Michiel 

01:42 Surprising News in Crypto: Near Protocol Incident 

03:53 Market Reactions and Near Token Performance 

05:22 Ethereum's Market Sentiment and Fear Index 

08:09 Ethereum's Performance Compared to Other Blockchains 

09:29 Market Predictions and New Money Flowing In 

11:35 GameStop's Potential Move into Crypto 

12:42 Upcoming Launches: Tinder Swindler's Token 

13:06 Elon Musk's Bid for OpenAI 

14:44 The AI Summit and Global AI Treaties 

16:49 Youth and Startups: The College Dropout Phenomenon 

20:44 Market Spotlight: Insights and Predictions 

22:34 Investing Strategies and Personal Experiences 

24:44 Supermicro, Palantir and Nvidia 

25:20 Dutch Trance NFA Podcast Theme 

25:41 NFA Dutch Trance Theme Review 

25:59 Indian NFA Podcast Theme 

26:25 Indian NFA Theme Review

Categories
AI invest

Wall Street panic over DeepSeek exaggerated; Nvidia shares suffer historic record loss

On Sunday, I wrote that DeepSeek-R1 was a revolutionary, good and cheap AI product from China. But I had no idea that a day later Wall Street would react as if aliens had launched an attack on our planet.

The homepage of the Wall Street Journal on 'the day after' the DeepSeek crash

Yesterday, the technology sector experienced a sharp downturn, to put it mildly, with the chip sector hit the hardest. Nvidia's share price fell 16.9%, resulting in a loss of $593 billion in market capitalization. Broadcom saw a 17.3% drop, accounting for a loss of $198 billion. Advanced Micro Devices (AMD) lost 6.3%, a loss in value of $12.5 billion. Taiwan Semiconductor Manufacturing Company (TSMC) fell 13.2%, down $151 billion, and shares of Arm Holdings fell 10.2%, a $17 billion hit. Marvell Technology experienced the steepest decline, losing 19.2%, a whopping $20 billion. 

Apple smiling third

The drop pushed previously high-flying chipmaker Nvidia to third place in total market cap, behind Apple and Microsoft. Last Friday, it was still in first place. Apple now has the highest market value at $3.46 trillion ($3,460 billion), followed by Microsoft with $3.22 trillion and Nvidia with $2.90 trillion. Shares of Apple, which has less exposure to AI, rose 3% Monday, while the tech-heavy Nasdaq fell 3%.

Apple's rise is similar to a house rising in value because the neighbor's roof is on fire. Intrinsically, of course, nothing has changed in Apple's value, and a trade war with China still lurks, which would hit Apple hard. But that's for another day.

Barron's was the voice of reason yesterday

DeepSeek hits Wall Street

Investors blame the sell-off on the rise of DeepSeek, a Chinese company barely a year old that last week unveiled a revolutionary Large Language Model (LLM) named DeepSeek-R1. DeepSeek's model is comparable to existing models such as OpenAI's GPT-4o or Anthropic's Claude, but is said to have been developed at a fraction of the cost. It also costs customers a fraction of what ChatGPT and Claude cost.

This has rightly led to concerns in investor circles that the U.S. strategy of heavy investment in AI development, often referred to as a "brute force" approach, is becoming obsolete. This brute force method uses extensive computing power and large data sets to train AI models, with the goal of achieving higher performance due to its massive scale. It is a billion-dollar approach that I wrote about earlier.

DeepSeek 'the Sputnik moment for AI'

A Wall Street Journal editorial clearly summarizes the competitiveness of DeepSeek-R1 with a catchy example:

"Enter DeepSeek, which last week released a new R1 model that claims to be as advanced as OpenAI's on math, code and reasoning tasks. Tech gurus who inspected the model agreed. One economist asked R1 how much Donald Trump's proposed 25% tariffs will affect Canada's GDP, and it spit back an answer close to that of a major bank's estimate in 12 seconds. Along with the detailed steps R1 used to get to the answer."

Venture capitalist and former entrepreneur (Netscape) Marc Andreessen described the launch of DeepSeek-R1 as the Sputnik moment for AI; similar to the moment the world realized the Soviet Union had taken a lead in space exploration.

OpenAI reacts anxiously

OpenAI CEO Sam Altman put on a brave face:

"DeepSeek's R1 is an impressive model, particularly around what they're able to deliver for the price. We will obviously deliver much better models and also it's legitimately invigorating to have a new competitor! We will pull up some releases."

(I myself took the liberty of inserting the capital letters Altman avoids, because otherwise I find it too annoying to read.)

Altman puts on a brave face, but in the last sentence it appears that OpenAI is accelerating product releases under pressure from DeepSeek. Or without capital letters just like him: altman blinked. Legit, you know, bro.

Sweat-soaked body warmers

They can exhale and zip their body warmers back up on Wall Street: the panic looks exaggerated. The claim that DeepSeek developed its R1 model with only a $5 million investment is not verifiable, and the Chinese media are not known for their transparency, nor for their critical approach to Chinese initiatives.

Western companies are unlikely to adopt Chinese AI technology, given all the geopolitical tensions and regulatory constraints, especially in critical sectors such as finance, defense and government. Chief Information Officers are increasingly cautious about integrating Chinese technology into critical systems. In fact, it now only happens if there is no alternative.

Despite recent market volatility, major technology companies such as Microsoft, Google, Amazon and Oracle continue to rely on high-performance chips for their AI initiatives. No company is canceling its orders with Nvidia because DeepSeek has a different approach.

Because there is currently no Western equivalent of DeepSeek's R1 model that is fully open-source (as opposed to "open-weight," a term I discussed in my Sunday edition), these companies will continue to invest in expensive hardware and huge data centers.

This means that shareholders in companies such as Nvidia and Broadcom can expect a recovery in stock prices in the coming months, perhaps weeks.

This is how the main victims of the DeepSeek crash closed on Wall Street yesterday

Panic among the VCs

The real impact of DeepSeek's innovation will likely be felt more profoundly by venture capital funds that have poured billions into AI startups without clear revenue models or, as VCs always so delightfully know how to put it from the comfort of their armchairs: a clear path to profitability.

Earlier I highlighted the precarious financial situation of OpenAI, which seems headed for a $15 billion loss this year: that's $41 million per day, $1.7 million per hour and $476 per second. Partners at Lightspeed, which last week invested $2 billion in Anthropic, the developer of DeepSeek-R1 competitor Claude, at a valuation of as much as $60 billion, will have slept terribly last night.

Does DeepSeek dare a frontal attack?

DeepSeek's approach to AI development, especially its emphasis on efficiency and the possibility of local implementation, running the model on your own computer, is remarkable. But globally, the AI community can only benefit from this methodology if DeepSeek chooses to release its underlying code and techniques. So far, DeepSeek-R1's codebase has not been made public, raising questions about whether it will ever be. It's the difference between open-weight and open-source I wrote about on Sunday.
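
For readers who want to try that local angle themselves, here is a minimal Python sketch using the Hugging Face Transformers library to load one of the distilled R1 checkpoints. The exact checkpoint name is an assumption on my part; check the deepseek-ai organization on Hugging Face for the current model names.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed checkpoint name; verify on the deepseek-ai Hugging Face page.
    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "Explain the difference between open-weight and open-source AI models."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Reasoning models produce long chain-of-thought traces, so allow room.
    outputs = model.generate(**inputs, max_new_tokens=512)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))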

If China decided to fully open-source DeepSeek-R1, it would pose a massive challenge to the U.S. tech industry. An open-source release would allow developers worldwide to access and build upon the model, greatly reducing the competitive advantage of U.S. companies in AI development.

This could lead to a democratization of advanced AI capabilities, reducing reliance on closed models such as those from OpenAI and Anthropic, and on expensive infrastructure from Nvidia, Oracle, Microsoft and Amazon Web Services. Such a move would totally disrupt current market dynamics and force U.S. companies to completely change their strategies for funding AI research and development.

China rules, Wall Street pays

How big an impact the technology sector has on the U.S. economy was once again evident yesterday when the total loss on Wall Street was estimated at a trillion dollars: a staggering thousand billion dollars.

It leads to the ironic conclusion that the first week of "America First" President Trump ends with a moment when China can determine whether to throw the U.S. economy into disarray. Wall Street is now watching every move from Beijing like a deer in the headlights.

Categories
AI technology

DeepSeek revolutionary: good, cheap AI product from China

OpenAI launched the AI agent Operator, initially useful for delivery and shopping services, while scientists such as Jan Leike and Nobel Prize winner Geoffrey Hinton are once again warning of the dangers of AI. Image created with Midjourney.

DeepSeek-R1, a new large language model from Chinese AI company DeepSeek, with a website that looks like a sleep-deprived intern pressed "enter" too quickly, has attracted worldwide attention as a cost-effective and open alternative to OpenAI's flagship o1. Released on Jan. 20, whether or not coincidentally on the weekend that "tout Silicon Valley" was eagerly clinging to the coattails of power in Washington, R1 excels thanks to "chain of thought" reasoning which mimics the problem-solving ability of humans.

Unlike closed models such as OpenAI's o1 and Anthropic's Claude, which this week raised another $2 billion from investors who are throwing pudding against the wall in AI, hoping that some of it will stick, R1 is open-weight and published under an MIT license. That means anyone is free to build on the architecture. Unlike open source, the source code and training data used to build DeepSeek-R1 are not public.

The model was developed for just five million dollars through algorithmic efficiency and reinforcement learning, significantly less than o1, despite U.S. export restrictions on advanced GPU chips, especially from Nvidia, on which U.S. competitors are primarily developing. Its affordability, with API costs more than ninety percent lower than o1, thus makes advanced AI more accessible to researchers with limited resources. It also offers a free chatbot interface with Web search capabilities, surpassing OpenAI's current features.
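
To illustrate that price point, here is a minimal sketch of calling DeepSeek's API from Python. DeepSeek advertises an OpenAI-compatible interface; the base URL and model name below are assumptions, so check DeepSeek's own documentation before relying on them.

    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder key
        base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
    )

    response = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed name of the R1 reasoning model
        messages=[{"role": "user", "content": "Summarize the EU AI Act in three sentences."}],
    )
    print(response.choices[0].message.content)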

'Everyone is freaking out about DeepSeek'

By matching or even surpassing o1 in some benchmarks, R1 has highlighted China's advance in AI development. Its sudden rise has sparked discussions about the future of open, accessible AI and the need for international cooperation to move forward responsibly. 

International reactions to DeepSeek-R1 ranged from respect to dismay. Nature was analytical: 'DeepSeek-R1 performs reasoning tasks at the same level as OpenAI's o1 - and is open for researchers to examine.' MIT Technology Review remained tidy: 'The AI community is abuzz about DeepSeek R1, a new open-source reasoning model.' But VentureBeat said out loud what all of Silicon Valley was thinking: 'Why everyone in AI is freaking out about DeepSeek.'

By the way, anyone who asks DeepSeek about Tiananmen Square gets the reply: 'I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.' Asked about the situation of the Uyghurs, a very elaborate answer first appeared that even used the word genocide, but a few seconds later that text was replaced by: 'Sorry, that's beyond my current scope. Let's talk about something else.' DeepSeek wants to keep things light and breezy.

Stargate historic project in AI infrastructure

The focus on China's DeepSeek led to great chagrin from the American techno-elite, who wanted to use this very week to underscore American supremacy. OpenAI, Oracle, Japan's SoftBank and Emirates-based MGX are funding the Stargate Project, a $500 billion initiative described as the largest AI infrastructure project in history.

Announced by President Trump in the Oval Office, its goal is to build advanced data centers for AI in the US, which Trump says will create a hundred thousand jobs. They are a kind of Delta Works for AI. The project currently has one hundred billion dollars in direct funding, with the remaining investment spread over four years. The first huge data center will be built in Texas.

It has already led to bickering over Stargate funding between OpenAI CEO Sam Altman and Elon Musk. Forbes even made a timeline of the ongoing spats between Altman and Musk, who should just get into a boxing ring or a hotel room together.

Will the real MGX please rise

In all the excitement, it was especially comical that overexcited investors bought the wrong stock in the belief that it is part of Stargate: biotech company Metagenomi (ticker: MGX) saw its share price shoot up even though it is not involved in Stargate. The MGX that does participate in Stargate, Abu Dhabi's sovereign wealth fund, will have watched in wonder.

It would be quite a feat if Trump succeeds in getting foreign investors such as MGX and Japan's SoftBank to put hundreds of billions into U.S. infrastructure without the U.S. taxpayer contributing. Investor Bill Gurley (Uber) publicly questioned the public-private partnership, which is unusual by American standards. The main question is whether Stargate will be accessible to all and who ultimately makes the decisions. OpenAI CEO Sam Altman often has problems with governance.

OpenAI with AI agent: Operator

In all the fuss over DeepSeek and Stargate, the news that OpenAI this week introduced Operator was snowed under. Operator is an AI agent that can independently navigate web browsers and perform tasks such as online shopping, booking travel and making reservations. It marks the moment that AI agents enter the mass market.

Operator uses OpenAI's Computer-Using Agent (CUA) model, which mimics human interactions with Web sites by using buttons, menus and forms. OpenAI is working with companies such as DoorDash, Uber and eBay for Operator to ensure it complies with their terms of use. 

Despite all its potential, Operator has limitations with more complex tasks such as banking and complex web interfaces or CAPTCHAs. Right now, unfortunately, it is only available to U.S. users on the ChatGPT Pro subscription of two hundred dollars a month, so I have not been able to test it myself.

Operator an echo of General Magic

OpenAI's Operator, nearly thirty-five years after the fact, is very reminiscent of the legendary company General Magic, known as "the most important company to ever come out of Silicon Valley that nobody ever heard of." All of Operator's marketing copy seems to duplicate General Magic's slogans and claims from the early 1990s.

In the end, General Magic, which attempted to create a handheld computer with agent features before the Internet and digital mobile telephony got to mass adoption, proved too far ahead of its time. Like General Magic, Operator strives to integrate seamlessly into users' lives and function as a personal assistant and productivity booster.

For fans: a fine documentary was made about the rise and fall of General Magic; this is the trailer. The team behind General Magic was so special that dozens of books have been published about its members, and they even figure in a real feature film: Andy Hertzfeld was a prominent member of the team that developed the Apple Macintosh for Steve Jobs, Tony Fadell went on after General Magic to develop the iPod and co-create the iPhone at Apple, and Joanna Hoffman is such a special person that Kate Winslet went to great lengths to play her in Danny Boyle's film about Steve Jobs.

Leike and Hinton with different warnings

In all the publicity about DeepSeek, Stargate and AI agents, the fact that two leading AI scientists once again warned against the misuse of AI, with potentially disastrous consequences for the world, was snowed under. Professor Geoffrey Hinton, a leading figure in AI and winner of the 2024 Nobel Prize in Physics, discussed the risks of rapid AI developments in a fascinating conversation with his former student Curt Jaimungal.

Hinton has frequently warned that AI could evolve and gain the motivation to make more of itself and autonomously develop a sub-goal to take control of the world without regard to humans.

The German Jan Leike, who led alignment research at OpenAI and left disappointed, now puts it this way: "Don't try to imprison a monster, build something you can actually trust!" I previously wrote extensively about Leike and Hinton's warnings in this blog post.

Categories
AI technology

$10 billion for OpenAI: $6.6 billion in VC funding and $4 billion from banks

The announcement of a major funding round in the tech world is often accompanied by a clichéd photo of a group of men in light blue shirts and the occasional rebel in a black t-shirt, trying to look tough into the camera. OpenAI, in announcing its new funding round of no less than $6.6 billion at a $157 billion valuation, posted an abstract image without a source; probably because there is no management left except CEO Sam Altman himself.

The text was also dry by Altman standards: "This funding allows us to strengthen our leadership in AI research, increase our computing power and build tools that help people solve complex problems."

"The dawn, blah blah blah chocolate custard"

That is a good deal drier than the Bouquet-series romance prose Altman recently posted on his own blog:

"We must act wisely, but with conviction. The dawn of the Intelligence Age is a profound development with very complex and extremely risky challenges. It will not be an entirely positive story, but the benefits are so enormous that we owe it to ourselves, and to the future, to figure out how to navigate the risks that lie ahead."

This almost seems like one from the oeuvre of the unsurpassed Kees van Kooten: "but enough about myself, what do you think of my hair?" Because here Altman is unabashedly trying to attach half of his own company name to a term that is supposed to mark the period after our current post-industrial era. It is, of course, rather the dawn of the TikTok era, but TikTok is not as good at PR.

OpenAI now bronze, after SpaceX and ByteDance (TikTok)

The funding round increases the pressure on Altman to live up to this high price tag, normally via an IPO. OpenAI is now worth far more than any other venture-backed company ever before at the time of their IPO, including Meta, Uber, Rivian, and Coinbase, according to a PitchBook analysis.

Funding for OpenAI only just fits the chart. Source: Pitchbook.

OpenAI is now the third most valuable venture-backed company in the world. Only Elon Musk's SpaceX, valued at $180 billion, and ByteDance, the owner of TikTok valued at $220 billion, are worth more.

Is there any competition for OpenAI?

To put it in Dutch perspective: OpenAI is worth five times as much as Philips, as much as Unilever and still only 20% less than Shell. OpenAI's competitors must look at the new funding with horror. xAI, founded by Musk, raised more than $6 billion earlier this year, but with a valuation of "only" $24 billion. OpenAI's nearest competitor, Anthropic, was valued at $19.35 billion as recently as January.

A few days ago there was brief enthusiasm in the market when Nvidia, of all parties the one that stands to earn the most from OpenAI's new funding because it supplies all the hardware under the hood of ChatGPT, introduced its own Large Language Model, called NVLM, that seemed open source. Unfortunately, the technology may not be used in commercial applications, leaving Google with Gemini as the only real remaining competitor to OpenAI. Unless Anthropic finds funders who dare to throw billions at it.

$4 billion credit facility for OpenAI

Early investors in OpenAI are obviously in jubilant spirits, including the legendary Vinod Khosla, who in this interview with Bloomberg almost even seemed to be caught smiling. Khosla: "It is the most important tool we have ever had in human history to create abundance and realize a fairer, more equitable, prosperous society."

Khosla also reported that he has used ChatGPT for more mundane tasks - from quickly learning complex material to designing his garden. The latter is especially impressive, as Khosla's garden has a mile-long beach that has been the subject of lawsuits for years.

On Thursday, it emerged that in addition to its $6.6 billion investment round, OpenAI also managed to secure a $4 billion credit facility from a number of banks. With over $10 billion, the company can carry on for a while, although the question remains how long that will last for a company that loses millions every day.

Mozart of mathematics

Anyone who throws around bombastic claims about AI being the savior of humanity invites criticism. It ranges from substantive criticism of the quality of the technology developed by OpenAI to criticism of the way Altman runs the company.

More and more is leaking out about how OpenAI is rushing, especially under pressure from competition from Google, to release products that are far from ready and insufficiently tested. Even before Mira Murati's unexpected departure from OpenAI, staff complained that the o1 model was released too early.

The Atlantic advocates putting aside Altman's turgid rhetoric and looking primarily at what OpenAI's products are currently capable of:

"Altman insists that the deep-learning technology that powers ChatGPT can in principle solve any problem, at any scale, as long as it has sufficient energy, computing power and data. However, many computer scientists are skeptical of this claim and argue that several more major scientific breakthroughs are needed before we reach artificial general intelligence."

Terence Tao, a mathematics professor at UCLA, is "a real-life superintelligence, the Mozart of Mathematics," according to The Atlantic. Tao has won numerous awards, including the equivalent of a Nobel Prize in mathematics, and analyzed the performance of the OpenAI-hyped o1.

Tao's conclusions are not positive for o1's math ability and even led Tao to apologize for comparing o1's performance to that of a doctoral student.

A broader analysis appeared in the book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How You Can Recognize the Difference." In it, the authors explain why organizations fall into the trap of AI snake oil (worthless solutions), why AI can't fix social media, why AI is not an existential threat, and why we should be much more concerned about what people will do with AI than about what AI would do on its own. I haven't read the book, but the reviews are positive.

Investors lining up for AI companies

Despite all the criticism, investors are eager to invest billions in AI companies.

OpenAI has raised almost as much funding as the competition combined. 

Reuters made this overview of AI funding in 2024. It especially underscores how well funded OpenAI is: thanks to the investment round and the credit facility from the banks, it has over $10 billion at its disposal.

Categories
AI

Are Jony Ive & Sam Altman developing an "iPhone killer with AI?"

How nice would it be if they called the device LoveFrom, so people would start asking each other: 'how do you like the new LoveFrom?'

The Wall Street Journal reports that Apple has dropped out at the eleventh hour and will not participate in the investment round in OpenAI, which is expected to close next week. It would have been a rare investment in an unlisted technology company for Apple.

The question is whether not participating in the investment round will affect ChatGPT's possible integration into the new version of the iPhone operating system, iOS 18. Despite all the wonderful promo videos, it is still unclear what Apple means by its own interpretation of AI under the name "Apple Intelligence." Seeing first, then believing remains the credo in technology.

Perhaps Apple's enthusiasm for participation in OpenAI has been dampened by persistent reports that top designer Jony Ive, a former confidant of Steve Jobs and now co-funded by his widow Laurene Powell Jobs, is working with his new company LoveFrom and OpenAI CEO Sam Altman on a veritable iPhone killer: a mobile device with fully integrated AI features from, surprise, OpenAI.

An "iPhone for AI"

A major article in the New York Times reports in detail on the creation of the new mobile device, unfortunately without clarifying what kind of device it is:

"Mr. Ive and Mr. Altman met several times for dinner before they decided to develop a product, with LoveFrom leading the design. They raised money privately, with contributions from Mr. Ive and Emerson Collective, Ms. Powell Jobs' company, and could raise up to $1 billion in seed capital from tech investors by the end of the year.

In February, Mr. Ive found office space for the company. They spent $60 million on a 32,000-square-foot building called the Little Fox Theater, adjacent to LoveFrom's courtyard. He hired about 10 employees, including Tang Tan, who oversaw iPhone product development, and Evans Hankey, who succeeded Mr. Ive as chief designer at Apple.

On a Friday morning in late June, Mr. Tan and Ms. Hankey could be seen moving chairs between the Little Fox Theater and the nearby LoveFrom studio. The chairs were laden with papers and cardboard boxes containing initial ideas for a product that uses A.I. to create a computing experience less socially disruptive than the iPhone.

The project is being developed in secret. Mr. Newson (fellow designer Marc Newson, MF) said the product and release time have yet to be determined."

$60 million in office space for a startup

So nice, when you can start a startup by spending $60 million just on office space. Apparently that left no budget for the website of Ive's company LoveFrom. All that's there is an animated bear, the mascot of the state of California, casually strolling across the company name.

Wired published a nice piece yesterday in which it almost thinks out loud, posing questions to the reader, trying to figure out what kind of mobile device Ive and Altman are working on. After all, no one has an image of the so-called "iPhone for AI." But to lure investors, an Altman specialty, that tagline will work wonders.

Categories
AI invest

OpenAI worth $150 billion with potentially a $15 billion loss?

She was the face of OpenAI, but CTO Mira Murati left without giving CEO Sam Altman advance notice. Source photo: OpenAI
It's only a matter of time before a movie comes out about OpenAI, hopefully as good as The Social Network was about the founders of Facebook. Perhaps the entire film could be generated with AI using OpenAI's own products. Because that's what remains special about OpenAI: although only three of its eleven founders are left after years of public brawling, it continues to develop extraordinary products. It is as if a car keeps winning Formula 1 races while, at every pit stop, most team members try to rip off their own driver's helmet, remove his steering wheel and puncture his tires.

Superman becomes Scrooge McDuck

Once upon a time, OpenAI was founded as a foundation with a noble goal: to advance humanity through artificial intelligence. Nothing is left of those altruistic values as it turns into a heavily funded, shareholder-value-driven commercial enterprise. The transition of OpenAI into a for-profit benefit corporation will reportedly give CEO Sam Altman shares in the company for the first time, worth several billion dollars. It's like Superman transforming into Scrooge McDuck.

The tensions surrounding this transition apparently caused prominent executives such as Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew and VP of Post Training Barret Zoph to resign this week, raising questions about the stability of the company. Previously, I wrote about the extraordinary career of Mira Murati, originally from Albania.

OpenAI's leadership a year ago on the cover of Wired. Only Altman, bottom right, is still at the company.

Still, investors are eager to pump billions into OpenAI. The huge new round of investment, led by Thrive Capital, values the company at $150 billion. That's fifty percent more than Facebook was worth at its IPO, when it was already making a billion in profit. (I remember experienced investors decrying such a valuation; they must now grit their teeth at the fact that Meta has since become worth fifteen times as much, but let's put that aside.) Profit is a concept they will see at OpenAI in the coming years only when they enter it as a prompt in ChatGPT, not in their accounting.

Thrive Capital is investing more than $1 billion in OpenAI's current $6.5 billion investment round and has an added benefit that other investors don't get: the ability to invest another $1 billion next year at the same valuation if the AI company meets a certain revenue target. That's a rare funding condition.

OpenAI will do $12 billion in revenue at a $15 billion loss?

Anonymous sources told Reuters that OpenAI revenue will rise to $11.6 billion next year, compared with an estimated $3.7 billion revenue in 2024. Losses could reach as much as $5 billion this year, depending on spending on computing power. If the operating margin does not improve very quickly, this means that, at projected 2025 revenue, OpenAI will lose over $15 billion next year.

I can't resist: $15 billion loss per year is $41 million per day, $1.7 million per hour and $476 per second. Loss.
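
For anyone who wants to check that back-of-the-envelope math, a short Python sketch (purely illustrative):

    annual_loss = 15e9              # projected loss per year, in dollars
    per_day = annual_loss / 365     # ≈ $41 million
    per_hour = per_day / 24         # ≈ $1.7 million
    per_second = per_hour / 3600    # ≈ $476
    print(round(per_day), round(per_hour), round(per_second))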

With the new $6.5 billion in its pocket, that carries OpenAI only about six months, although its coffers will not be completely empty at this point. Shall we call it "remarkable" that a company can apparently be so promising that its market capitalization is not ten times its profits, but ten times its losses?

Categories
AI invest technology

Nvidia has passed Apple, so what will Tim Cook do tomorrow?

So much happened in the tech world last week that I briefly discuss ten news items that stood out to me the most.

If Nvidia maintains the revenue and profit growth of recent quarters, and it looks like it will, it will be the world's most valuable company before the end of the year. 

1. Nvidia worth more than Apple

The day you knew was coming arrived on Wednesday: Nvidia passed Apple in stock market value and became the second most valuable company in the world, after Microsoft. There are legitimate reasons why Apple's sales are stagnant, with limited access to the Chinese market in particular preventing Apple from realizing its full market potential.

But there is more behind Nvidia's impressive run. While Nvidia had been investing heavily in AI technology for over a decade, with all the risks of such a relatively one-sided strategy, Apple waited no less than nine years after the Apple Watch in 2015 (itself five years after the iPad in 2010) before introducing a new product category in 2024 with the Apple Vision Pro.

Meanwhile, Apple did buy back hundreds of billions of dollars of its own shares. Investors were happy about it, but buying back your own shares remains a sign of weakness. Apple could have bought all sorts of useful companies, but Beats, the maker of flashy headphones, was the largest acquisition in Apple's history ten(!) years ago, at a cost of three billion dollars. That seems like a lot, but put it in perspective: Apple makes that amount in net profit every two weeks.

Apple could have purchased content (like Disney, and then divested the channels like ESPN), content aggregators (Netflix, Spotify), a completely new product category (Tesla) or valuable sports rights (World Cup, NFL, Olympics, Premier League). But none of that. No, to satisfy shareholders Apple kept doing huge stock buybacks.

Beats only fun for Dr. Dre

Meanwhile, it hobbled along behind Spotify with Apple Music, and those ostentatious headphones from Beats by Dr. Dre pleased mostly Mr. Dre himself - and according to rumors, he's not even a real doctor. More than half of Apple's profits come from products, particularly the iPhone, that are more than a decade old and under pressure from cheaper competitors.

Apple, at its core, sells too few products to still grow sales on its own, although it still managed to increase its profit margin by cleverly optimizing its sourcing, such as replacing Intel as a chip supplier with Apple's own excellent Apple Silicon chips.

Nasdaq Composite beat Apple

Investors are punishing mediocre growth due to Apple's lack of innovation and are sprinting toward Nvidia. NVDA shares are up more than 150% in 2024 (AAPL: 6%), 214% in the past year (AAPL: 9%) and over 3,200% in the past five years (AAPL: 314%).

By comparison, during those same periods the Nasdaq rose 14%, 29% and 126%, respectively. It was unimaginable a few years ago: the Nasdaq Composite rose more than three times as much as Apple last year.

For those looking for more background on Nvidia's growth, I previously wrote this piece. Why the Apple Vision Pro is technically fabulous but from a business perspective merely a drop in the bucket for Apple is described here.

TikTok bypasses U.S. export restriction

Nvidia is so unique and crucial that all the other big tech companies are lining up, cap in hand, to be allowed to buy its chips. From Microsoft to Google, Meta and Amazon: without Nvidia hardware, they can't develop AI applications, especially processor-guzzling Large Language Models (LLMs) like ChatGPT, Google Gemini or applications on Amazon Bedrock.

ByteDance, the parent company of TikTok, also needs Nvidia to develop AI and has cheekily circumvented U.S. export restrictions: it rents cloud capacity from U.S. cloud services, including those of Oracle. Officially, none of these developments seep into China, but for those who believe that, I also have a nice used car for sale from a half-blind widow, barely used.

2. Tim Cook's AI moment

Tomorrow morning, 10 a.m. California time, Tim Cook will take the stage at Apple Park in Cupertino at a pivotal moment in his career. Cook has been through a lot in his more than 12 years at the helm of Apple, but never this. He must convince the world that Apple has an AI strategy.

It has already been leaked that Apple will not launch a single AI app, but will apply AI across the breadth of its product spectrum. With one crucial difference here, compared to Microsoft: everything at Apple is opt-in, so users have the choice to turn AI applications on or off.

In contrast to the fiasco at Microsoft this week, where the Recall feature searched unsolicited through a user's activities, including files, photos, emails and browsing history, and took screenshots of the user's computer every few seconds to search through as well. Well, that's not creepy at all.

3. Elon Musk sent Tesla's Nvidia chips to X and xAI 

"Elon prioritizing X H100 GPU cluster deployment at X versus Tesla by redirecting 12k of shipped H100 GPUs originally slated for Tesla to X instead,” an Nvidia memo from December said. “In exchange, original X orders of 12k H100 slated for Jan and June to be redirected to Tesla.” according to a leaked Nvidia memo from December.
 

By directing Nvidia to prioritize X (also known as Xitter, because formerly Twitter) over Tesla, Musk ensured that the automaker would not receive more than five hundred million dollars' worth of Nvidia GPUs until months later. This likely caused additional delays in setting up the supercomputers Tesla says it needs to develop autonomous vehicles and robots.

A more recent email from Nvidia, from late April, said that Musk's comment at Tesla's first quarterly meeting "conflicts with bookings" and that his April post on X about ten billion dollars in AI spending also "conflicts with bookings and FY 2025 forecasts."

There is growing criticism of the many hats Musk wears: he is, after all, also CEO of aerospace company SpaceX, founder of brain-computer interface startup Neuralink and of tunneling company The Boring Co. He additionally owns X, which he acquired in late 2022 for forty-four billion dollars, and AI startup xAI. Now Musk is even in danger of losing a handsome bonus of fifty-six billion dollars.

The nice thing about Musk is that he often responds to critical reports on X, including now. His response is that Tesla had no capacity to do anything with those much-needed Nvidia H100 chips and they would have been stored in a warehouse. Hence the change of receiving address for this multi-million dollar order. Musk also says Tesla will install fifty thousand H100s at the Tesla Giga Factory in Texas to develop fully self-driving cars (FSD).

Nvidia Blackwell: no discounts

Just a quick calculation: an H100 reportedly goes out of the store for at least thirty thousand dollars, so Tesla alone buys one and a half billion dollars worth of goodies from Nvidia. Then consider that the new Nvidia chip, the Blackwell, has a higher base price and is quickly heading toward seventy thousand dollars, and it is clear that it is a matter of months, not years, before Nvidia also overtakes Microsoft in market value and becomes the world's most valuable company.
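
The same quick calculation as a Python one-liner, with the prices above treated as assumptions:

    h100_price = 30_000               # assumed price per H100, in dollars
    tesla_order = 50_000              # H100s Tesla says it will install in Texas
    print(h100_price * tesla_order)   # 1,500,000,000 -> $1.5 billion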

4. Wall Street Journal's Walt Mossberg on Jobs, Gates and Bezos

No one had the network of Walt Mossberg, the legendary tech journalist who built deep relationships with the founders of the world's biggest technology companies, including Steve Jobs, Bill Gates and Jeff Bezos.

In this podcast, the now-retired Mossberg talks about how Steve Jobs dealt with moments like Tim Cook is experiencing tomorrow, what Jobs focused on (everything was about the consumer) and how much Jobs cared about the stock market (not much, at least that's how Jobs made it look).

5. Majority of companies halt acquisitions because of ESG concerns

Sustainability considerations are becoming increasingly central to the M&A process, with more than seventy percent of M&A leaders saying they have abandoned potential acquisitions because of ESG concerns. An overwhelming majority say they are willing to pay more for targets with strong ESG characteristics, according to a new survey by professional services firm Deloitte.

The question is how Environment, Social and Governance is measured. Unlike traditional accounting, there are hardly any measurable criteria for ESG. Therefore, I hereby tell you: this newsletter is hugely social and is written by an almost elderly man with a dark complexion. A newsletter cannot be much more ESG.

6. OpenAI CEO Altman's weekly scandal

Sam Altman's opaque personal investment empire makes him rich and raises questions about conflicts of interest. Although Altman owns no shares in OpenAI and earns only a modest income there, out of the goodness of his heart, he meanwhile appears to be giving all kinds of companies in which he is a private shareholder good deals with OpenAI. Especially good for his own investment portfolio.

7. OpenAI with another weekly scandal

"I’m scared. I’d be crazy not to be." So says a former OpenAI employee to Vox about the open letter from a group of AI experts from OpenAI , Google DeepMind and Anthropic
"
warning against the potentially humanity-threatening consequences of large-scale AI use.

Vox rightly states, "It can be tempting to see the new proposal as just another open letter from "doomsayers" who want a break from AI because they fear it will get out of control and wipe out all of humanity. That's not all this is. The signatories share the concerns of both the "AI ethics" camp, which is more concerned about current AI harms such as racial prejudice and disinformation, and the "AI security" camp, which is more concerned about AI as a future existential threat. These camps are sometimes played off against each other. The goal of the new proposal is to change the incentives of leading AI companies by making their operations more transparent to outsiders - and that would benefit everyone."

At the same time, we should be aware that a large group of AI experts believe that the current generation of LLMs will not lead at all to the dreaded introduction of "Artificial General Intelligence"(AGI), the AI form that will be able to perform all human functions better than us and could replace us. Investor Benedict Evans wrote an excellent piece on this last month.

8. The AI elections instead of the U.S. elections?

Until AGI makes us humans obsolete, we had better worry about how AI affects democracy. Regulators can't decide whose problem it is. A federal power struggle in the U.S. and inaction by the U.S. Congress could leave voters largely unprotected prior to the 2024 election.

The chairman of the Federal Communications Commission (FCC) last month announced a plan to require politicians to disclose AI use in TV and radio ads. But the proposal is receiving unexpected opposition from a top Federal Election Commission (FEC) official, who is himself considering new rules on AI use by campaigns. But when?

The dispute - along with inaction at the FEC and Congress - would leave voters unprotected from those using AI to mislead the public or hide their political messages during the final phase of the campaign for the U.S. presidency. 

9. BBC: audio deepfakes are worse than video deepfakes 

The BBC believes that audio deepfakes are worse than video deepfakes because they are harder to spot and few people realize they are listening to a bot. This article did lead X to delete a number of accounts on which fake messages were shared.

Finfluencer of the century: Keith Gill aka Roaring Kitty

10. GameStop shares fall despite Roaring Kitty

It remains highly recommended: the movie Dumb Money about how YouTuber and Reddit user Keith Gill, better known as Roaring Kitty, propelled GameStop stock up and turned a few billionaires back into millionaires.

After disappearing from the face of the earth for a few years, Gill made his comeback on YouTube this week in front of over two million viewers. For GameStop stock, Gill's return was to no avail, but it is still extraordinary to see a grown man in sunglasses and a headband profess his love for a dying retail chain while making hundreds of millions in the process.

"Blue eyes. Finance. Trust fund." Singfluencer Megan Boni.

In conclusion: in nineteen seconds to world fame

27-year-old Megan Boni asked on TikTok for remixes of her nineteen-second video that went: "I'm looking for a man in finance. Trust fund. 6'5" (1.96 m). Blue eyes. Finance. Trust fund."

Forty million views and a David Guetta remix later, she was offered a record deal by Universal and has been invited to perform in Ibiza. The impact of going viral on TikTok is unprecedented.

See you next week!

Categories
AI crypto technology

Harari: For the first time, no one knows what the world will look like in 20 years

Yuval Noah Harari was a guest on Stephen Colbert's late-night talk show, leading to an unexpectedly relevant conversation.

Harari: "I’m a historian. But I understand history not as the study of the past. Rather it is the study of change, of how things change, which makes it relevant to the present and future.”

Colbert: "Is it real that we are going through some sort of accelerating change?"

Harari: "Every generation thinks like that. But this time it’s real. It is the first time in history that no one has any idea what the world will look like in twenty years. The one thing to know about AI, the most important thing to know about AI, it is the first technology in history that can make decisions by itself and create new ideas by itself. People compare it to the printing press and to the atom bomb. But no, it is completely different."

Technology that makes its own decisions

Perhaps my fascination with the work of Harari, best known as the author of Sapiens, stems from the fact that I am a historian myself (history of communication), but have found that study to be most useful in assessing technological innovations. Harari confirms the idea that many of us have, that current technology involves a completely different, more pervasive and comprehensive innovation than anything the world has seen to date.

With his conclusion that AI is an entirely new technology, precisely because perhaps as early as the next generation AI will be able to make decisions on its own, Harari identifies the core challenge, and he does so in the very week that Amy Webb presented the new edition of the leading Tech Trends Report, themed "Supercycle." (The report is available here and this is the video of Webb's presentation at SXSW.)

Supercycle

Webb: ""

- Amy Webb, CEO Future Today Institute

Webb, like Harari, believes that technology will affect all of our lives more strongly than ever.

The face OpenAI CTO Mira Murati made after the simple question, "Have you used YouTube videos to train the system?"

OpenAI CTO said 'dunno'

If Harari and Webb are right, it is all the more shocking what Mira Murati, the acclaimed Chief Technology Officer of OpenAI, the maker of ChatGPT among other products, blurted out during an interview with the Wall Street Journal. The question was simply whether OpenAI had used footage from YouTube in training Sora, OpenAI's new text-to-video service.

OpenAI is already under pressure on this issue, because the New York Times has filed a lawsuit over the alleged illegal use of its content in training ChatGPT. So getting this question wrong could provoke a new lawsuit from the owner of YouTube, which is Google, OpenAI's biggest competitor, of all companies.

Murati obviously should have expected this question and could have given a much better answer than the twisted face she pulled, combined with regurgitating some lame lines that can be summed up as "don't call me, I'll call you." It is a sorry display, coming just after OpenAI had already been through a true palace drama surrounding CEO Sam Altman.

These people are developing technology that can make its own decisions and are undoubtedly technically and intellectually of an extraordinary level, but as human beings they lack the life experience and judgment to realize what impact their technology can have on society.

Your car works for your insurance company?

It is downright miraculous that Zuckerberg can still sleep after the Cambridge Analytica scandal, which is a consequence of peddling our privacy for financial gain. It is now not just the big tech companies that are guilty of this revenue model, even car manufacturers have joined the guild of privacy-devouring crooks.

LexisNexis, which builds consumer profiles for insurers, turns out to have received data from General Motors on every trip its customers took, including when they drove too fast, braked too hard or accelerated too quickly. The result: higher insurance premiums. As if you needed another reason never to buy a car from this manufacturer of unimaginative, identity-less vehicles.

Google Gemini does not do elections

Partly because of stock price pressures, tech companies are forced to release moderately tested applications as quickly as possible. Think of Google with Gemini, which wanted to be so politically correct that it even depicted Nazis of Asian descent. Sweetly intended to be inclusive, but totally pointless.

This fiasco caused such a stir that Google announced on Tuesday that Gemini will not provide information about the elections taking place around the world this year. Indeed, even to the innocent question "What countries are holding elections this year?" Gemini now replies, "I'm still learning how to answer this question." I beg your parrrrdon?

Google Gemini does know all about Super Mario

Use Google's search engine instead and you land directly on a Time article that begins with the sentence: '2024 is not just any election year. It may be *the* election year.' According to ChatGPT, elections will take place this year in the US, Taiwan, Russia, the European Union, India and South Africa, among others; in total, 49% of the world's population will be able to go to the polls this year.

So for meaningful information about the future of the planet, Google Gemini is not the place to be. Fortunately, I do get a delightfully politically correct answer to my question: 'Did Princess Peach really need to be rescued by a white man? Wasn't Super Mario just being a male chauvinist?' Reading the answer, I get the feeling that Google Gemini has been fed a totally absurd worldview by well-intentioned people. The correct answer would have been: 'Super Mario is a computer game. It's not real. Go worry about something else, you idiot.'

Anti-monarchists claim that this photo has been doctored. I deny everything.

Speaking of princesses, there is one who claims that, like us mere mortals, she sometimes edits photos herself. At least, so says the X account on behalf of Princess Catherine and Prince William. The whole fiasco not only draws attention to the issues surrounding the authenticity of photos, but also demonstrates the need for digital authentication when sending digital messages. It would be helpful if it were conclusively established that the princess herself sent the message that was signed with the letter C.

Where do we go from here?

Globally, people are wrestling with how to deal with and potentially regulate the latest generation of technology, which is also a source of geopolitical tension. See how China is reacting to the news that ASML is considering moving out of the Netherlands.

The possible ban on TikTok, or a forced sale of the U.S. branch of TikTok by owner ByteDance, will not happen as quickly as last week's news coverage might suggest. By the way, it is interesting what happened in India when TikTok was banned there in 2020: TikTok's 200 million Indian users mostly moved on to Instagram and YouTube.

India announced this week that a proposed rule requiring government approval before launching AI models will be withdrawn. Critics said the rule would slow innovation and hurt India's competitiveness; the economic argument almost always wins.

The European Union is beating its chest because its AI Act has been approved, but it will be years before the law takes full effect, and it is unclear how it will protect consumers and businesses from abuse. Shelley McKinley, chief legal officer of GitHub, part of Microsoft, compared the U.S. and European approaches as follows:

"I would say the EU AI Act is a ‘fundamental rights base,’ as you would expect in Europe,” McKinley said. “And the U.S. side is very cybersecurity, deep-fakes — that kind of lens. But in many ways, they come together to focus on what are risky scenarios — and I think taking a risk-based approach is something that we are in favour of — it’s the right way to think about it.”

Aviation as an example

Lawmakers tend to create a new regulator in response to an incident; think of the U.S. Department of Homeland Security after 9/11. The EU is now doing the same with the new European AI Office, for which qualified staff are being recruited.

It shows a far too narrow view of digital reality. As the aforementioned Tech Trends Report rightly shows, it is not just about AI: the "tech supercycle" is created by the almost simultaneous breakthrough of several technologies: in addition to AI, think of bioengineering (submissions for a good Dutch translation are most welcome!), Web3, the metaverse and robotics, to name just a few.

It would therefore be better to set up a digital technology regulator similar to the European Medicines Agency EMA or the U.S. aviation authority FAA. Not that things are flawless at the FAA right now, far from it, but the FAA has spent decades ensuring that aviation is the safest form of transportation.

It is precisely the relaxation of that oversight, combined with the greed of Boeing's management, that has created dire situations such as Boeing employees saying they would never want to fly on the 787 themselves. That is exactly the situation to avoid in digital technology, where many former employees are already coming forward about abuses and mismanagement with major social consequences.

Spotlight 9: Bad week for AI, but what will next week bring?

It was a week of correction for AI stocks, but what happens when Nvidia announces its latest AI chip on Monday...

It was a week of hefty corrections after an extremely enthusiastic start to the year for tech stocks and crypto. Bitcoin lost 5% and Ethereum as much as 10%. My completely made-up AI Spotlight 9, the nine stocks I think will benefit most from developments in AI, also took hefty knocks.

On crypto, I like to quote Yuval Noah Harari again, this time on the Daily Show: "Money is the greatest story ever told. It is the only story everybody believes. When you look at it, it has no value in itself. The value comes only from the stories we tell about it, as every cryptocurrency-guru or Bitcoin-enthusiast knows. It is all about the story. There is nothing else. It is just the story."

Media critic Jeff Jarvis believes nothing of the doom-and-gloom talk about rapidly advancing technology and even scolded people like investor Peter Thiel and entrepreneurs Elon Musk and Sam Altman. It was striking to encounter Jarvis in one of my favorite sports podcasts. Jarvis apparently does not realize that just his appearance on this sports show to talk about AI underscores the impact of technology on everyday life. He is not invited to talk about the role of parchment, troubadours or the pony express.

Million, billion, trillion

Where startups once began in someone's garage, AI in particular has become a playing field for billionaires. The normally media-shy top investor Vinod Khosla (Sun, Juniper, Square, Instacart, Stripe, etc.) publicly opened fire on Elon Musk after Musk filed a lawsuit against OpenAI, not entirely coincidentally a Khosla investment.

OpenAI top man Sam Altman appears to still be in talks for his $7 trillion chip project with Abu Dhabi's new $100 billion sovereign wealth fund MGX, which is trying to become a frontrunner in AI with a giant leap. Apparently, Altman has also been talking with Temasek, a leading sovereign wealth fund of Singapore. These talks involve tens of billions.

From Harari's perspective, let's look at Nvidia's story. The company is offering developers a preview of its new AI chip this week. How long can Nvidia and CEO Jensen Huang wear the crown as the dominant supplier of AI chips? Tomorrow, Huang walks onto the stage of a hockey arena in Silicon Valley to unveil his latest products. His presentation will have a big impact on the prices of my AI Spotlight 9 stocks in the coming weeks and perhaps even months.

The shelf life of a giant

Payment processor Stripe, also a Khosla investment, reported in its annual letter that the average time a company spends in the S&P 500 index has shrunk sharply in recent decades: from 61 years in 1958 to 18 years now. Companies that cannot compete in the digital world are struggling, and with the huge sums currently being invested in technology, that trend will only accelerate.

In conclusion

In that context, it is particularly fun and interesting to see that in Cleveland good old mushrooms are eating up entire houses and cleaning up pollution, even PFAS. Perhaps not an example of Amy Webb's bioengineering, rather bio-remediation, but certainly a hopeful example of how smart people are able to solve complex problems in concert with nature.

Have a great Sunday, see you next week!

Categories
AI technology

OpenAI CEO Sam Altman is looking for $7 trillion; that's $7,000 billion

You surely remember the scene from the movie The Social Network in which Justin Timberlake, in his role as Sean Parker, says to Mark Zuckerberg: "A million dollars isn't cool. You know what's cool? A billion dollars." Ah, what simple, innocent times those were, looking back now. Sam Altman, the CEO of OpenAI who was kicked out of his own company just a few months ago and has only been back in office for a few weeks, laughs at millions and billions: he is looking for seven trillion dollars. Or: seven thousand billion dollars. For an idea, not even for an existing company yet. What's going on?

The face of investors as soon as they hear the amount Sam Altman wants.
 

The Wall Street Journal reported on Thursday that Sam Altman, the CEO of OpenAI, is seeking five to seven trillion dollars to build a global network of chip factories. It was already rumored last year that Altman wanted to set up a chip factory to compete with Nvidia under the code name Tigris, but at the time no one suspected that trillions were involved. Written out, the now-leaked seven trillion is 7,000,000,000,000: a seven followed by 12 zeros.

Wait, how much?

To put this in perspective, in 1995 the Internet hype started with Netscape's IPO, much to the dismay of the traditional investment market because the browser maker was not yet making a profit, even though it had millions of users of the popular browser Navigator. On opening day, Netscape raised $82.5 million with the stock sale.

So Altman wants to raise roughly eighty-five thousand times as much money from private investors for his idea, for that is apparently all it is so far, as Netscape fetched on the Nasdaq. Times are changing.

To make another attempt to convey how much money is involved: with $7 trillion, Altman wants to raise more than a third of the GDP of the entire European Union, the second largest economy in the world. The gross product of this planet, by the way, is $88 trillion; Altman would like 8% of that, so he can make a nice clean start.
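For anyone who wants to check these back-of-the-envelope numbers, here is a minimal sketch in Python. It uses the round figures quoted above; the EU GDP of roughly $18 trillion is my own assumption, the other inputs come from this article.

```python
# Back-of-the-envelope comparisons for the reported $7 trillion ask.
# Figures are the round numbers quoted in this article, plus an assumed
# EU GDP of roughly $18 trillion; illustrations, not official statistics.

altman_ask = 7_000_000_000_000     # $7 trillion: a 7 followed by 12 zeros
netscape_ipo = 82_500_000          # Netscape raised $82.5 million at its 1995 IPO
eu_gdp = 18_000_000_000_000        # assumed EU GDP, order of magnitude
world_gdp = 88_000_000_000_000     # "GNP of this planet" as quoted above

print(f"vs. Netscape IPO:   {altman_ask / netscape_ipo:,.0f}x")   # ~85,000 times
print(f"share of EU GDP:    {altman_ask / eu_gdp:.0%}")           # more than a third
print(f"share of world GDP: {altman_ask / world_gdp:.0%}")        # about 8%
```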

According to the Wall Street Journal, Altman is in talks with a sovereign wealth fund of the United Arab Emirates, among others, and I suspect it is not Dubai, which is better at marketing than at making money, but wealthy, oil-producing Abu Dhabi. Primarily through its sovereign wealth fund Mubadala, Abu Dhabi is looking for new sources of revenue as the oil wells appear to be slowly but surely being shut down to preserve a chance of a livable planet.

It is plausible that Altman will also swing by the Emirates' friendly big neighbor, Saudi Arabia, which is gaining clout with its sovereign wealth fund PIF (note the windmills at the top of the page; apparently Saudi Arabia is famous for those).

What do you spend 7 trillion on?

Altman seeks to address a critical bottleneck to OpenAI's growth: the scarcity of the advanced graphics processors (GPUs) essential for training advanced AI models, such as its extremely popular ChatGPT. Despite the success of OpenAI and competitors such as Google Gemini and Anthropic, all of these billion-dollar companies are standing hat in hand at the door of chipmaker Nvidia, whose lead as maker of the best GPUs seems insurmountable. But there is one problem: Nvidia cannot keep up with demand. And Altman does not want to depend on a single supplier.

One of my New Year's resolutions was to judge people less in 2024, but people who are too cool to use capital letters don't make it easy for me

Altman announced on Twitter, a day before publication of the Wall Street Journal article:

"We believe the world needs more AI infrastructure - manufacturing capacity for fabs, energy, data centers, etc. - than people currently plan to build. Building AI infrastructure on a massive scale, and a resilient supply chain, is critical to economic competitiveness. OpenAI will try to help!" 

- Sam Altman

Solid plan or pipe dream?

His ambitious plan involves setting up a network of several dozen chip factories ("fabs") that would ensure a steady supply of the crucial chips not only for OpenAI but also for other customers worldwide. The plan involves cooperation between OpenAI, investors, chip manufacturers including market leader TSMC, data centers and power producers. Because without their own power plants, chip factories cannot operate on this scale.

What is striking about Altman's tweet is his specific mention of data centers. That means he not only plans to reduce his dependence on Nvidia, but also wants to shed his reliance on cloud platforms such as the ones Microsoft now runs for OpenAI and Google runs for Anthropic. Microsoft owns 49% of OpenAI's shares and was instrumental in allowing Altman to return to OpenAI after the palace revolution in November, so that will be an interesting relationship to follow.

If this initiative becomes a reality, it would mean that the AI industry and many other computing power-guzzling industries could realize their ambitions. But regardless of the money, it will result in a complex ownership structure where it is still unclear who will control and own the intellectual property, aside from all the chip factories, data centers and power plants.

Sustainability and geopolitics major challenges

Sam Altman's plan to radically scale up superchip manufacturing has significant sustainability implications. The environmental footprint of chip factories is significant; they are energy-intensive facilities that also require large amounts of water and produce harmful waste.

The unprecedented scale of Altman's idea would put enormous pressure on natural resources and energy grids. The environmental impact is compounded by the need for new power plants, which will increase CO2 emissions unless renewable energy sources are used exclusively. With financiers from the Middle East, that does not seem a likely priority.

Just last week, the Biden administration proudly announced a new initiative in which the U.S. is investing $5 billion in a public-private partnership aimed at supporting research and development in advanced computer chips. This initiative was completely drowned out by the WSJ article on Altman's plan.

President Biden's move underscores once again that the U.S. government recognizes the importance of high-performance chips, which is why Altman's plan could quickly fuel geopolitical tensions. If chip production is expanded within a U.S.-led framework, China will surely respond; it has been explicitly pursuing high-end chips of its own in recent years, with Huawei playing a major role.

Superchips are a matter of national security and long-term economic growth. China will not stand idly by in the face of a concentration of production of these chips by US allies, possibly leading to retaliatory measures in which US companies and their partners will find it even more difficult to access the Chinese market. Altman's project therefore already casts the shadow of an intense trade war between China on the one hand and the U.S. and its allies on the other.

Categories
AI technology

Five conclusions after the chaos at OpenAI

Sam Altman is back at OpenAI and more powerful than before, but is that a good thing?

A few days after the palace drama at OpenAI, let's try to survey the ruins at this company, whose mastermind, Chief Scientist Ilya Sutskever, has said that AI could herald the downfall of the world. It is far too often forgotten that Sutskever and his colleague Jan Leike, also no slouch, published this text on OpenAI's official blog in July:

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction."

Ilya Sutskever and Jan Leike, OpenAI

And oh yes, they have over $10 billion in the bank to figure out if they are going to save the world, or end it. Yet there are few serious attempts to seriously monitor, let alone regulate, OpenAI and its competitors.

Imagine Boeing developing a new plane with a similar PR text: 'this super-fast plane will fly on dirt-cheap organic pea soup and could help humanity make aviation accessible to all, but it could also backfire, crash and blow up the earth.' The chances of getting a license would not be very good. With AI, things are different: the tech bros just put the website live and see how it goes.

Move along, the turkey was great

What is the media reporting on now at OpenAI? About the Thanksgiving dinner where reinstated CEO Sam Altman sat down with Adam D'Angelo, one of the board members who had fired him just six days earlier. Both tweeted afterwards that they had a great time together. They are so cute.

Despite the media's tendency to quickly pick a hero and a villain when conflicts arise, the much-lauded Sam Altman is increasingly seen as someone who displays odd behavior from time to time. According to the Washington Post, Altman's dismissal had little to do with a disagreement over the safety of AI, as was first reported, and much more with his tendency to tell only part of the truth while lining his pockets left and right.

Even if they were Granny Smith green

Meanwhile, then, Altman is back with a new board about which there are many doubts. Christopher Manning, the director of Stanford's AI Lab, noted that none of the board members have knowledge of AI: "The newly formed OpenAI board is presumably still incomplete,” he told TechCrunch. “Nevertheless, the current board membership, lacking anyone with deep knowledge about responsible use of AI in human society and comprising only white males, is not a promising start for such an important and influential AI company."

I don't care what color or gender they are, even if they were Granny Smith green with three kinds of reproductive organs, but I do prefer that they understand the subject matter that their own experts say has the potential to push humanity over the cliff.

Five conclusions after the chaos

1. The AI war has been won by America.

Look at OpenAI after a week of craziness and fuss and you see that Microsoft, the old board and the new board are all American. The competitors? Amazon, Google, Meta, Anthropic, you name them: American. The rest of the world watches, holds meetings and gives speeches, but it is a done deal.

2. Good governance is nice, but bad governance is disastrous.

By this I do not mean that the people who fired Altman were right or wrong, because no one knows that yet; the crux of their argument was that Altman had not given full disclosure, and if that is true, it remains a mortal sin.

But the root of the problem was deeper. The OpenAI board had been appointed to safeguard the mission of the OpenAI Foundation, which, in a nutshell, was to develop AI to create a better world. Not to create maximum shareholder value, as has now become the goal. The problem arose because of those conflicting goals.

3. Twitter, or X as it is now called, remains the only relevant social network in a crisis.

Elon Musk went on a rampage again last week, which appears to have cost him $75 million in revenue, but Altman and everyone else involved still chose X as the platform to tell their story. Not Threads or TikTok, although I would have loved to see this power struggle portrayed in dance.

4. Microsoft wins.

Under Bill Gates, I already thought Microsoft was a funny name, because the company was neither micro nor soft then either, but in the nearly 10 years under CEO Satya Nadella, Microsoft has become a dominant force in all kinds of markets.

While Amazon, Google, Meta and also Apple are struggling to develop a coherent AI strategy, Microsoft seems to have found a winning formula: it is investing heavily in OpenAI, which uses the Microsoft Azure cloud, returning much of the investment back to Microsoft. Meanwhile, Microsoft does enjoy the capital appreciation via its 49% stake in OpenAI.

5. AI should be tested and probably regulated

Precisely because companies like Microsoft, Google, Meta and Amazon also dominate in the field of AI, the development of AI must be carefully monitored by governments. The years of privacy violations, disinformation and abuse of power taking place through social media, for example, show that these companies cannot regulate themselves.

The tech bros' motto remains unchanged: move fast and break things. But let them do that on their own planet, not this one. The potential impact of AI on the world is simply too great to let the often socially limited minds running tech companies make the choices for society.

An initiative like the AI Verify Foundation could be a vehicle for achieving responsible adoption of AI applications. I close with the same quote as last week from OpenAI's Chief Scientist Ilya Sutskever, which shows that the world's AI leaders almost seem to hope that future AI systems will have compassion for humanity:

"The upshot is, eventually, AI systems will become very, very, very capable and powerful. We will not be able to understand them. They’ll be much smarter than us. By that time, it is absolutely critical that the imprinting is very strong, so they feel toward us the way we feel toward our babies."