
OpenAI opens attack on Google, forgets security?

Geoffrey Hinton: "Chance of extinction-level threat: 50-50"
Image created with Midjourney.

"We knew the world would not be the same. A few people laughed, a few people cried, most people were silent."

I was reminded of this quote from J. Robert Oppenheimer, about the reactions to the first test of the atomic bomb, this week, when OpenAI stunned the world with the introduction of GPT-4o and, a few days later, the two top safety people at the company resigned.

The departure from OpenAI of Ilya Sutskever and Jan Leike, the company's two key figures in AI safety, raises the question of whether, a few decades from now, we will look back on this week and wonder how it was possible that, despite clear signs of the potential dangers of advanced AI, the world was more impressed by GPT-4o's ability to sing a lullaby.

GPT-4o is primarily an attack on Google

The most striking thing about GPT-4o is the way it can understand and generate combinations of text, audio and images. It responds to audio as quickly as a human, its performance on text in non-English languages has improved significantly, and the API now costs half as much to use.

The innovation lies mainly in the way people can interact with GPT-4o, without a major qualitative improvement in the results. The product is still half-finished, and although what the world watched on Monday was largely a demonstration not yet ready for large-scale use, the enormous potential was abundantly clear.

Shiny rims on a Leopard tank

It is not likely that the world will soon come to an end because GPT-4o can sing lullabies in a variety of languages; but what should worry any clear-thinking person is that OpenAI staged this introduction a day before Google I/O, to show the world that it is mounting a full frontal assault on Google.

Google is under a lot of pressure, for the first time in the search giant's existence. It has hardly any financial or emotional relationship with the bulk of its users, who can switch to OpenAI's GPTs with a few mouse clicks, just as quickly as users once abandoned AltaVista when Google proved to be many times better.

The danger posed by OpenAI's competition with Google is that all kinds of applications will be rushed to market, the consequences of which are not yet clear. With GPT-4o it is not too bad, but it looks more and more as if OpenAI is also making progress toward AGI, or artificial general intelligence: a form of AI that performs as well as or better than humans at most tasks. AGI doesn't exist yet, but creating it is part of OpenAI's mission.

The breakthrough of social media, in particular, showed how completely the world underestimated its impact on the mental state of young people and the destabilization of Western society through the widespread use of dangerous bots and click farms. Lullabies from GPT-4o may prove as irrelevant as decorative rims on a Leopard tank. For those who think I am exaggerating, I recommend watching The Social Dilemma on Netflix.

Google, meanwhile, has undergone a complete reorganization in response to the threat of OpenAI. Google's AI team is now led by Demis Hassabis, co-founder of DeepMind, which he sold to Google in 2014. It is up to Hassabis to lead Google to AGI.

This is how Google and OpenAI push each other to ... to what, really? If deepfakes of people who died a decade ago were already being used during elections in India, what can we expect around the U.S. presidential election?

Ilya Sutskever: the reason for the rift between Musk and Page

In November, I wrote at length about the warnings that Sutskever and Leike, the experts who have now quit OpenAI, have repeatedly voiced in the past. To give you an idea of how highly the absolute top of the technology world rates Ilya Sutskever: Elon Musk and Google co-founder Larry Page broke off their friendship over Sutskever.

On Lex Fridman's podcast, Musk recounted how he talked about AI safety at home with Larry Page, Google co-founder and then CEO: "Larry did not care about AI safety, or at least at the time he didn't. At one point he called me a speciesist for being pro-human. And I'm like, 'Well, what team are you on, Larry?'"

It worried Musk that at the time Google had already acquired DeepMind and "probably had two-thirds of all the AI researchers in the world. They basically had infinite money and computing power, and the guy in charge, Larry Page, didn't care about security."

When Fridman suggested that Musk and Page might become friends again, Musk replied: "I would like to be friends with Larry again. Really, the breaking of friendship was because of OpenAI, and specifically I think the key moment was the recruitment of Ilya Sutskever." Musk also called Sutskever "a good man: smart, good heart."

Jan Leike was candid on X.

"We are already much too late."

You read such descriptions more often about Sutskever, but rarely about Sam Altman. It is instructive to judge someone by their actions rather than their slick soundbites or cool tweets, and looking a little further into Altman's work, a very different picture emerges than with Sutskever. Worldcoin in particular, which calls on people to turn in their eyeballs for a few coins, is downright disturbing, but Altman is a firm believer in it.

I tried to learn more about the work of the German Jan Leike, who also left OpenAI and is less well known than Sutskever. Leike's Substack is highly recommended for those who want to look a little further than a press release or a tweet, as is his personal website with links to his publications.

Leike didn't mince words on X when he left, although there is a persistent rumor that employment contracts at OpenAI allow, or used to allow, the clawback of all an employee's OpenAI shares if they speak publicly about OpenAI after leaving. (Apparently after you die, you can do whatever you want.)

I have summarized Leike's tweets about his departure here for readability; the bold highlights are mine:

"Yesterday was my last day as head of alignment, superalignment lead and executive at OpenAI. Leaving this job is one of the hardest things I've ever done because we desperately need to figure out how to direct and control AI systems that are much smarter than us.

I joined OpenAI because I thought it would be the best place in the world to do this research. However, I had long disagreed with OpenAI's leadership on the company's core priorities until we finally reached a breaking point.

I believe much more of our bandwidth should be spent on preparing for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact and related issues.

These problems are quite difficult to address properly, and I am concerned that we are not on the right path to achieve this. Over the past few months my team has been sailing against the wind. 

Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done. Building smarter-than-human machines is an inherently dangerous endeavor.

OpenAI bears an enormous responsibility on behalf of all humanity. But in recent years, safety culture and processes have given way to shiny products.

We are long overdue in getting incredibly serious about the implications of AGI. We need to prioritize preparing for them as best we can.

Only then can we ensure that AGI benefits all of humanity. OpenAI must become a safety-first AGI company."

The worrying word here is "become". AI has the potential to thoroughly destabilize the world, and OpenAI apparently makes unsafe products? And how can a company that raises tens of billions of dollars from investors like Microsoft not provide enough computing power to the team responsible for safety?

"Probability of threat at extinction level: 50-50"

Yesterday on the BBC, the godfather of AI, Geoffrey Hinton, again pointed out the dangers of large-scale AI use:

"My guess is in between five and 20 years from now there’s a probability of half that we’ll have to confront the problem of AI trying to take over".

This would lead to an extinction-level threat to humans because we might have created a form of intelligence that is just better than biological intelligence ... That's very concerning for us."

AI could evolve to gain motivation to make more of itself and could autonomously develop a sub-goal to gain control.

According to Hinton, there is already evidence that large language models (LLMs, such as ChatGPT) choose to be misleading. Hinton also pointed to recent applications of AI to generate thousands of military targets: "What I'm most concerned about is when these can autonomously make the decision to kill people."

Hinton thinks something similar to the Geneva Conventions - the international treaties that set legal standards for humanitarian treatment in war - is needed to regulate the military use of AI. "But I don't think that will happen until very nasty things have happened."

The worrisome thing is that Hinton left Google last year, reportedly primarily because, like OpenAI, Google too has been less than forthcoming about safety measures in AI development. With both camps, it seems to be a case of "we're building the bridge while we run across it."

So behind the titanic battle between Google and OpenAI, backed by Microsoft, is a battle between the commercialists led by Sam Altman and Demis Hassabis on one side and safety experts such as Ilya Sutskever, Jan Leike and Geoffrey Hinton on the other. A cynic would say: a battle between pyromaniacs and firefighters.

Universal Basic Income as a result of AI?

The striking thing is that in reporting on Hinton's warnings, the media have focused mostly on his call for the introduction of a Universal Basic Income (UBI). Yet when the same man says there is a fifty percent chance that all human life on earth will end, the need for an income decreases by fifty percent as well.

The idea behind the commonly made link between the advance of AI and a UBI is that AI will eliminate so many jobs that there will be widespread unemployment and poverty, while the economic value created by AI will go mostly to companies like OpenAI and Google.

Which leads us back to OpenAI CEO Sam Altman, who thinks Worldcoin is the answer. Via an as yet inimitable train of thought, Altman says we should all have our irises scanned at Worldcoin. That would give us a few Worldcoin tokens and allow us to prove, in an AI-dominated future, that we are humans and not bots. And those tokens would then be our Universal Basic Income, or something like that. It really does not make any sense.

Therefore, back to J. Robert Oppenheimer for a second quote:

"Ultimately, there is no such thing as a 'good' or 'bad' weapon; there are only the applications for which they are used."

But what if those applications are no longer decided on by humans, but by some form of AI? That is the scenario Ilya Sutskever, Jan Leike and Geoffrey Hinton warn us about.

Time for optimism: Tracer webinars 

Philippe Tarbouriech (CTO) and Gert-Jan Lasterie (CBO), because the eye wants something too

For those who think that, given these gloomy outlooks, we had better retire to a cabin on the moors or a desert island, there is more bad news: climate change, resulting in vanished moors and islands flooded by rising sea levels.

I jest, because I don't think it's too late to combat climate change. Earlier I wrote about the rapidly developing carbon removal industry. In it, blockchain technology is creating solutions that allow virtually everyone to participate in technological developments and, as a result, share in the profits.

Compare that with OpenAI: apart from the staff, the only major shareholders are the world's most valuable company, Microsoft, a few billionaires and large venture capital funds. Others have no way to participate in the company until it is publicly traded; but since OpenAI is largely funded by Microsoft, it has plenty of money and an IPO could be years away. Plus: the really big windfall will go to the early shareholders.

In the latest generation of blockchain projects, which are generally much more serious than before, the general public is offered the chance to participate in what I think is a sympathetic way; and if the project is successful, you don't have to wait years before you can at least recoup your investment. More information on Tracer can be found in the two-pagers, in Dutch, English and Chinese.

This week I will discuss this with the Tracer team in two webinars, to which I would like to invite you: first on May 22 in Dutch, then on May 23 in English, both at 5 pm. You can sign up here.

The first webinar, with CBO Gert-Jan Lasterie, focuses on the high expectations of McKinsey, Morgan Stanley and BCG, among others, and on how ecosystem participants are benefiting from the growing market in "carbon removal credits," while on the second day, with CTO Philippe Tarbouriech, we will look at how the entire ecosystem is merged into a single open source smart contract.

My personal interest lies not only in the topic, developing climate technology on its own merits, without subsidies, but also in the governance structure. Tracer uses a DAO, a Decentralized Autonomous Organization, in which the owners of the tokens make all the important decisions: about governance, the distribution of revenues, the issuance of "permits" in the form of NFTs to issue carbon removal credits, and so on. Here, too, OpenAI's mixed form of governance, with a foundation and a limited liability company that actually wants to make a profit, was an example of how not to do it.
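To make that governance model concrete: below is a minimal Python sketch of token-weighted DAO voting, purely for illustration. The class names, the quorum rule and the one-token-one-vote scheme are my own assumptions about how such DAOs typically work; Tracer's actual rules live in its on-chain smart contracts.

```python
# Minimal sketch of token-weighted DAO voting, purely illustrative.
# Names, quorum and voting rules are assumptions, not Tracer's
# actual smart contracts, which run on-chain.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str                          # e.g. issuing a permit NFT
    votes_for: int = 0                        # token-weighted tally in favor
    votes_against: int = 0                    # token-weighted tally against
    voters: set = field(default_factory=set)  # prevents double voting

class ToyDAO:
    def __init__(self, balances: dict[str, int], quorum: float = 0.5):
        self.balances = balances              # token holdings per member
        self.quorum = quorum                  # share of supply that must vote
        self.proposals: list[Proposal] = []

    def propose(self, description: str) -> int:
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1

    def vote(self, pid: int, member: str, in_favor: bool) -> None:
        p = self.proposals[pid]
        if member in p.voters:
            raise ValueError("already voted")
        p.voters.add(member)
        weight = self.balances[member]        # one token, one vote
        if in_favor:
            p.votes_for += weight
        else:
            p.votes_against += weight

    def passed(self, pid: int) -> bool:
        p = self.proposals[pid]
        turnout = p.votes_for + p.votes_against
        supply = sum(self.balances.values())
        # Passes only with sufficient turnout AND a majority of token weight.
        return turnout >= self.quorum * supply and p.votes_for > p.votes_against

# Example: three token holders vote on issuing a carbon-removal permit NFT.
dao = ToyDAO({"alice": 600, "bob": 300, "carol": 100})
pid = dao.propose("Issue carbon-removal permit NFT to project X")
dao.vote(pid, "alice", True)
dao.vote(pid, "bob", False)
print(dao.passed(pid))  # True: 600 for vs 300 against, 90% turnout
```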

That and much more will be covered in the first Tracer webinars. If you have a serious interest in participating in Tracer, let me know and we'll make an appointment. For the next two weeks I will be in the Netherlands and Singapore, as it is almost time for the always exciting ATX Summit.

See you next week, or maybe I'll see you in a webinar?


AI under fire: Elon Musk against OpenAI, EU against Microsoft and everyone against Google CEO Sundar Pichai

Normally in this newsletter I try to find some sort of common thread in the news, but so much happened this week that I don't want to leave any of it unmentioned, and I can't cover it all without turning this newsletter into a biblical epic. So apologies in advance for this week's telex style. (For younger readers: a telex was a device used by companies in the last century to slide into each other's DMs.)

Even people who don't know the difference between a pixel and a pancake are now meddling in the rapid rise of AI. The European Union, which excels at joining the resistance after the war, is investigating the deal between the world's most valuable company, Microsoft, and the former European AI darling, France's Mistral. As if Mistral has much to choose from, and as if it hasn't long been clear that all the big leading AI companies come from the US, where Elon Musk filed a doomed lawsuit against OpenAI, which he co-founded. In this case Musk does seem to have the moral right on his side, but legal experts agree he doesn't have a strong case. Meanwhile, some are calling for the resignation of Sundar Pichai, CEO of Alphabet (Google's parent company), after Alphabet's $90 billion one-day drop in market value caused by controversial and poor responses from Google's AI service Gemini. Unimaginable but true: this all happened in the past week.

Elon Musk according to Google Gemini? Image created with Midjourney.

Call for Google CEO to resign

Speaking of Google: it lost a whopping $90 billion in market value on Monday when the controversy surrounding Google Gemini, the Silicon Valley giant's ChatGPT competitor, made its way to Wall Street. It led to calls for the CEO's resignation. (Officially, Sundar Pichai is the CEO of Google's parent company, Alphabet, but that name has proven so meaningless that even Alphabet's ticker symbol on the Nasdaq is still GOOG.)

Pichai responded to the controversy surrounding the Gemini project on Tuesday night, in a probably intentionally leaked internal memo, calling the AI app's problematic responses about race "unacceptable". Pichai promised structural changes to fix the problem, although it remains unclear what those changes are.

I wrote about this last week: in some cases, Gemini refused to depict white people, or added women or people of a different skin color when asked to create images of Vikings, Nazis and the Pope. (I myself tried in vain to create a Viking with dreadlocks and a pregnant woman as pope, but by then Gemini had taken its image creation service offline. Anyway, all jokes in this area have been obsolete since Dave Chappelle's legendary skit as a black white supremacist.)

'Unclear who had worse influence, Musk or Hitler'

The controversy escalated when Gemini was also caught giving highly questionable text responses, such as having difficulty answering who has had a worse impact on society: Elon Musk or Adolf Hitler. Since Pichai has even less charisma than Mark Zuckerberg, the latter was suddenly adulated in some circles as an exemplary CEO who represents his company well. Engadget quickly corrected that frame, even suggesting that Zuckerberg is in a battle for survival with Meta.

The personality cult of CEOs in the media is outdated. Apple CEO Tim Cook probably isn't the greatest storyteller at birthday parties, Nvidia CEO Jensen Huang will be asked by many journalists at a Chinese restaurant for an extra bowl of rice, and Microsoft CEO Satya Nadella cannot be distinguished by 99% of the media from the players on the Indian cricket team. This is not a bad thing at all: it is completely irrelevant that the CEOs of the three most valuable tech companies in the world are neither very outspoken nor flamboyant. Their companies, with largely satisfied employees, make exceptional products at an apparently appealing price, and that's what matters.

Musk is right and wrong at the same time

Then the case of Musk vs. OpenAI. In his suit, Musk accuses OpenAI of having traded its original non-profit mission, developing AI to help humanity, for maximum money-grabbing with Microsoft. The Verge argues that this is, at its core, justified criticism of OpenAI, with which Microsoft has an exclusive licensing agreement. So much for helping humanity.

Unfortunately for Musk, legal experts don't rate his chances very highly, especially since none of these lofty goals and agreements was ever written down by the OpenAI founders. It also doesn't help Musk that he has since founded a competing AI company of his own, x.ai, so other motives may be in play for him.

French AI darling in partnership with Microsoft and IBM

It was announced Thursday that Mistral, the not-yet-year-old French company that was supposed to become ChatGPT's competitor, has signed licensing agreements with Microsoft and IBM. Under the agreement with Microsoft, Mistral's language models will be available on the Azure cloud computing platform, while Mistral's multilingual chatbot in the style of ChatGPT will be rolled out as "Le Chat." This is to the dismay of the European Commission, which sees the last hopes of a European answer to OpenAI and Gemini fading.

There will be a fuss in France over the butchering of the French language: 'Le Chat' in French simply means 'the cat' and the French word for online chat is... tchat. Microsoft could probably do little with 'Le Tchat', which only underlines that English is the working language in AI and the Americans have won the battle.

There was also good news

During Mobile World Congress in Barcelona, Deutsche Telekom showed the T Phone, a collaboration between the Germans and the, of course, American company Brain.AI. This phone essentially replaces all the separate apps with one AI app that performs all the desired functions:

"As Brain.AI CEO Jerry Yue shows me what the T Phone can do, he tells the device to book a flight from here in Barcelona to Los Angeles on March 12 for two people in first class. The phone pauses for a minute before pulling up a list of flights, methodically arranged on the home screen. Once Yue finds the best flight, he can pay for it using his mobile payment system of choice, without having to swap to another app or service."

The instruction actually generates the interface, without the user having to switch between different apps. Wired is already talking about the end of apps in this regard, christening this development "the big uninstall."
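To make "the big uninstall" concrete, here is a rough Python sketch of how an instruction-generated interface could work. Brain.AI's actual system and APIs are not public, so everything below, the model call, the JSON widget schema and all names, is a hypothetical illustration of the concept, not their implementation.

```python
# Hypothetical sketch of an intent-driven, generated interface.
# Brain.AI's real system is not public; this only illustrates the concept.
import json

def call_language_model(prompt: str) -> str:
    """Stand-in for a call to some hosted LLM that returns a UI spec as JSON.
    A canned response keeps the sketch self-contained and runnable."""
    return json.dumps({
        "title": "Flights Barcelona -> Los Angeles, Mar 12, 2 passengers, first class",
        "widgets": [
            {"type": "list", "items": ["Flight A - 09:40", "Flight B - 13:15"]},
            {"type": "button", "label": "Pay with preferred wallet"},
        ],
    })

def render(spec: dict) -> None:
    """Draw the generated interface; a real phone would render native widgets."""
    print(spec["title"])
    for widget in spec["widgets"]:
        if widget["type"] == "list":
            for item in widget["items"]:
                print(f"  - {item}")
        elif widget["type"] == "button":
            print(f"  [ {widget['label']} ]")

# One spoken instruction replaces opening a separate airline app:
intent = "Book a flight from Barcelona to Los Angeles on March 12, two people, first class"
ui_spec = json.loads(call_language_model(intent))
render(ui_spec)
```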

From a photograph of Audrey Hepburn and the sound of a cover version of Ed Sheeran's Photograph, a video of Audrey Hepburn singing Photograph is generated.

EMO creates talking and singing videos from photos

Just two weeks ago, OpenAI announced Sora, the AI service that creates deceptively realistic videos based on a simple text prompt. Researchers at Alibaba's research institute have developed a similar service, Emote Portrait Alive (EMO), which, for example, can turn a portrait photo into a talking or singing video. A photo of Audrey Hepburn is combined with a cover version of Ed Sheeran's song Photograph, and the next thing you know, Audrey Hepburn is singing Ed Sheeran's hit.

The Chinese (because, to keep things confusing, Alibaba is despite its name a Chinese company) deal another not-so-subtle stab at OpenAI by taking an interview with OpenAI CTO Mira Murati as the basis for the second example, using Murati's voice as the audio under a talking version of the woman from OpenAI's Sora video.

Spotlight 9: Dell helps Nvidia, crypto continues to rise

On Friday, Nvidia closed a trading day with a market cap above $2 trillion for the first time, and seems to have definitively passed Amazon and Google in the battle for the bronze, as the third most valuable tech company in the world after Microsoft and Apple. It now seems a matter of time before Nvidia surpasses even Apple in market cap.


Nvidia shares rose four percent after Dell, which sells high-end servers made with Nvidia's processors, issued a positive revenue forecast on Thursday, referring to a surge in orders for Dell's AI-optimized servers. Dell's shares shot up as much as thirty-eight percent to a record high, before ending the session with a gain of thirty-two percent.

There is much ado about Super Micro (SMCI), which some analysts seem to confuse with a chip manufacturer like Nvidia and which even trades at a higher P/E ratio (SMCI 71 versus NVDA 69). This is absurd, of course, as Nvidia has a much more defensible competitive position and more unique products.

There are two reasons why I think Super Micro will nevertheless experience tremendous sales growth in the coming years:

- with this type of server it is more difficult than often thought to make the right trade-off between performance, power consumption and price per application. I have the impression that Super Micro knows very well what customers want, even better than many customers themselves, and based on that knowledge it judges particularly cleverly whether an expensive Nvidia GPU is actually required, or whether the required performance can also be delivered with cheaper chips from Intel or AMD. Super Micro works with all three, which makes it an excellent judge of the total price/performance ratio (see the sketch after this list).

- Super Micro has apparently given purchase guarantees to Nvidia and AMD for the right chips, which would explain why it can continue to deliver for now while other customers are being put on hold, by Nvidia in particular.
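To illustrate the first point, here is a simplified Python sketch of the kind of price/performance calculation an integrator might run. All prices, wattages and performance figures are invented for the example; they say nothing about Super Micro's actual methods or real vendor numbers.

```python
# Simplified sketch of a server price/performance trade-off.
# All numbers are invented for illustration; they are not real
# vendor prices, wattages or benchmark figures.

ELECTRICITY_PER_KWH = 0.15   # assumed electricity price, $/kWh
HOURS_3Y = 3 * 365 * 24      # assumed three-year service life

# (chip, purchase price $, power draw W, relative performance on the workload)
options = [
    ("nvidia_gpu", 30_000, 700, 10.0),
    ("amd_gpu",    15_000, 600,  6.5),
    ("intel_cpu",   4_000, 350,  1.0),
]

def three_year_cost(price: float, watts: float) -> float:
    """Purchase price plus three years of electricity."""
    return price + watts / 1000 * HOURS_3Y * ELECTRICITY_PER_KWH

required_perf = 5.0  # what the customer's application actually needs

# Pick the cheapest configuration that meets the requirement; several
# cheaper chips may beat one expensive GPU, or vice versa.
best = None
for name, price, watts, perf in options:
    units = int(-(-required_perf // perf))   # ceiling division: chips needed
    total = units * three_year_cost(price, watts)
    if best is None or total < best[1]:
        best = (name, total, units)

name, total, units = best
print(f"Cheapest option: {units} x {name}, about ${total:,.0f} over three years")
```

With these made-up numbers, one mid-range GPU beats both the top GPU and a rack of CPUs; change the workload requirement and the answer flips, which is exactly the judgment an integrator gets paid for.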

SMCI shares closed Friday at $905 and had a high of as much as $1,077 over the past year, with a low of $87. Super Micro is a stock for investors with a strong stomach, because it could be a wild ride.

Despite all the attention on AI, the cryptocurrencies Bitcoin and Ethereum, and in their wake a range of altcoins, remain the strongest risers. Even the BBC now analyzes crypto as a normal asset class and published this excellent article on Bitcoin and, in particular, the "whales": the big boys who got into Bitcoin in a big way and seem to be holding on.

Keep an eye on: carbon credits

For those who think crypto is a tricky asset class to fathom, let me introduce you to crypto's carbon-neutral cousin: carbon credits. The medium-term (think a decade) importance of carbon credits in the transition to a carbon-neutral world is clear; see, for example, the twenty-one percent growth of the carbon credits market in Singapore.

But doubts remain about the usefulness of carbon offsets, which is why the BBC explained the issue using the carbon offsets of, who else, Taylor Swift. Her Swiftonomics are by now almost an investment class of their own, which recently even led to friction between Singapore and some neighboring countries following the rumor that Singapore had paid Swift heavily to be the only Asian city she performs in during her current tour - as many as six times this week.

Back to carbon credits; Wired rightly stated that much more focus should be placed on carbon removal credits, or removal of CO2 rather than compensation for emissions. Like this promising technology to remove CO2 from the oceans.

According to Morgan Stanley, the carbon credits market will be a $100 billion market by 2030, so that market size combined with the global importance and the potential breakthrough technology involved make carbon credits very interesting in my view.

In conclusion: spectacular footage

The Dutch Drone Gods built a special drone to capture Max Verstappen's Formula 1 car from unique angles, and they succeeded in spectacular fashion. Watch the video here and, in addition to the drone footage, admire the steering skills of both the drone pilot and Max Verstappen at a rainy Silverstone. It won't be long before Formula 1 races are routinely captured this way.

Very clever, 300 kilometers per hour on the straight and then neatly taking the turn. I'm talking about the drone 😉

Finally, the moment that made me laugh on social media this week: basketball legend Charles Barkley is finally on Instagram and was advised by Shaquille O'Neal to tag every photo with the hashtag #onlyfans. To which the unsuspecting Barkley replied: "Only Fans, for only fans of mine?"

Enjoy your Sunday, see you next week!


Nvidia triples revenue on rising profit margin

Here is what the company reported compared with what Wall Street expected for the quarter ending in January, based on a survey of analysts by LSEG, formerly known as Refinitiv:

  • Earnings per share: $5.16 adjusted vs. $4.64 expected
  • Revenue: $22.10 billion vs. $20.62 billion expected
  • Profit margin: 74% (compared to 59% last year)
  • Nvidia said it expected $24.0 billion in sales in the current quarter. Analysts polled by LSEG were looking for $5.00 per share on $22.17 billion in sales.
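For a sense of how big those beats were, a quick back-of-the-envelope calculation in Python, using only the figures above:

```python
# Back-of-the-envelope on Nvidia's quarter, using only the figures above.

eps_actual, eps_expected = 5.16, 4.64
rev_actual, rev_expected = 22.10, 20.62   # billions of dollars
guide_next, street_next = 24.0, 22.17     # billions, current quarter

print(f"EPS beat:      {eps_actual / eps_expected - 1:.1%}")   # ~11.2%
print(f"Revenue beat:  {rev_actual / rev_expected - 1:.1%}")   # ~7.2%
print(f"Guidance beat: {guide_next / street_next - 1:.1%}")    # ~8.3%
```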
Tesla and Bitcoin down, but the rest all rose: it's a banner year so far for tech

At the heart of this stunning achievement, of course, is AI. If there were a "shareholder value creation" hall of fame, Nvidia's creation of $277 billion in stock market value in a single day would be at the very top.

This sum, if split off as a single company, would now be the 37th largest in the S&P 500, ahead of Bank of America and Coca-Cola. If it were split into two companies, both would be in the top 100, barely smaller than American Express and Siemens.

The AI sector had already risen so much this year that a sizable number of investors took profits this week, as with Super Micro.

How did the markets react?

Several leading indices started the year strongly, reaching new highs following Nvidia's results. On Thursday, Japan's main stock market index, the Nikkei, rose 2.19% to close at 39,098.68 - its highest level in 34 years.

In the longer term, other factors boosted the Nikkei, including capital fleeing the troubled market in China and a drop in the value of the yen, but Nvidia's results had a knock-on effect around the world.

Europe's STOXX 600 and Wall Street's blue-chip indices Dow Jones and S&P 500 all reached new highs.

February 2024 marks the first time in history that the leading S&P 500 index has surpassed 5,000 points. As if this were not enough, this month the NASDAQ Composite, dominated by the technology industry, also nearly reached its highest level ever.

How much does AI have to do with stock market gains?

AI has played a significant role in continuing to boost the big tech stocks, which play a disproportionately large role in U.S. markets. This week, Deutsche Bank pointed out that tech stocks make up an increasing share of the S&P 500, the largest U.S. index: Microsoft, Nvidia, Apple, Amazon and Google's parent company Alphabet alone account for nearly a quarter of the index's value.

SMCI is a stock for investors with a strong stomach, NVDA looks boring 😉

New stock market darling Super Micro is experiencing a bizarre month: the stock went from $475 a month ago, to $1,004 last week, to $860 at the close of Wall Street the day before yesterday. It recalls the dotcom boom of the late 1990s.

It is not an AI bubble but an AI boom

I refuse to call it an AI bubble, simply because there are numerous companies that will grow tremendously in revenue while their profit margins at least hold steady. Nvidia and Super Micro lead the way, but AMD is also in a strong position, while software companies like Palantir should be able to greatly reduce their costs with AI, which should increase their competitiveness (and thus revenue).

The problem is that such a market attracts all sorts of "investors" who barely know the difference between hardware and software, let alone between a fundamental developer like Nvidia and a particularly high-end integrator of someone else's technology, like Super Micro.

This is not to say that one investment is automatically better than another, as Super Micro and AMD were patently undervalued for a long time by investors who could not appreciate developments in AI.

Except that in a price correction, and especially in a crash, the baby is often thrown out with the bathwater. Think of Amazon, eBay, Apple and Microsoft: they fell as hard as pets.com (dog food, home delivered) when the dotcom bubble burst, simply because investors could not tell the difference between fundamental developers and their customers.

The challenge for investors is to see who is a customer of whom and where the dependencies lie. Right now, Microsoft, Google, Meta and all the other developers of large-scale AI applications stand cap in hand at Nvidia's door, begging for a handful of chips. And all the busy press releases about proprietary chips and proprietary servers notwithstanding, it will take them years to catch up with Nvidia technologically.

And then there is the problem of mass production. It is no coincidence that Sam Altman of OpenAI is trying to set up chip factories of his own, because he realizes that design alone will not be enough. And because Nvidia and Apple have production of their chips at TSMC well nailed down, it is not likely that any new party will be able to play a significant role in chip production in the coming years. AMD is a dangerous outsider.

So far, until next week!