
Harari: For the first time, no one knows what the world will look like in 20 years

Yuval Noah Harari was a guest at Stephen Colbert's late night talkshow, leading to an unexpectedly relevant conversation.

Harari: "I’m a historian. But I understand history not as the study of the past. Rather it is the study of change, of how things change, which makes it relevant to the present and future.”

Colbert: "Is it true that we are going through some sort of accelerating change?"

Harari: "Every generation thinks like that. But this time it’s real. It is the first time in history that no one has any idea what the world will look like in twenty years. The one thing to know about AI, the most important thing to know about AI, it is the first technology in history that can make decisions by itself and create new ideas by itself. People compare it to the printing press and to the atom bomb. But no, it is completely different."

Technology that makes its own decisions

Perhaps my fascination with the work of Harari, best known as the author of Sapiens, stems from the fact that I am a historian myself (history of communication), but have found that study to be most useful in assessing technological innovations. Harari confirms the idea that many of us have, that current technology involves a completely different, more pervasive and comprehensive innovation than anything the world has seen to date.

With his conclusion that AI is an entirely new technology, precisely because perhaps as early as the next generation AI will be able to make decisions on its own, Harari identifies the core challenge, and he does so in the very week that Amy Webb presented the new edition of the leading Tech Trends Report, themed "Supercycle". (The report is available here and this is the video of Webb's presentation at SXSW.)

Supercycle

Webb: ""

- Amy Webb, CEO Future Today Institute

Webb, like Harari, believes that technology will affect all of our lives more strongly than ever.

The face OpenAI CTO Mira Murati made after the simple question, "Have you used YouTube videos to train the system?"

OpenAI CTO said 'dunno'

If Harari and Webb are right, it is all the more shocking what Mira Murati, the acclaimed Chief Technology Officer of OpenAI, maker of ChatGPT and others, blurted out during an interview with the Wall Street Journal. The question was simply whether OpenAI used footage from YouTube in training Sora, OpenAI's new text-to-video service.

OpenAI is now under pressure on this issue, because the New York Times has filed a lawsuit over the allegedly illegal use of its content in training ChatGPT. Getting this question wrong could therefore provoke a new lawsuit from the owner of YouTube, which happens to be Google, OpenAI's biggest competitor, of all companies.

Murati obviously should have expected this question and could have given a much better answer than the contorted face she pulled, combined with regurgitating some lame lines that can be summed up as "don't call me, I'll call you." It is a sorry showing, coming so soon after OpenAI already went through a genuine palace drama surrounding CEO Sam Altman.

These people are developing technology that can make its own decisions and are undoubtedly extraordinary technically and intellectually, but as human beings they lack the life experience and judgment to realize what impact their technology can have on society.

Your car works for your insurance company?

It is downright miraculous that Zuckerberg can still sleep after the Cambridge Analytica scandal, which is a consequence of peddling our privacy for financial gain. It is now not just the big tech companies that are guilty of this revenue model; even car manufacturers have joined the guild of privacy-devouring crooks.

LexisNexis, which builds consumer profiles for insurers, turns out to have received data on every trip taken by buyers of General Motors cars, including when they drove too fast, braked too hard or accelerated too quickly. The result: higher insurance premiums. As if you needed another reason never to buy a car from this manufacturer of unimaginative, identity-less vehicles.

Google Gemini does not do elections

Partly because of stock price pressures, tech companies are forced to release moderately tested applications as quickly as possible. Think of Google with Gemini, which wanted to be so politically correct that it even depicted Nazis of Asian descent. Sweetly intended to be inclusive, but totally pointless.

This fiasco caused such a stir that Google announced Tuesday that Gemini will not provide information about any of the elections taking place around the world this year. Indeed, even to the innocent question "What countries are holding elections this year?" Gemini now replies, "I am still learning how to answer this question." I beg your parrrrdon?

Google Gemini does know all about Super Mario

Use Google's search engine and you come right to a Time article that begins with the sentence, '2024 is not just any election year. It may be *the* election year.' According to ChatGPT, elections will take place this year in the US, Taiwan, Russia, the European Union, India and South Africa; a total of 49% of the world's population will be able to go to the polls this year.

So for meaningful information about the future of the planet, Google Gemini is not the place to be. Fortunately, I do get a delightfully politically correct answer to my question: "Did Princess Peach really need to be rescued by a white man? Wasn't Super Mario just being a male chauvinist?" Reading the answer, I get the feeling that Google Gemini has been fed a totally absurd worldview by well-intentioned people. The correct answer would have been: "Super Mario is a computer game. It's not real. Go worry about something else, you idiot."

Anti-monarchists claim that this photo has been doctored. I deny everything.

Speaking of princesses, there is one who claims that, like us mere mortals, she sometimes edits photos herself. At least, so says the X account on behalf of Princess Catherine and Prince William. The whole fiasco not only draws attention to the issues surrounding the authenticity of photos, but also demonstrates the need for digital authentication when sending digital messages. It would be helpful if it were conclusively established that the princess herself sent the message that was signed with the letter C.
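To make that concrete: a digital signature scheme such as Ed25519 would let anyone verify that a statement really came from the palace, provided the palace had published a public key beforehand. Here is a minimal sketch in Python using the cryptography library; the key names and the message are made up for illustration, not taken from any real system.

```python
# Minimal sketch (illustrative only): sign a statement with a private key and
# let anyone holding the matching public key verify that it was not altered
# and really came from the key holder.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

palace_private_key = Ed25519PrivateKey.generate()    # kept secret at the palace
palace_public_key = palace_private_key.public_key()  # published for everyone

message = b"Statement from the palace, signed with the letter C"
signature = palace_private_key.sign(message)

# Anyone can now check the statement against the published public key.
try:
    palace_public_key.verify(signature, message)
    print("Valid signature: this statement really came from the holder of the key.")
except InvalidSignature:
    print("Invalid signature: the message was altered or signed by someone else.")
```

The point is not the specific algorithm but the principle: verification requires only the public key, so authenticity no longer depends on trusting whoever happens to run the social media account.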

Where do we go from here?

Globally, people are wrestling with how to deal with and potentially regulate the latest generation of technology, which is also a source of geopolitical tension. See how China is reacting to the news that ASML is considering moving out of the Netherlands.

The possible ban on TikTok, or a forced sale of the U.S. branch of TikTok by owner ByteDance, will not happen as quickly as last week's news coverage might suggest. By the way, it is interesting what happened in India when TikTok was banned there in 2020: TikTok's 200 million Indian users mostly moved on to Instagram and YouTube.

India announced this week that a proposed law requiring government approval for the launch of AI models will be withdrawn. Critics said the law would slow innovation and could hurt India's competitiveness; the economic argument almost always wins.

The European Union is beating its chest that the law for AI regulation has been approved, but it will be years before it takes effect. It is unclear how the law will protect consumers and businesses from abuse. Shelley McKinley, the chief legal officer of GitHub, part of Microsoft, compared the U.S. and European approaches as follows:

"I would say the EU AI Act is a ‘fundamental rights base,’ as you would expect in Europe,” McKinley said. “And the U.S. side is very cybersecurity, deep-fakes — that kind of lens. But in many ways, they come together to focus on what are risky scenarios — and I think taking a risk-based approach is something that we are in favour of — it’s the right way to think about it.”

Aviation as an example

Lawmakers often tend to create a new regulator in response to an incident, think of the U.S. Department of Homeland Security after 9/11. The EU is now doing the same with the new European AI Office, for which qualified personnel is being recruited.

It shows a far too narrow view of digital reality. As the aforementioned Tech Trends Report correctly shows, it's not just about AI: the "tech supercycle" is created by the almost simultaneous breakthrough of various technologies, such as, in addition to AI, bioengineering (submissions for a good Dutch translation are most welcome!), Web3, the metaverse and robotics, to name just a few.

It would therefore be better to set up a digital technology regulator similar to the European Medicines Agency EMA or the U.S. aviation authority FAA. Not that things are flawless at the FAA right now, far from it, but the FAA has spent decades ensuring that aviation is the safest form of transportation.

It is precisely the relaxation of oversight, coupled with the greed of Boeing's management, that has created dire situations such as Boeing personnel saying they would never want to fly on the 787 themselves. That is exactly the situation to avoid in digital technology, where many former employees are already coming forward about abuses and mismanagement with major social consequences.

Spotlight 9: Bad week for AI, but what will next week bring?

It was a week of correction for AI stocks, but what happens when Nvidia announces its latest AI chip on Monday...

It was a week of hefty corrections after an extremely enthusiastic start to the year in tech stocks and in crypto. Bitcoin lost 5% and Ethereum as much as 10%. My completely made-up AI Spotlight 9, the nine stocks that I think will benefit from developments in AI, also took some hefty knocks.

On crypto, I like to quote Yuval Noah Harari again, this time on the Daily Show: "Money is the greatest story ever told. It is the only story everybody believes. When you look at it, it has no value in itself. The value comes only from the stories we tell about it, as every cryptocurrency-guru or Bitcoin-enthusiast knows. It is all about the story. There is nothing else. It is just the story."

Media critic Jeff Jarvis believes nothing of the doom-and-gloom talk about rapidly advancing technology and even scolded people like investor Peter Thiel and entrepreneurs Elon Musk and Sam Altman. It was striking to encounter Jarvis in one of my favorite sports podcasts. Jarvis apparently does not realize that just his appearance on this sports show to talk about AI underscores the impact of technology on everyday life. He is not invited to talk about the role of parchment, troubadours or the pony express.

Million, billion, trillion

Where startups once started in someone's garage, AI in particular is the playing field of billionaires. The normally media-shy top investor Vinod Khosla (Sun, Juniper, Square, Instacart, Stripe, etc.) publicly opened fire on Elon Musk after Musk filed a lawsuit against OpenAI, not entirely coincidentally a Khosla investment.

OpenAI top man Sam Altman appears to still be in talks for his $7 trillion chip project with Abu Dhabi's new $100 billion sovereign wealth fund MGX, which is trying to become a frontrunner in AI with a giant leap. Apparently, Altman has also been talking with Temasek, a leading sovereign wealth fund of Singapore. These talks involve tens of billions.

From Harari's perspective, let's look at Nvidia's story. The company is giving developers a preview of its new AI chip this week. How long can Nvidia and CEO Jensen Huang wear the crown as the dominant supplier of AI chips in the technology world? Tomorrow, Huang will walk onto the stage of a hockey arena in Silicon Valley to unveil his latest products. His presentation will have a big impact on my AI Spotlight 9 stock prices in the coming weeks, and maybe even months.

The shelf life of a giant

Payment processor Stripe, also a Khosla investment, reported in its annual letter that the average length of time a company remains in the S&P 500 index has shrunk sharply in recent decades: from 61 years in 1958 to 18 years now. Companies that cannot compete in the digital world are struggling. With the huge sums currently being invested in technology, that trend will only accelerate.

In conclusion

In that context, it is particularly fun and interesting to see that in Cleveland good old mushrooms are eating up entire houses and cleaning up pollution, even PFAS. Perhaps not an example of Amy Webb's bioengineering, rather bio-remediation, but certainly a hopeful example of how smart people are able to solve complex problems in concert with nature.

Have a great Sunday, see you next week!


Does AI mean the end of the world for Do-It-Yourselfers?

'Reducing the risk of extinction from apple pie should be a global priority, alongside other societal-scale risks such as pandemics and nuclear wars.'

If this had been apple pie and not AI, the Journal would have opened with it.

Had that been the one-line statement made public last Tuesday by dozens of leaders in the field of AI (artificial intelligence), it would have been bigger world news than it is now. Only it did not mention apple pie as a threat to the world, but AI. That made the statement a lot harder for journalists to interpret, because AI is the water of technology: it can be used to give people a drink, or to waterboard them. The line between the two is clear: it comes down to who decides when to stop drinking.

The fear is that in the case of AI, the software itself decides when something happens. Or stops. I once started blogging and nowadays write this newsletter because it forces me to keep up with my field and then organize my thoughts publicly. So herewith my immodest attempt to put the latest developments in AI into a broader perspective.

Who are these people?

First, that statement last Tuesday, issued by the Center for AI Safety (CAIS, pronounced Kees), whose mission is "to reduce the risks of artificial intelligence on a societal scale." We learned from the Watergate scandal that the first thing you do is follow the money, so where does Kees get its money? The Open Philanthropy Foundation donated over $5 million and is in turn funded by former Wall Street Journal reporter Cari Tuna and Dustin Moskovitz, one of the founders of Facebook. (You can guess for yourself which of that couple's piggy banks was raided hardest for this donation. Oh well, at least the money Facebook makes from selling out its users' privacy will be spent on something useful.)

In Europe, tricky dossiers usually involve a covenant between government, industry and a party that policymakers describe as "civil society" in the kinds of papers that nobody reads. America is the land of the one-liner, so there they arrived at this chunky phrase: "Reducing the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear wars."

And that was it; that is all there is to the 22-word statement. It led to rather vacuous media reports from which you can almost read the reporters' despair, along the lines of "my goodness, do I now have to explain to what extent this statement resembles Robert Oppenheimer's warnings about the danger of nuclear weapons, or shall I just run through the list of signatories?" It became mostly the latter, of course, and you will recognize most of the names from previous newsletters. CNN bravely lists: "The statement was signed by prominent industry officials, including OpenAI CEO Sam Altman; the so-called 'godfather' of AI, Geoffrey Hinton; top executives and researchers at Google DeepMind and Anthropic; Kevin Scott, chief technology officer of Microsoft; Bruce Schneier, the pioneer of Internet security and cryptography; climate advocate Bill McKibben; and musician Grimes."

Who didn't sign?

The latter is kind of funny, because Grimes is the mother of what is, as far as we know, Elon Musk's youngest son, who is even named X Æ A-Xii because Æ is the elven spelling of AI. (Read that last sentence again and realize that this is a defenseless child.) The name Elon Musk itself was missing from the signatories. Other people who conspicuously did not sign the statement, and whom it seems to me CNN could sensibly have asked why, are Jeff Bezos (founder and executive chairman of Amazon), Sundar Pichai (CEO of Alphabet, Google's parent company, and the man behind this brilliant speech), Marc Andreessen and Ben Horowitz (founders of Andreessen Horowitz, the leading investor in technology companies), Mark Zuckerberg (CEO of Meta, formerly known as Facebook, buyer of former competitors like Instagram and WhatsApp) and Peter Thiel (financier of, among others, LinkedIn, Yelp, Facebook and Palantir, and through his Founders Fund also Airbnb and SpaceX). And further missing are just about all players in the technology field from India, South Korea, Japan and China.

All of these parties have the knowledge, clout and motivation to become major players in the global market for AI applications. And they have not signed the no doubt well-intentioned declaration meant to ensure that the world does not perish because of AI. Of course, that doesn't mean the top bosses of the tech world will try to destroy the world with AI; after all, killing off the world's population would be bad for their quarterly numbers.

What about Bill Gates?

Microsoft co-founder Bill Gates publicly hopes that Amazon and Google will lose out to AI. Beyond that, he has little influence on the public debate about AI; it is no coincidence that CNN's list of signatories did not even mention Gates, while it did mention Elon Musk's ex. I place little value on the technology predictions of the man who, in his November 1995 book The Road Ahead, called the Internet not the future but a dirt road compared to the information superhighway he himself would build in the form of MSN.

It remains incomprehensible to me that Gates does not provide more analysis of the business aspects of technology, but keeps musing on the social implications. Because precisely as an entrepreneur he remains, in my view, unparalleled. His vision is brilliant when measured over, say, 24 months rather than 24 years.

Remember from Bill Gates especially these two achievements:

  • IBM was looking for an operating system for its new product, the personal computer, in 1981. Gates had nothing on hand but bought the obscure Quick and Dirty Operating System (QDOS) from a small software maker for seventy-five thousand dollars, changed the name to MS-DOS (because the squeaky-clean IBM could not do anything with the word Dirty) and did not sell the software, but licensed it to IBM on a non-exclusive basis. That form of licensing was virtually unknown in the software world. Primarily on the basis of this one deal, Microsoft became the world's most valuable company and Gates the richest man in the world.
  • In 1995, Microsoft was the most powerful company in the technology world and Gates the world's richest man. Only, the whole image of Microsoft and Gates was focused on a world where computers barely worked together, let alone communicated with one another or enabled transactions. While Jeff Bezos was a few miles away building Amazon into an e-commerce machine and would follow in Gates' footsteps as the world's richest man, Gates wrote a memo to Microsoft's top management that would become known as "the Internet tidal wave." In essence, Gates said: "I was wrong. We need to make all our products Internet-capable." I had never seen a CEO confess his own mistakes in such a way and turn the entire corporation around and refocus it in such a short time. Admitting that he had overlooked the Internet struck me as great. (And I was relieved, because my brainchild was called Planet Internet and it's not good to wake up every day thinking the world's richest man is saying your product sucks.)

His book The Road Ahead would come out six months later and already be dated upon publication. It was especially odd because Gates had so strongly emphasized the importance of the Internet in his memo. The Internet, Gates orated in his book, was built on antiquated technology and therefore too limited to transmit information, communications and transactions over it on a large scale.

What happened next was as hilarious as it was symbolic, because his book required a second version as quickly as his software did: just a month after the book was published, Gates began work on a second version, which appeared in October 1996 and was no less than 20,000 words longer, just as his software counted more and more lines of code. In the second version of the book, Gates made the Internet much more central.

The only thing I liked about The Road Ahead was that Gates had written it with then Microsoft CTO Nathan Myhrvold, a former world barbecue champion who had studied under Stephen Hawking. From Myhrvold, I would have liked to have read more.

Bill Gates is like a nerd version of Marco van Basten: a top player who is phenomenal as an analyst, but a failure as a coach. I sincerely hope Bill Gates will write about applications of AI, about business models, opportunities and threats; about everything except what it will mean for society. And full disclosure: my opinion of Gates is independent of my own experiences with him and Microsoft during the browser war.

Impact, a Belgian employment agency for technicians, came up with this nice advertisement

Why is AI so promising and so dangerous?

Far more important than Gates' opinion on AI, I found this article about a U.S. Air Force officer who described a drone that, driven by AI, went rogue and wanted to kill its own operator. The first shock was that this had actually happened, but it turned out to be merely a scenario being discussed within the U.S. military. Thank goodness, because it is the ultimate Terminator nightmare when the monopoly on violence falls to computers.

While a huge technological achievement, even Nvidia's new supercomputer, which I wrote about last week, will not lead to a mass breakthrough of AI applications. Such computers are so expensive and complex that only a small number of companies have the capabilities to use them properly. Of course, it is a huge revenue generator for Nvidia, as Amazon, Microsoft, Meta and Google will gladly buy these machines en masse, but it is open source AI that looks set to deliver the definitive breakthrough.

These are not my words; this is according to a leaked internal Google document. According to that document, the open source AI community is so active and highly developed that as soon as more accessible development tools emerge, neither OpenAI nor Google will be able to keep up. While OpenAI and Google use proprietary LLMs (Large Language Models), the open source models are actually ready for public use. This makes the group of global developers larger than the OpenAI and Google staffs combined, the thinking goes.

Hooray for QLoRA?

And now it appears those cheaper tools will be available within a year, because it seems to be possible to develop AI applications on off-the-shelf gaming PCs. The LLMs used to develop generative AI applications normally run only on enormously powerful computers. That is the reason for the explosive share-price rises of the makers of such hardware, such as Nvidia and Marvell, which I wrote about last week. As one reader wrote: "QLoRA completely changes the landscape. You can do the same 8x80GB work on a single 48GB card. From an 8 x $15K piece of kit to a souped-up PC."

Translated into slightly plainer language: the fact that you can cram 96 billion 4-bit weights into 48GB (which is huge) means that AI development is now within reach of hobbyists. What normally requires a hundred thousand euros' worth of equipment can now be done for a few thousand euros. For enthusiasts: here is the scientific article. And here is the tweet predicting that within a year these computers will be commonplace.
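For readers who want to see what that looks like in practice, here is a minimal sketch of the QLoRA recipe using the Hugging Face transformers, peft and bitsandbytes libraries: the base model is loaded in 4-bit precision and only small LoRA adapter matrices are trained. The model name is just a placeholder (any open LLM checkpoint would do) and the settings are illustrative, not the exact configuration from the paper.

```python
# Sketch of QLoRA-style fine-tuning: 4-bit quantized base model + trainable
# LoRA adapters, so the whole thing fits on a single workstation GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "huggyllama/llama-7b"  # placeholder; substitute any open checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights as 4-bit values
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as introduced by QLoRA
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the actual math in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Only these low-rank adapter matrices are trained; the 4-bit base stays frozen.
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

The trick is that the frozen base model takes up a fraction of its usual memory, while the handful of adapter weights that actually get trained fit comfortably next to it, which is exactly why a single 48GB card can replace a rack of 80GB ones.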

AI for Do-It-Yourselfers

The question is what applications will be built if hobbyists, enthusiasts and rogues will have the ability to create AI applications. And the follow-up question is how to monitor and regulate this, if at all possible.

Finally for this piece on AI:

Notable links:

  • Artifact, from the founders of Instagram, is a personal news reader. Just downloaded, but not yet tested, with the slogan: "Finally, an AI-driven news feed with you in control." Because no startup can do without the word AI in its slogan in 2023. I'd love to hear readers' opinions, anonymity guaranteed.
  • Bold: a detailed forecast of the development of AI Singularity through 2029. Someone should check it annually for accuracy; I will certainly forget to.
  • Meta (formerly Facebook) wants every employee assigned to a particular office to show up there at least three days a week starting in September. Unfortunately, it is not clear what percentage of employees this will apply to. It remains to be seen whether this will cause many talented employees to leave, given that as many as 150,000 jobs were lost in the US tech sector this year alone.

Event of the week: ATxSummit Singapore

A not-so-subtle humblebrag: the creator of your Sunday tech newsletter is participating Tuesday in a panel on Web 3.0 beautifully titled "Everything, Everywhere, All at Once." It is part of the ATxSummit in Singapore, where "governments, businesses and knowledge centers gather to discuss the role of technology in our shared digital future."

27 recipients of an email about a panel in Singapore with four participants

People often ask what working in Singapore is like, and I usually answer that question with "intense." Everyone is professional, from receptionist to minister, focused and dedicated. At the same time, I worry about whether people are relaxing enough and not working too hard. See the screenshot above of an email about the preliminary online meeting for our panel, which consists of only four participants and yet went out to 27 people. You'd think this would lead to a huge bureaucracy, but officials, for example, answer email inquiries substantively within three business days. Sometimes I wish everyone in Singapore could take a daddy or mommy day once a week.

Since I will have access to a make-up artist, something that has been at the top of my wish list for years, I expect there will be a livestream that I will share through my accounts on Twitter, LinkedIn and Instagram. The panel will take place from 9 a.m. to 9:45 a.m. Dutch time. Advance warning: it's only for the connoisseur/lover of concepts like "participatory data" and "decentralization of identity."

New in the Spotlight 9: Nvidia

For years, the technology sector has been talking about a handful of dominant players: Alphabet (Google's parent company), Amazon, Apple, Facebook (now Meta) and Microsoft. Since this week, we can count Nvidia among them, as it passed Meta in market value. For a while, Nvidia was even "a trillion dollar company," worth more than a trillion dollars: a thousand times a billion. (A Dutch miljard is a billion in English and a Dutch biljoen is a trillion. The English are not the inventors of the useless inch and driving on the left for nothing.)

Past Meta in market value and up 175% this year: Nvidia belongs in the Spotlight 9

Therefore, in my completely arbitrary survey of key economic indicators for the tech world, my Spotlight 9, I threw out the Dow Jones Index and replaced it with Nvidia. After all, for the overall market, the S&P 500 is already in the list, for crypto the tokens Bitcoin and Ethereum, and that leaves no fewer than six indicators of stock market sentiment for the tech sector.

But beware: anyone who buys a share of Nvidia now does so at a P/E ratio of over 200! Compare that to Apple, with a P/E ratio of 30, and then I dare say it is unrealistic to expect Nvidia to grow more than six times as fast as Apple. In other words, Nvidia stock is extremely expensive, regardless of that AI-driven demand for GPUs and the new Nvidia supercomputer.
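The back-of-the-envelope arithmetic behind that claim, with purely illustrative numbers taken from the paragraph above rather than live market data:

```python
# Rough comparison of what the market pays per dollar of current earnings.
nvidia_pe = 200  # approximate P/E ratio cited above
apple_pe = 30    # approximate P/E ratio cited above

premium = nvidia_pe / apple_pe
print(f"Nvidia trades at roughly {premium:.1f}x Apple's earnings multiple")
# ~6.7x: to justify that gap, Nvidia's earnings would have to grow roughly
# that much faster than Apple's before the two multiples converge.
```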

Speaking of Apple, I wrote, to the annoyance of a number of Apple employees who I thought I could count among my circle of friends until that article, about the long-awaited Apple mixed reality headset, probably called the Apple Reality Pro. This device, the first all-new device since the Apple Watch in 2015, is expected to be unveiled at WWDC on Monday. If it really is something special, I will write an extra edition of this newsletter on Tuesday morning. If not, thanks for your interest and see you next week.