
Interesting stuff to click, read or watch in January 2024

  • Interesting food for thought: legendary investor Fred Wilson (Twitter, Etsy, Kickstarter, Coinbase) describes a new business model for Web 3 and perhaps AI applications: minting. At its core: shared ownership with users.
  • Sam Altman, the founder and CEO of OpenAI, is seeking billions to build a network of chip factories. The goal is clear: become less dependent on NVIDIA. Remarkable that the man manages to raise billions as a side hustle.
  • Investor and LinkedIn founder Reid Hoffman shares his thoughts on "a plausible reality." Note his description of a search for an oasis rather than a mirage.

Notable tech news in January 2024

World's most valuable company seriously hacked

Russian hackers breached companies including Microsoft and Hewlett-Packard. Microsoft shrouds what happened in mist: "The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes."

That's something like describing the 9/11 WTC attacks as, "some grumpy passengers paid uninvited visits to the cockpits and then rudely parked the planes in a well-known commercial real estate property near a landmark statue."

What Microsoft admits here, according to experts, is that the hackers gained the same access as a system administrator and could then do anything they wanted, including reading the email of Microsoft's own management. It's embarrassing for Microsoft, which had just been so proud to pass Apple as the world's most valuable company, with a stock market value of as much as $3 trillion ($3,000 billion).
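For readers who run a Microsoft 365 tenant themselves: the kind of grant Microsoft describes is visible in your own tenant data. The sketch below is my own illustration, not Microsoft's guidance; it assumes you already have a Microsoft Graph access token with the Application.Read.All permission (placeholder below), uses only standard Graph endpoints, and skips paging and error handling.

```python
# Sketch: flag applications that hold full_access_as_app on Exchange Online.
# Assumptions: a Graph token with Application.Read.All (placeholder) and
# fewer results than one page (no paging handled).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token-with-Application.Read.All>"}

# 1. Find the service principal that represents the Exchange Online resource.
sp = requests.get(
    f"{GRAPH}/servicePrincipals",
    params={"$filter": "displayName eq 'Office 365 Exchange Online'"},
    headers=HEADERS,
).json()["value"][0]

# 2. Look up the id of the app role whose value is 'full_access_as_app'.
role_id = next(r["id"] for r in sp["appRoles"] if r["value"] == "full_access_as_app")

# 3. List every application that has been granted that role on this resource.
assignments = requests.get(
    f"{GRAPH}/servicePrincipals/{sp['id']}/appRoleAssignedTo", headers=HEADERS
).json()["value"]

for a in assignments:
    if a["appRoleId"] == role_id:
        # Any unexpected entry here can read every mailbox in the tenant.
        print(a["principalDisplayName"], a["principalId"])
```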

A hack of Microsoft at this level also shows that companies and organizations should make a special effort to store as little sensitive information centrally as possible, because 100% security is a myth.

Boys play shooter games, girls are on social media

Founder of famed startup incubator Y Combinator Paul Graham shared a remarkable message: research shows that boys are becoming more conservative and girls more progressive.

Graham thinks he knows the reason: "This trend has a blandly obvious explanation. Boys and girls used to get along more. The girls made the boys more liberal, and the boys made the girls more conservative. But now the boys are sitting at home playing shooter games, and the girls are sitting at home posting on Instagram."

To my knowledge I am childless so not an expert, but this sounds like a plausible explanation. Except that young people are mostly on TikTok and those over 25 are on Instagram. Facebook is for grandparents.

Fake nude turns out to be worse than real death

In the United States, alarm bells have been ringing since fake nude photos of celebrities such as Taylor Swift, probably created with AI, flooded social media. Of course, you wouldn't wish on anyone to be dragged through the mud as vulgarly as TayTay was, but as a European it remains remarkable that while Americans are taking action against fake nudity, the most gruesome images of children's corpses in war zones spread through social media with hardly a stir.

AI and crypto mining fast-growing energy guzzlers

The growth of the Internet, the AI explosion and the resurgence of crypto and the associated mining will cause data center energy consumption to double in the next two years, the International Energy Agency writes in a new report. In Ireland, for example, which is eager to reel in large data centers, as much as a third of all electricity is expected to be consumed by data centers as early as 2026.

Meta propels NVIDIA stock price upward

Mark Zuckerberg is doing his part for global warming: he reported on Instagram that his company Meta has purchased as many as 350,000 H100 GPUs from NVIDIA and will have a total of some 600,000 H100 equivalents of compute in stock.

It will be interesting to see how Meta will link its huge reach with AI to develop new applications. It will also be interesting to see if Zuckerberg uses a make-up artist again in his next video to make him look like the reincarnation of an embalmed Lenin.


January was a jubilant month - in the tech world, that is.

After a short winter break, I want to look back on a strange January in this year's first newsletter. While much of humanity lies awake over the wars in Ukraine, Gaza and Syria, and half the world's population votes on major issues in over 50 countries this year, including India, Mexico, the United Kingdom, Russia and the United States, January was one big celebration month in the tech world. No dry January here: the new year kicked off with a month-long party.

It was another wild kegger at the World Economic Forum in Davos. Image: created with Midjourney.

800 women on a mountain seems more than it is

Speaking of parties, the annual reunion of old white men on a mountain was another great success. The World Economic Forum (WEF) congratulated itself on the participation of 800 women in the official program, a whopping 28% of participants.

Things shouldn't get much crazier: if they are not careful, the WEF will count almost one woman for every two guys this century! The question arises whether secretaries were counted as delegates, because participants reported that the number of women they encountered at Davos was about as high as the number of MMA fighters at the annual Ladies Hairdressers Day.

Women are in short supply at WEF, as are dark-skinned people. Like motorcyclists passing each other on the road or penguins in a zoo, I have caught myself in Davos politely waving back or nodding to fellow pigmented people.

A week later, no WEF participant can remember what else was discussed or agreed upon, because unlike the COP climate conferences, for example, Davos is not about jointly formulating measurable goals. There is old-fashioned networking and job hunting.

Global domination started at WEF?

I am sorry to disappoint the conspiracy theorists, but there is no talk at WEF of world domination by a small ruling elite at the expense of the common people; there is not much thought about the future at all. WEF excels mainly in zigzagging into the future while looking in the rearview mirror - with our glasses in the cheese fondue.

Those attending WEF without a driver have snow boots to get around. Photo: author's own.

Don't get me wrong: that annual January week in Davos is above all an opportunity to very efficiently have many meetings in a short period of time, and that saves travel time in the rest of the year. In the years I attended the WEF, 2018 for the last time, I met several people who crammed up to 20(!) appointments into a day. But those discussions were almost exclusively about the here and now. Only on rare occasions was the day of tomorrow discussed, let alone the rest of the century.

Just look at the WEF agendas of the last 50 years: in the 1970s, after the oil crisis, the main topics discussed were ... oil and energy. Today, the conference is about AI and climate change. But only after ChatGPT took the world by storm, in the wettest and warmest year in history.

Just as the Internet was barely discussed at the WEF in the 1990s, until it had become indisputably the fastest-growing medium in history, climate change was a secondary issue at Davos for many years. Instead, there was dithering and pontificating on topics like the Fourth Industrial Revolution, typically one of those WEF topics you never hear anyone talk about anymore.

Gucci has a trust layer

A standout moment of WEF 2024 was the monologue delivered by Salesforce CEO Marc Benioff during an otherwise soporific panel. It remains fascinating how easily and casually many Americans can present, tossing around one catchphrase after another like snowballs at a winter children's party.

Benioff mentioned that Salesforce has an AI product, humbly christened Einstein, and casually remarked, "it has a trust layer." People in Durex's marketing department must have spontaneously burst into tears that they hadn't coined that phrase for their latest generation of thin condoms. 'It has a trust layer.' So much better than Durex's slogan: 'made to make you last longer.'

It was delightful to see the poker face of OpenAI CEO Sam Altman when Benioff publicly described him as "my good friend Sam." Not just his friend; that would be too little honor. Altman is his good friend.

Those likely to be less friendly with Benioff after this panel are the people at Gucci. Benioff proudly mentioned that he had just visited Gucci's help desk in Milan, where hundreds of employees use Salesforce software to process returns of broken Gucci products and call customers who want their money back. Probably not information Gucci would want the world to hear. That Salesforce trust layer apparently does not extend to Marc Benioff's mouth.

The WEF has these lackluster gentlemen in droves. Photo: author's own.

My Christmas request is: help invest in a sustainable solution

It is just before Christmas 2023, and this might be the time for a flashy annual review or an exciting look ahead to 2024. But there is something we cannot ignore that urgently needs our attention. Last week, the UN climate conference COP28 concluded with a hollow compromise declaration. The Guardian wrote this balanced summary about it.

The US position as the world's largest oil and gas producer remains unaffected. China will continue to expand coal production and India's industry need not fear either. Saudi Arabia tried to remove any reference to fossil fuels, Russia worked behind the scenes to thwart progress and will try again next year when the climate summit is held in Azerbaijan.

Even as an optimist, I cannot cheer about the agreement reached to move away from fossil fuels, because it lacks specific CO2 reduction targets. Many countries, especially large CO2 emitters, have not agreed to concrete emission reduction targets. That makes the agreement as empty as children's promises in the weeks before Christmas to be less naughty next year.

It is now up to all of us

In December 2015, my colleague Hans Tobé and I attended COP21 in Paris, where the expectation was that for the first time ever serious plans would be forged to combat climate change and, in short, save the world as we know it.

With colleague Hans Tobé on the doorstep of COP21 in Paris, December 2015

At the time, Hans and I had just started Blue City Solutions with a group of like-minded people in the US and France, which aims to support projects that promote CO2 reduction. For various reasons, one of which was the Covid pandemic, this has been more difficult than we had hoped.

Our thinking at the time was that it was important for government and business to act together. In practice, through unwillingness or incompetence, or an unfortunate combination thereof, politicians around the world are proving unable to come up with a coherent policy to combat global warming.

Meanwhile, promising technological innovations have been developed, such as CCS technology that removes CO2 from the atmosphere and dissolves it in water, and there have also been breakthroughs in ocean fertilization. Major breakthroughs are being made in the field of energy efficiency, which has convinced me that the fastest way to save this planet is through innovations from within society, with governments only facilitating, not guiding.

iXora, from The Netherlands

Everyone reading this newsletter, including through LinkedIn, Medium or Marketing Report, uses modern technology in their daily lives. Whether it is cloud services like Dropbox, Google Cloud or Microsoft OneDrive, AI applications like ChatGPT or streaming services like Netflix; modern life is made possible by services delivered from data centers, a market that is currently growing nearly 20% per year!

Those very fast-growing data centers are eating up power, especially to cool the modern, latest generation servers. Thus, together we are part of the problem. In my opinion, the solution is not to trade in our smartphones for old Nokias, but rather to take a leap forward and cool data centers in a better way.

That is what iXora does, based on 'immersion cooling': cooling with liquid instead of air, a patented technology that allows data centers to save over 30% on their energy consumption and that also generates residual heat that can be reused by homes and offices, for example. In short, iXora's technology leads to significant cost savings and a structural reduction of CO2 emissions.
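To give a feel for where a number like 'over 30%' can come from: data center efficiency is usually expressed as PUE, total facility power divided by IT power. The calculation below is purely illustrative, with PUE values I assume myself; they are not iXora's measured figures.

```python
# Illustrative only: assumed PUE values, not iXora's measured figures.
def facility_energy(it_load_kw: float, pue: float) -> float:
    """Total facility power (kW) = IT load * PUE."""
    return it_load_kw * pue

it_load = 1000.0         # 1 MW of IT equipment (assumption)
pue_air = 1.5            # typical air-cooled data center (assumption)
pue_immersion = 1.05     # figure often targeted for immersion cooling (assumption)

before = facility_energy(it_load, pue_air)        # 1500 kW
after = facility_energy(it_load, pue_immersion)   # 1050 kW
saving = (before - after) / before                # 0.30 -> 30%
print(f"Energy saving: {saving:.0%}")
```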

The Netherlands' most interesting startup

I have previously explained in detail why I am not neutral when it comes to iXora and why I think iXora is Holland's most interesting startup.

Watch the short introductory video of iXora here

In a nutshell: first of all, the data center industry is a global billion-dollar market that is forced to reduce energy consumption, and thus CO2 emissions, as soon as possible. If only because of energy costs!

Second, the unique technology that iXora employs to cool servers in the worldwide standard 19-inch enclosure is well patented, providing a competitive advantage. And third, I have come to know the founders as knowledgeable, energetic and reliable.

Those three factors together are rare to see in a Dutch startup. iXora offers an investment in accordance with the planet-people-profit principle, where technological advances enable a sustainable world in a profitable way. That approach appeals to me.

I also expect a lot from the R&D project iXora announced this week to apply its cooling technology to the equipment of NVIDIA, the undisputed leader in servers for AI applications. As a participant in the NVIDIA Inception Program, iXora will have access to NVIDIA engineers to make its technology suitable for NVIDIA's CPUs and GPUs.

And admittedly, in the context of full transparency: as a thrifty Dutchman, I also think the valuation of iXora, the price per share, is modest for a company in such a global market that already delivers its products to paying customers. If iXora were based not in the Netherlands but in Palo Alto, the company would be worth at least five times as much. It's as simple as that.

The Christmas spirit in 2023: invest in sustainability

With any innovation, what matters most is what the customer thinks of it. This is precisely why the opinion of Ludo Baauw, CEO of Intermax, is so important. As a Rotterdam native, he makes no bones about it. Watch here his clear presentation on the first installation of iXora at Intermax, in the data center of NorthC. (I hope your version of YouTube has subtitles in your preferred language.)

Because you, as a reader of this newsletter, are clearly interested in innovations that can improve our lives, I am asking you to support iXora. That is my request to you this Christmas.

Participating is possible from as little as €5,000 and all information is available here. There are people who invest in their children's names so that any profits will go to the next generation. A nice thought, but I would carefully consider how savvy your offspring is because it potentially involves serious pocket money.

Be careful anyway, of course: despite my enthusiasm, I want to emphasize that investing in startups is high-risk. Do this only with money you can spare and also assume you will lose it; but if you do start to see a return, it will probably be much more than you put in.

Spotlight 9: technology had a banner year in 2023

Speaking of investing and risk, it remains striking to see that despite the war in Ukraine, the misery in Israel and Gaza, and the uncertainty surrounding China's economy, with the U.S. presidential election looming, tech stocks achieved phenomenal returns in 2023.

NVIDIA, Meta and Bitcoin were the winners of 2023. Looking over the last five years, it was different.

In addition to looking at 2023, I also looked back at the best-performing stocks in the last five years. That leads to a different picture and different conclusions. What stands out the most in 2023 is not that NVIDIA, up 242%, was by far the best investment of the Spotlight 9, because with the explosion of the AI market, that was no surprise.

But I don't know anyone who expected Meta (Facebook, Instagram, Whatsapp) shares to rise 168% this year after the disastrous 2022. The comeback of Bitcoin and Tesla was also remarkable. Investing, especially in technology, remains a strange combination of analytical thinking and belief in magic.

Therefore, it also makes sense not to lose sight of the S&P 500: in this chart it is the slowest kid in class, but in 2023 this index rose 23% and over the last five years the increase was as much as 93%. For the prudent investor, still a return many times better than a savings account.

Looking at the last five years, Ethereum, NVIDIA and Tesla have been the top three investments with staggering increases:

  • Ethereum: 1,841%
  • NVIDIA: 1,409%
  • Tesla: 1,089%

I certainly expected Bitcoin to be on the podium, but this shows once again that when it comes to investing, I'm better off focusing on analysis than predictions. Because I still can't give a single meaningful answer to the most frequently asked question: "what will be the next Ethereum, NVIDIA and Tesla in the next five years?"
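For anyone who wants to translate those five-year totals into an annual pace: a total return compounds, so the equivalent yearly growth rate is (1 + total return)^(1/5) - 1. A quick sketch of my own, using the figures above:

```python
# Convert the five-year total returns above into compound annual growth rates.
returns_5y = {
    "Ethereum": 18.41,   # +1,841%
    "NVIDIA": 14.09,     # +1,409%
    "Tesla": 10.89,      # +1,089%
    "S&P 500": 0.93,     # +93%
}

for name, total in returns_5y.items():
    cagr = (1 + total) ** (1 / 5) - 1
    print(f"{name}: {cagr:.0%} per year")

# Roughly: Ethereum ~81%, NVIDIA ~72%, Tesla ~64%, S&P 500 ~14% per year.
```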

I want to thank everyone for their interest, tips and feedback and wish all readers and their loved ones a very Merry Christmas, a Happy New Year and all the happiness, love and health in 2024. Until next year!


Google in total panic over OpenAI, fakes AI demo

At last, Google's response to OpenAI's ChatGPT appeared this week, highlighted by a video of Gemini, the intended OpenAI killer. The response was moderately positive, until Friday, when it was revealed that Google had manipulated some crucial segments of the introductory video. The subsequent reactions were scathing.

Google makes a video, fake 1. Er, take 1. (Image created with Dall-E)

Google was showered with scorn and the first lawsuits should be imminent. A publicly traded company cannot randomly provide misinformation that could affect its stock price. Google is clearly in panic and feels attacked by OpenAI at the heart of the company: making information accessible.

Google under great pressure

It was bound to happen. CEO Sundar Pichai of Alphabet Inc., Google's parent company, went viral earlier this year with this brilliant montage of his speech at the Google I/O event, in which he uttered the word AI no less than twenty-three times in fifteen minutes. The entire event lasted two hours, during which the term AI was used over one hundred and forty times. The message was clear: Google sees AI as a fundamental technology.

Meanwhile, Google's AI service Bard continued to fall short of market leader OpenAI's ChatGPT in every way. Then when Microsoft continued to invest in OpenAI, running up the investment tab to a whopping $13 billion while OpenAI casually reported that it was on its way to annual sales of more than a billion dollars, all alarm bells went off at Google.

The two departments working on AI at Google, called DeepMind and Google Brain - there was clearly no shortage of self-confidence among the chief nerds - were forced to merge, and this combined brain power was to culminate in the ultimate answer to ChatGPT, codenamed Gemini. With no fewer than seventeen(!) videos, Google introduced this intended ChatGPT killer.

Fake Google video

Wharton professor Ethan Mollick soon expressed doubts about the quality of Gemini. Bloomberg journalist Parmy Olson also smelled something fishy and published a thorough analysis.

The challenged Gemini video

Watch this clip from Gemini's now infamous introduction video, in which Gemini seems to know which cup to lift. Moments later, Gemini seems even more intelligent, as it immediately recognizes "rock, paper, scissors" when someone makes hand gestures. Unfortunately, this turns out to be total nonsense.

This is how Gemini was trained in reality. Totally different than the video makes it appear.

Although a blog post explained how the fascinating video was put together, hardly anyone who watched the YouTube video will click through to that accompanying explanation. The blog post reveals that Gemini was told via a text prompt that it is a game, with the clue: "Hint: it's a game."

This undermines the whole "wow effect" of the video. The fascination we initially have as viewers has its roots in our hope that a computer will one day truly understand us; as humans, with our own form of communication, without a mouse or keyboard. What Gemini does may still be mind-blowing, but it does not conform to the expectation that was raised in the video.

It's like having a date arranged for you with that very famous Cindy, that American icon of the 1990s, and as you're all dressed up in your lucky sweater waiting for Cindy Crawford, it's Cyndi Lauper who slides in across from you. It's awesome and cozy and sure you take that selfie together, but it's still different.

The line between exaggeration and fraud

The BBC analyzed another moment in the video that seriously violates the truth:

"At one point, the user (the Google employee) places down a world map and asks the AI,"Based on what you see, come up with a game idea ... and use emojis." The AI responds by seemingly inventing a game called "guess the country," in which it gives clues, such as a kangaroo and koala, and responds to a correct guess by the user pointing to a country, in this case Australia.

But in reality, according to Google's blog post, Gemini did not invent this game at all. Instead, the following instructions were given to the AI: "Let's play a game. Think of a country and give me a clue. The clue must be specific enough that there is only one correct country. I will try to point to the country on a map," the instructions read.

That is not the same as claiming that the AI invented the game. Google's AI model is impressive regardless of its use of still images and text-based prompts - but those facts mean that its capabilities are very similar to those of OpenAI's GPT-4."

With typically British understatement, the BBC dismantles the PR circus that Google tried to set up. Google's intention was to deal OpenAI a huge blow, but in reality Google shot itself in the foot. Several Google employees expressed their displeasure on internal forums. That's not helpful for Google in the job market competition for AI talent.

Because in these very weeks when OpenAI appeared to be even worse run than an amateur soccer club, Google could have made the difference by offering calm, considerate and, above all, factual information through Gemini.

Trust in Google damaged

Instead, it launched a desperate attack. I'm frankly disappointed that Google faked such an intricate video, when to the simple question "give me a six-letter French word," Gemini still answers with "amour, the French word for love." That's five letters, Gemini.

The brains at Google who fed Gemini with data have apparently rarely been to France, or they could have given the correct answer: 'putain, the French word for any situation.'

Google's brand equity and market leadership are based on the trust and credibility it has built by trying to honestly provide answers to our search questions. The company whose mission is to make the world's information organized and accessible needs to be much more careful about how it tries to unlock that information.

Techcrunch sums it up succinctly, "Google's new Gemini AI model is getting a mixed reception after its big debut yesterday, but users may have less confidence in the company's technology or integrity after finding out that Gemini's most impressive demo was largely staged."

Right now, Google is still playing cute with rock-paper-scissors, but once Gemini is fully available it is expected to provide relevant answers to questions such as, I'll name a few, who can legitimately claim Gaza, Crimea or the South China Sea. After this week, who has confidence that Gemini can provide meaningful answers to these questions?

Hey Google, you're on the front page of the newspaper. True story (Image created with Dall-E).

How many billions can OpenAI snatch from Google?

The reason Google is reacting so desperately to the success of OpenAI is obviously that it feels threatened where it hurts: the crown jewels. In the third quarter of 2023, Alphabet Inc., Google's parent company, reported total revenue of seventy-seven billion dollars.

A whopping 78% of that was generated from Google's advertising business, which amounts to nearly sixty billion dollars. Note: in one quarter. Google sells close to seven hundred million dollars in advertising per day and is on track to rake in thirty million dollars - per hour.
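Those per-day and per-hour figures follow directly from the quarterly number; here is my own back-of-the-envelope check, assuming a 92-day quarter:

```python
# Back-of-the-envelope check of the ad revenue figures above.
q3_revenue = 77e9          # total Alphabet revenue, Q3 2023
ad_share = 0.78            # share generated by advertising
days_in_quarter = 92       # July through September

ad_revenue = q3_revenue * ad_share          # ~60 billion dollars
per_day = ad_revenue / days_in_quarter      # ~650 million dollars per day
per_hour = per_day / 24                     # ~27 million dollars per hour

print(f"{ad_revenue/1e9:.0f} B per quarter, "
      f"{per_day/1e6:.0f} M per day, {per_hour/1e6:.0f} M per hour")
```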

ChatGPT reached over a hundred million users within two months of its launch, and it is not inconceivable that OpenAI will halve Google's reach with ChatGPT within a few years. Everyone I know who uses ChatGPT, especially those with paid subscriptions, of which there are already millions of users, says they already rarely use Google.

Google has far more reach than it can sell, so a decrease in reach does not equate to a proportional decrease in revenue; but it is only a matter of time before ChatGPT manages to link a good form of advertising to specific search queries. I mean: there is already a company that makes millions per hour selling blue links above answers...

Falling stock market value means exodus of talent

Google could then quickly drop from being one of the world's most valuable companies, with a market capitalization of $1.7 trillion (1,700 billion), to, say, half that - and then be worth about as much as Google's hated, loathed competitor in the advertising market: Meta, the creator of what Google's brains consider simple, lowbrow social media like Facebook, Instagram and WhatsApp. Oh, the horror.

This is especially important because in this scenario the workforce, which in the tech sector never cheers up at a decline in the value of its options, is much more likely to move to companies whose value is rising rapidly. Such as OpenAI, the maker of ChatGPT.

Spotlight 9: the most hated stock market rally

'The most hated rally,' says Meltem Demirors: the rise of Bitcoin and Ethereum continues.

'The most hated rally,' is how crypto oracle Meltem Demirors aptly describes the situation in the crypto sector. ' Everyone is tired of hearing about crypto, but baby, we're back!'

After all the scandals in the crypto sector, the resignation of Binance CEO Changpeng Zhao, CZ for people who want to pretend they used to play in the sandbox with him, seems to have been the signal to push the market upward. I wrote last March about the problems at Binance in meeting the most basic forms of compliance.

According to Demirors, macroeconomic factors play a bigger role, such as expected interest rate cuts and the rising U.S. budget deficit. The possible approval of Bitcoin ETFs is already priced in, and the wait is on for institutional investors to get into crypto. Consumers already seem to be slowly returning. Crypto investors, meanwhile, seem more likely to hold Ethereum alongside Bitcoin.

Investing and giving birth

I continue to be confirmed in my conviction that professional investors understand as much about technology as men understand about childbirth: of course there are difficult studies and wonderful theoretical reflections on it, but according to experts in the field of childbirth (mothers), it makes a crucial difference whether you are standing next to a delivery puffing along, or bringing new life into this world yourself. There is a similar difference between investing in technology and developing it.

I don't think there is a person working in the tech sector who, after reading through the reactions to Google's Gemini announcement, thought, "that looks great, I need to buy some Alphabet shares soon."

But what did Reuters report, almost cheerfully: "Alphabet shares ended 5.3% higher Thursday, as Wall Street cheers the arrival of Gemini, saying the new artificial intelligence model could help close the gap in the race with Microsoft-backed OpenAI."

Ken Mahoney, CEO of Mahoney Asset Management (I detect a family relationship) said "There are different ways to grow your business, but one of the best ways is with the same customer base by giving them more solutions or more offers and that's what I believe this (Gemini) is doing for Google."

The problem with people who believe something is that they often do so without any factual basis. By the way, Bitcoin and Ethereum rose more than Alphabet (Google) last week.

Other short news

The Morin and Lessin couples are journalists, entrepreneurs and investors, making them a living reflection of the Silicon Valley tech ecosystem.

Together they make an interesting podcast that this week includes a discussion of Google's Gemini and the crypto rally.

It's great that Google founder Sergey Brin is back to programming at Google out of pure passion. The Wall Street Journal caught onto it this summer. Curious what Brin thinks of the marketing efforts of Gemini, which he himself is working on.

Elon Musk's AI company, xAI, is looking for some start-up capital, and with a billion they can at least keep going for a few months. Which immediately raises the question of why Musk accepts outside meddling and doesn't take the round himself. Perhaps he already expects to have to make a substantial contribution to x.com, the former Twitter.

Mistral, the French AI hope in difficult days for the European tech scene, didn't make a video, not even a whitepaper or blog post, but it linked in a tweet to a torrent file of their new model, attractively named MoE 8x7B. It made one humorous Twitter user sigh "wait you guys are doing it wrong, you should only publish a blog post, without a model." It will be a while before people stop taking aim like this at Google. Anyway, as far as I'm concerned, only amour for Mistral.

Details of the EU's AI Act should become clear in the coming days, but the fact that Amnesty International is already protesting over the lack of a ban on facial recognition is worrying. EU Commissioner Breton believes this puts Europe at the forefront of AI, and therefore he would likely thrive as a tech investor on Wall Street.

Spotify CFO Paul Vogel got kicked while he was already down: "Spotify CEO Daniel Ek said the decision was made because Vogel did not have the experience needed to both expand the company and meet market expectations." Vogel was not available for comment but still sold over $9 million worth of options. It remains difficult to build a stable business as an intermediary for other people's media.

Apparently, MBS is an avid gamer. After soccer and golf, Saudi Arabia is now plunging into online gaming and e-sports.

I hold out hope that AI will be used in medical technology, to more quickly detect diseases, make diagnoses or develop treatments. But right now, the smartest kids in the class seem focused on developing AI videos that mimic the dances of real people on TikTok.

Where are the female automotive designers? 'Perhaps the way forward in the automotive industry lies neither with the feminine (the unwritten page) nor the masculine (full steam ahead), but somewhere in the middle that combines the practical and the poetic, with or without a ponytail,' according to Wired.


OpenAI gives Google, Amazon and Apple a hard time and Elon Musk had a tough month

OpenAI's ChatGPT is an accelerating snowball: how long before people search ChatGPT first for answers to their questions and for the best deals on products? Image: ChatGPT4

With all the wrangling at OpenAI, you would almost forget, but ChatGPT just celebrated its first birthday this week. Over a hundred million people use an OpenAI service each month, and annualized revenue is over $1.3 billion, a first step toward possible market dominance. 

ChatGPT4 nicer than Google

As a subscriber to ChatGPT, these days I ask almost every question first to ChatGPT4, instead of searching on Google. 'The best day to fly between Europe and Asia, what shoe size is Shaquille O'Neal and under what three names was that movie starring Tom Cruise and Emily Blunt released?' Just three questions I asked ChatGPT today. But also, 'tell me about investors Vinod Khosla and Reid Hoffman,' but more about them in a moment.

Compared to Google, ChatGPT's answers seem better and I like that I don't have to click through to other websites. No doubt Google has tracked the change in search behavior through Google Chrome and the other gimmicks Google uses to capture people's behavior. This makes it all the more painful that Google, according to The Information, decided this weekend to delay the launch of OpenAI competitor Gemini until early next year.

Search and buy through ChatGPT?

One company that is also seriously threatened by OpenAI is Amazon, though this is rarely noticed. Especially in the US, Amazon has become "the Google of buying": as soon as Americans think about buying something, they search directly on Amazon. Other websites no longer play a role here.

It looks like it will be months rather than years before ChatGPT is fed sales information from the world's largest online stores. All parties that now sell through Amazon Marketplace can then directly serve their customers outside of Amazon. Of course, fulfillment then remains an issue, and in that Amazon is almost unbeatable, but the company is not worth $1.5 trillion because it is so good at shipping packages efficiently.

Amazon is so valuable because it is where buyers find their products and where transactions take place. OpenAI has a great tool with ChatGPT to take over that function, because with its Plus subscribers it already has a payment relationship that can be easily expanded. Amazon is no doubt already formulating a response to this threat.

iPhone users switch from Siri

Apple is surely watching with suspicion how many people program the new "action" button on the iPhone 15 Pro with ChatGPT. The idea was that the button would let people launch their email or camera app faster, but article after article appears urging people to get rid of Siri as if it were herpes after a ski weekend with a frat house.

A headline like "Throw Siri off your phone and use ChatGPT for help" must hurt intensely at Apple. Siri never became what Apple had hoped it would, and if many people use ChatGPT as the first search function on the iPhone, heads will roll at Apple. The question has long since ceased to be whether ChatGPT has this potential, but whether the OpenAI board will become stable enough quickly to successfully introduce this kind of product.

Hoffman and Khosla, billionaires with an opinion

Viewed in this light, it was nice to see an excellent podcast on Thursday featuring legendary entrepreneurs and investors Reid Hoffman (PayPal, LinkedIn, Greylock) and Vinod Khosla (Sun Microsystems, Khosla Ventures), both investors in OpenAI.

By the way, note the almost mocking title with which Khosla describes himself on LinkedIn: "venture assistant." That's like Lionel Messi creating a LinkedIn profile with the job title "ball boy."

Hoffman was the first contributor to OpenAI from one of his private foundations, back when it was still just a benevolent club of academics. After all, you don't really count as a billionaire until you have at least one foundation named after yourself, although I did find the one-page website of Hoffman's other foundation, the Aphorism Foundation, amusing.

Khosla put into OpenAI double what he had ever previously invested in any startup: $50 million. In short, Messrs. Hoffman and Khosla are not entirely neutral (cough).

No restriction of competition, China a risk

Hoffman focused on market forces in the conversation. " Startups are not hindered right now," he explained, despite the apparent dominance of OpenAI and mega-cap tech companies such as Microsoft. Hoffman has been on Microsoft's board since he sold LinkedIn to Microsoft and doesn't think his "own companies" have too much power. "I don't think it limits competition on any level," he said, to nobody's surprise.

Khosla called the focus on existential risks of AI "nonsensical talk from academics who have nothing better to do". But he sees China as a major risk, thinks the U.S. is in a "techno-economic war" with China and believes it should take a tougher stance. "I would ban TikTok in a nanosecond," said Khosla, in contrast to Hoffman, who had spoken with President Biden just before recording the podcast. After all, if anyone knows the value of a good network, it is LinkedIn's founder.

Khosla is firmly against open-source AI models as well due to the China risk. Bio-risk and cyber risk are real concerns too, he noted. But if China or rogue viruses don’t kill us, Khosla thinks the near-future is very bright: “I do think in 10 years we’ll have free doctors, free tutors, free lawyers” all powered by AI.

Elon Musk had a tough month

At the last minute, Tesla published this amusing video in which the new Tesla Cybertruck makes mincemeat of a Porsche 911 in a sprint. Pay special attention to the kicker: the Cybertruck has something hanging from the tow hook, and it's not a caravan. Marques Brownlee made a whopping 40-minute review video.

Tesla Cybertruck towing a Porsche 911 is faster than ... a Porsche 911.

Not everyone is a fan of the Cybertruck. Engadget, for example, writes: "Tesla's Cybertruck is a dystopian, masturbatory fantasy. In Elon's future, the rich should be allowed to dominate (and probably run over) the poor with impunity."

Cute that a Cybertruck gets to 60 miles an hour (100 km/h) in 2.6 seconds, but I am particularly curious to see how this paintless, silver doghouse weighing over three thousand kilos (over 6,000 lbs) behaves on a mountain pass full of hairpins. Or how you park it in reverse in the parking lot of your local supermarket.

Musk chases advertisers off

Reuters puts it beautifully: "Elon Musk is keen to achieve what no business leader has done before, from mass-producing electric cars to developing reusable space rockets. Now he is blazing another trail most chief executives have avoided: the profane insult." Not only that: the gross insulting of customers.

Musk said it twice to advertisers who left his social media platform X after he complimented a post with anti-Semitic content: "go fuck yourself."

Musk felt it necessary to single out one such departing advertiser, Disney CEO Bob Iger, unprovoked. Consider the weeks Musk has had: on November 18, SpaceX sent the Starship into space, where it blew up, intentionally or not.

The same weekend saw the leadership fiasco at OpenAI, with the fired Sam Altman and the other key players communicating with the outside world almost exclusively through Musk's platform X. That was vindication for Musk, who earlier this year saw Mark Zuckerberg's Threads becoming the fastest-growing social media platform - dying out as quickly as it emerged, but that's for another time.

Due to Musk's unprecedentedly stupid action (his own words) of complimenting anti-Semitic sewer texts, many advertisers withdrew from X, and the long-term consequences remain to be seen. So Musk headed to Israel, which has been an unpopular midweek destination lately.

The irony of fuck and freedom of speech

Watch the whole item on Fox. Musk looks like Captain Jack Sparrow after a rough night. Probably just back from Israel and suffering from jet lag, he looks even whiter than the average Fox viewer, and despite trying to appear masculine in his leather jacket with a grubby teddy collar over a t-shirt washed too hot, he makes a vulnerable and frustrated impression. Here sits someone trying very hard to look like someone who is not trying very hard.

The whole segment on Fox is an adulation of Musk as a defender of freedom of speech, which was supposedly sorely missed when X was still called Twitter. The nice thing is that all five panelists, with that ballroom dancing teacher in the middle surrounded by four born-again nymphs, don't seem to realize how hilarious it is that they spend minutes talking about freedom of speech while the word "fuck" used by Musk is bleeped out twice by Fox.

So no viewer knows what Musk actually said. You can guess, but you don't know, because you can't hear it. No one repeats it or comes back to it. I love that discomfort. It's a moment like when a notorious meat eater finds out that the barbecue he is generously helping himself to for a second plate is made of vegan meat.

Musk, willingly or unwillingly, with all his absurd attempts to regulate X on the one hand and then open it up on the other, demonstrates the total insanity of American morality that he himself, as a South African, struggles with. Anti-Semitism? Bad for business. Saying fuck on TV? Not allowed, yet you can still be worshipped as a champion of the freedom of speech Fox supposedly holds so dear.

Honest breakfast TV

It reminded me of the time I gave an interview about virtual reality during Web Forum in Dublin for a BBC breakfast program. I made the unforgivable mistake of saying that in the future VR would have all sorts of wonderful applications, from news to film, music and sex. Hey ho ho no jeez Louise, stop, panicked the BBC crew: I had said sex.

That wasn't allowed in a breakfast program. Because it's early in the day, with blushing kids just eating their oatmeal; well, surely I understood. "Am I allowed to say machine gun or weapon of mass destruction?" I asked. That was certainly allowed. 'Bloody mass murder?' No problem at all. 'How graphically may I describe the Catholic Church's misconduct with young boys?' Um, there were no specific rules for that, so in the end I had a great morning. That the item ever aired shows the editing skill of BBC editors.

Techbros need help

Musk is not the only tech CEO struggling with freedom of speech and regulation of his social media network. I recently wrote about my own struggle with freedom of speech when subscribers to my first company Planet Internet were found to be distributing child pornography. Deciding not to distribute certain messages was easy, but determining where that boundary lay was difficult, not to mention technically complex.

Mark Zuckerberg is in deep trouble now that the Meta platforms (Facebook and Instagram in particular) seem to have become popular platforms among pedophiles. According to the Wall Street Journal, Meta has spent months trying to fix child safety issues on Instagram and Facebook, but is struggling to prevent its own systems from enabling and even promoting a vast network of pedophile accounts.

The Meta algorithms unrestrainedly promote the content the user clicks on, with dire consequences. The U.S. Congress is becoming more alert, and the European Union is also now rightly targeting Meta.

I firmly believe that the limited social gifts of people like Musk and Zuckerberg have led them to think differently, more autonomously, than we simple souls do, and therefore to achieve more; at the same time, they have limited empathy and genuinely don't understand why the world has trouble with their policies. What makes them great as entrepreneurs keeps them small as human beings.

Special links

  • What did an iPhone do to the arms of this distraught bride?

Bride-to-be stands in front of the mirror, takes a picture and suddenly she looks bewitched. Pay close attention to her arms.

Tessa Coates tells in her Instagram Story what happened.
  • High costs and fierce competition lead to battleground among streaming services

In Europe, Viaplay is struggling; in America, Apple and Paramount are discussing a partnership.

  • DNA data should never be stored centrally

The day you knew was coming: 23andMe was hacked and highly confidential data of thousands of users was captured.

The UN climate conference COP28 has begun in Dubai, led by Abu Dhabi's oil boss. Before we get cynical, here is the good news that hard work is being done on sustainable aviation. Applause for Virgin Atlantic.

Spotlight 9: crypto week!

Bitcoin toward $40,000 and Ethereum over $2,000, party time in the crypto world.

It was a dull week for investors, unless you dare to get into crypto, because Bitcoin and Ethereum seem to be definitively back!


Five conclusions after the chaos at OpenAI

Sam Altman is back at OpenAI and more powerful than before, but is that a good thing?

A few days after the palace drama at OpenAI, let's try to survey the ruins of this company, whose mastermind, Chief Scientist Ilya Sutskever, has said that AI could herald the downfall of the world. Far too often it is forgotten that Sutskever and his colleague Jan Leike, also no slouch, published this text on OpenAI's official blog in July:

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction."

Ilya Sutskever and Jan Leike, OpenAI

And oh yes, they have over $10 billion in the bank to figure out whether they are going to save the world or end it. Yet there are few serious attempts to monitor, let alone regulate, OpenAI and its competitors.

Imagine Boeing developing a new plane with a similar PR text: 'this super-fast plane will fly on dirt-cheap organic pea soup and could help humanity make aviation accessible to all, but it could also backfire, crash and blow up the earth.' The chances of getting a license wouldn't be very good. With AI, things are different; the tech bros just put the website live and see how it goes.

Move along, the turkey was great

What is the media reporting on now at OpenAI? About the Thanksgiving dinner where reinstated CEO Sam Altman sat down with Adam D'Angelo, one of the board members who had fired him just six days earlier. Both tweeted afterwards that they had a great time together. They are so cute.

Despite the media's tendency to quickly pick a hero and a villain when conflicts arise, the much-lauded Sam Altman is increasingly seen as someone who displays odd behavior from time to time. According to the Washington Post, Altman's dismissal had little to do with a disagreement over the safety of AI, as first reported, but mostly with his tendency to tell only part of the truth while trying to line his pockets left and right.

Even if they were Granny Smith green

Meanwhile, Altman is back with a new board about which there are many doubts. Christopher Manning, the director of Stanford's AI Lab, noted that none of the board members have knowledge of AI: "The newly formed OpenAI board is presumably still incomplete,” he told TechCrunch. “Nevertheless, the current board membership, lacking anyone with deep knowledge about responsible use of AI in human society and comprising only white males, is not a promising start for such an important and influential AI company."

I don't care what color and what gender they are, even if they are Granny Smith green with three types of reproductive organs, but I do prefer that they understand the matter that their own experts say has the potential to push humanity over the cliff.

Five conclusions after the chaos

1. The AI war has been won by America.

Look back at a week of craziness and fuss at OpenAI and we see that Microsoft, the old board and the new board are all American. The competitors? Amazon, Google, Meta, Anthropic, you name them: Americans. The rest of the world watches and holds meetings and speeches, but it's a done deal.

2. Good governance is nice, but bad governance is disastrous.

By this I do not mean that the people who fired Altman were right or wrong, because no one knows that yet; the crux of their argument was that Altman had not given full disclosure, and if that is true, it remains a mortal sin.

But the root of the problem was deeper. The OpenAI board had been appointed to safeguard the mission of the OpenAI Foundation, which, in a nutshell, was to develop AI to create a better world. Not to create maximum shareholder value, as has now become the goal. The problem arose because of those conflicting goals.

3. Twitter, or X as it is now called, remains the only relevant social network in a crisis.

Elon Musk went on a rampage again last week, which seems to have cost him $75 million in revenue, but Altman and everyone else involved still chose X as the platform to tell their story. Not Threads or TikTok – although I would have liked to have seen this mud fight for power portrayed in dance.

4. Microsoft wins.

Under Bill Gates, I already thought Microsoft was a funny name, because the company was neither micro nor soft then either, but in the nearly 10 years under CEO Satya Nadella, Microsoft has become a dominant force in all kinds of markets.

While Amazon, Google, Meta and also Apple are struggling to develop a coherent AI strategy, Microsoft seems to have found a winning formula: it is investing heavily in OpenAI, which uses the Microsoft Azure cloud, returning much of the investment back to Microsoft. Meanwhile, Microsoft does enjoy the capital appreciation via its 49% stake in OpenAI.

5. AI should be tested and probably regulated

Precisely because companies like Microsoft, Google, Meta and Amazon also dominate in the field of AI, the development of AI must be carefully monitored by governments. The years of privacy violations, disinformation and abuse of power taking place through social media, for example, show that these companies cannot regulate themselves.

The tech bros' motto remains unchanged: move fast & break things. But let them do that nicely on their own planet, not the current one. The potential impact of AI on the world is simply too great to let the mostly socially limited minds running tech companies make the choices for society.

An initiative like the AI-Verify Foundation can be a vehicle for achieving responsible adoption of AI applications. I close with the same quote as last week from OpenAI's Chief Scientist, Ilya Sutskever, which shows that the world's AI leaders almost seem to hope that future AI systems will have compassion for humanity:

"The upshot is, eventually, AI systems will become very, very, very capable and powerful. We will not be able to understand them. They’ll be much smarter than us. By that time, it is absolutely critical that the imprinting is very strong, so they feel toward us the way we feel toward our babies."


Is Sam Altman returning as CEO at OpenAI?

Casual Friday at OpenAI. Image created with Midjourney.

Play it again, Sam?

That went quickly. The Verge just reported, "The OpenAI board is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes."

Update: "According to The Verge, a source close to Altman indicated that the board had agreed in principle to resign to allow Altman and Brockman to return, but later wavered and missed an important deadline of 5 p.m. California time. This was a turning point for many OpenAI employees who were considering resigning. If Altman decides to leave and start a new company, these employees will almost certainly follow him."

I suspect Altman will wait a while longer until the board has been cleared out, because he really has no desire to start all over again and give up the lead OpenAI currently has. Especially since there are billions in cash and no investor control. Meanwhile, Mira Murati is already posting hearts under the tweets of the man she succeeded on Friday.

Look how sweet.

Panic among the remaining board members

Panic among the remaining board members is evident now that president Greg Brockman resigned on Friday over Altman's dismissal, followed yesterday by the departure of three reputable OpenAI researchers, while renowned investors in the company such as Ron Conway and ex-Google CEO Eric Schmidt voiced their support for Altman.

Brockman's tweet, which he had obviously drafted with Altman because in it he speaks about both Altman and himself in the third person, must have been especially painful for all of OpenAI's employees, investors and directors. It puts the expected funding round in jeopardy, and with no new money and departing staff, there would soon be little technology company left to build, even for OpenAI. The request for Altman to return, within 48 hours of his ousting, must be seen in that light.

A brief match report: OpenAI versus Sam Altman

An almost inexhaustible stream of reports, theories and especially rumors has been unleashed since Friday about the resignation of Sam Altman as CEO of OpenAI, known as the creator of ChatGPT, the stunning service that introduced AI to a global audience. I will try to summarize events as briefly as possible and then share my initial thoughts.

Last week was another top week for ChatGPT. OpenAI's DevDay for developers, with the new 'Do It Yourself ChatGPT' product, turned out to be a huge success worldwide. The story is that the new funding round will take place at a valuation of around $90 billion: triple that of earlier this year, when Microsoft invested a quick $10 billion and, after an earlier 2019 investment of a billion, acquired a 49% stake in OpenAI at a valuation of $30 billion.

And then suddenly, on Friday afternoon California time, this message appeared on OpenAI's site, announcing Sam Altman's immediate dismissal without any pleasantries. It was soon leaked that Microsoft knew nothing about it and had been notified only minutes before the news of his ungraceful exit was announced.

Microsoft CEO furious

Microsoft quickly came out with a bizarre statement of a few curt sentences expressing support for OpenAI; it could do nothing else after already investing a total of over $11 billion in the company whose applications also run entirely in Microsoft's cloud environment. The absurdity was that Microsoft did not mention Altman's name a single time; it's like writing an article about a family event that takes place in late December when children receive presents from an old man with a bunch of reindeer, but avoiding the name Santa Claus.

New interim CEO Mira Murati was quoted by Microsoft only as "Mira". Because oh well, the founder and CEO was fired and then we don't even mention his name anymore, but while you're here we are so West Coast cool that we only do first names, you know because that's how we roll, bro. This Mira did do something strange herself: she deleted her LinkedIn profile, to which I had linked back in July when I first wrote about her.

Before I lapse into a chronological summary of this absurd Friday as if it were a Tour de France stage, I refer fans of this power struggle to two excellent articles:

  • first of all, there is this piece from Ars Technica, which is convinced that an almost religious battle is going on within OpenAI between, in one corner, the fundamentalists, led by Chief Scientist Ilya Sutskever, who advocate a carefully developed form of AGI (Artificial General Intelligence, "human-like" intelligence); and, in the other corner, the commercial rascals, led by Sam Altman, who want to dethrone Google with continuously updated versions of ChatGPT, following the time-honored techbro principle of 'move fast and break things'.
  • Techcrunch made a nice timeline that would not have been out of place on the day Caesar met his end on the steps of the Roman Senate. Meanwhile, Techcrunch also reports that Microsoft CEO Satya Nadella would welcome Altman's return.

OpenAI's organizational structure is absolutely unworkable

As far as I am concerned, the key question is not whether Altman is coming back or who is right in the religious struggle within OpenAI, because the reporting still relies too much on rumors. The question that must be asked is: how can a company apparently worth close to $100 billion, developing such a fundamentally important product for our society worldwide, be run in such a ridiculously amateurish way? The answer lies in its organizational structure.

OpenAI has an unusual structure in which its commercial arm is owned and operated by a nonprofit charitable organization. Until Friday, that nonprofit was controlled by a board of directors that included CEO Sam Altman, President Greg Brockman, Chief Scientist Ilya Sutskever and three others who are not OpenAI employees: Adam D'Angelo, the CEO of Quora; Tasha McCauley, an adjunct senior management scientist at RAND Corporation; and Helen Toner, director of strategy and basic research grants at Georgetown's Center for Security and Emerging Technology. Currently, only Sutskever, D'Angelo, McCauley and Toner remain.

Three well-intentioned outsiders, along with one remaining staff member, control the company in which Microsoft has invested over $11 billion for 49% of the shares, without Microsoft having any say. Source: OpenAI's website.

Like CEO Altman and President Brockman, Sutskever, D'Angelo, McCauley and Toner own no shares in OpenAI. Investors find that unpleasant, because it means those people almost always earn less at OpenAI than they would in a job that does come with equity, which makes the team vulnerable to attractive offers elsewhere. But those investors, including such absolute legends as Vinod Khosla (Sun Microsystems, Juniper), Reid Hoffman (founder of LinkedIn) and Eric Schmidt (ex-CEO of Google), have as much say at OpenAI as Santa Claus' reindeer.

No doubt they only agreed to this lack of control because OpenAI was so clearly winning the battle in the AI market that they were willing to accept this deal.

In, out, what will it be for Altman?

Et tu, Ilya?

A complicating factor is the American form of governance with a Board of Directors that consists of a combination of executives who work full time at the company, and a number of external directors.  

So at OpenAI on Friday morning there were six directors, three from inside OpenAI and three external, and since Altman was fired without Brockman's knowledge, it was immediately clear that Chief Scientist Ilya Sutskever had either abstained, as cowardly countries tend to do in the United Nations, or had voted for the dismissal of his own colleague and CEO Sam Altman. It will be a fun moment if and when Altman returns and they run into each other at the coffee machine. But who is Ilya Sutskever, anyway?

Ilya Sutskever apparently believes baby blue brings out his eyes. Image: LinkedIn.

Ilya Sutskever is an AI fundamentalist and that's a good thing

The name Ilya Sutskever and his background (Russian-Israeli-Canadian) suggest a double life as a villain in an old James Bond movie, complete with a creepy cat on his lap. I love his old-school personal homepage. I don't know him personally, but what I read from and about Sutskever is many times more interesting than anything I've heard coming out of Sam Altman's mouth so far. For example, read this excellent recent piece by Nirit Weiss-Blatt, who spoke with Sutskever at an event this summer. A few quotes:

When asked about specific professions – book writers/ doctors/ judges/ developers/ therapists – and whether they are extinct in one year, five years, a decade, or never, Ilya Sutskever answered (after the developers’ example):

“It will take, I think, quite some time for this job to really, like, disappear. But the other thing to note is that as the AI progresses, each one of these jobs will change. They'll be changing those jobs until the day will come when, indeed, they will all disappear.
My guess would be that for jobs like this to actually vanish, to be fully automated, I think it's all going to be roughly at the same time technologically. And yeah, like, think about how monumental that is in terms of impact. Dramatic."

Weiss-Blatt concluded:

"He freaked the hell out of people there. And we're talking about AI professionals who work in the biggest AI labs in the Bay Area. They were leaving the room, saying, 'Holy shit.'
The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the 'AGI utopia vs. potential apocalypse' ideology, to the next level. It was traumatizing."

This idea of AI causing a total economic apocalypse, with the disappearance of all jobs based on information analysis and decision-making, is not new in itself; it has more often been proclaimed by science fiction writers, techno-utopians and people holding a can of beer in the park in the morning.

Only Ilya Sutskever is not a crackpot, alcoholic or villain from a James Bond movie; he is the Chief Scientist of OpenAI. And you don't get that job through a phone call from your dad to a friend. After his time at the University of Toronto and work at Google, he started at OpenAI back in 2016, and in everything, Sutskever seems thoughtful and responsible. Call him the anti-Zuckerberg.

The world has no use for a trillion-dollar OpenAI

Despite all possible efforts to limit OpenAI's profits and the resulting warped organizational structure, I do believe in the mission Sutskever sees for OpenAI. More than I believe in the muddled vision of lobbyist Altman, who travels the world meeting politicians and talking about accountability and regulation, while in actual fact doing everything he can to dethrone Google.

But how will this benefit the world? What do we gain from yet another American company with enormous power and influence over the way we deal with knowledge, information and communication and which may eventually take over our jobs? Have we learned nothing from Facebook and Cambridge Analytica?

AI must love people like we love babies

According to Sutskever, AI must learn to love people. He uses the term "imprinting" for the process by which AI systems are taught things, specifically the phase in which the system must learn to recognize and conform to certain values, goals or behaviors.

AI systems such as ChatGPT, according to Sutskever, must learn to behave in ways that are beneficial or non-harmful to humans, even as the system becomes more intelligent and autonomous. It is a strategy proposed to mitigate risks associated with advanced AI by establishing a positive, protective relationship with humans from the start.

Sutskever: "The bottom line is that eventually AI systems will become very, very, very capable and powerful. We will not be able to understand them. They will be much smarter than we are. By that time, it is absolutely critical that the imprinting be very strong, so that they feel toward us as we feel toward our babies."

Keep that in mind when Sutskever is portrayed in the media as the evil genius who secretly worked that lovable treasure Sam Altman out of their company.

And now it's time for the Formula One race in Las Vegas. An absurdist spectacle in the desert, with $15,000 tickets to the Paddock Club described in a way no AI system could have dreamed up: "Come and enjoy a recovery brunch, with aerial champagne pours and silent meditation." My favorite Formula One analyst is a manicured Englishman who puts too much sugar in his tea before recording his videos. See you next week!

Categories
technology

Sam Altman gone as CEO of OpenAI, hello Mira Murati

Extremely intelligent and also a winner of the genetic lottery: the new CEO of OpenAI, Mira Murati. Life is unfair. Photo source: OpenAI.

Sam Altman is gone as CEO of OpenAI, the company he co-founded. There will be much speculation about the reasons, because for the CEO of the most successful company of the last decade to be pushed out in the midst of an investment round that values the company at nearly $100 billion, those reasons must have been extreme.

I will elaborate on Altman's departure in my Sunday newsletter, but it is more interesting now to look at the woman who will replace him, albeit probably temporarily. I wrote a few months ago about this Mira Murati, a 35-year-old Albanian woman. Who is this mysterious woman, who has disappeared from LinkedIn, leaving anyone who searches for her to land on the profile of a Swedish receptionist of the same name?

More on Sunday, but here's the link to what I just wrote on LinkedIn about this very special woman.

Categories
AI crypto NFTs technology

Build your own ChatGPT, an ex-Apple couple builds an AI pin and Ethereum breaks through $2,000

Last week was busy and filled with travel days, so I was unable to follow the news closely. Instead, I saved interesting links and perused them yesterday. It is amazing to see all that is happening in technology in one week, especially within AI and crypto.

I have tried to briefly summarize and comment on the most noteworthy developments. I hope it has not become too much of a shopping list of links:

OpenAI launches DIY GPT

OpenAI allows developers and ordinary people to share custom chatbots with the public through a "GPT Store," a proprietary app store where verified developers can upload their chatbots and make them available for users to download. In the coming months, developers will also be able to earn money based on how many people use their chatbot.

VentureBeat published a sort of match report from OpenAI's Developer Day, but the five examples of what is already being built with custom GPTs and the instructions on how to use GPT Builder are more relevant.
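
Custom GPTs themselves are assembled in OpenAI's no-code GPT Builder, so there is no code to show for that flow. As a rough programmatic analogue, here is a minimal sketch using the Assistants API announced at the same DevDay; the bot's name, its instructions and the exact model string are my own illustrative assumptions, not OpenAI's examples.

```python
# Minimal sketch (not the GPT Builder itself): configuring a purpose-built
# chatbot via the Assistants API from the same DevDay. The name, instructions
# and model string are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

assistant = client.beta.assistants.create(
    name="Recipe Helper",  # hypothetical example bot
    instructions=(
        "Suggest weeknight recipes using only the ingredients "
        "the user says they have at home."
    ),
    model="gpt-4-1106-preview",            # DevDay-era model name; may change
    tools=[{"type": "code_interpreter"}],  # optional built-in tool
)

print(assistant.id)  # the assistant can now be attached to threads and runs
```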

AI is an arms race and the generals are getting rich

OpenAI is reportedly offering AI researchers pay packages worth up to $10 million and is holding an employee stock sale that would nearly triple the startup's valuation to more than $80 billion. The company's recruiters are trying to lure top artificial intelligence professionals away from Google with millions of dollars and a simple message: join OpenAI now to lock in a stock package at the current valuation of $27 billion and benefit from the impending increase.

This is a brave new world, because until now companies like Google and Apple were able to snatch talent away from startups by making them an offer they couldn't refuse: a combination of a guaranteed top salary (think four years guaranteed at $3 million a year) plus an equity package of at least equivalent value. OpenAI now benefits from the fact that its valuation is rising much faster than the market caps of Apple and Google.

By allowing new private investors to buy a portion of employees' stock, those employees can cash in on the increase in value much sooner than in the traditional model, where they have to wait for an IPO and the subsequent lock-up period. It is good to keep in mind that when Facebook went public in 2012, its market cap was similar to OpenAI's today. Except that OpenAI is not expected to go public anytime soon.
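
To make the incentive concrete, here is a back-of-the-envelope sketch. Only the jump from a roughly $27 billion valuation to "more than $80 billion" comes from the reporting above; the size of the hypothetical equity grant is my own illustrative assumption.

```python
# Back-of-the-envelope: why a secondary stock sale matters to an OpenAI employee.
# Only the $27B -> >$80B valuation jump comes from the reporting above;
# the $2M grant size is a purely hypothetical example.
old_valuation = 27e9          # valuation at which the equity package was priced
new_valuation = 80e9          # "more than $80 billion" in the tender offer
grant_value_at_old = 2e6      # hypothetical package worth $2M when granted

appreciation = new_valuation / old_valuation          # ~3x, "nearly triple"
paper_value_now = grant_value_at_old * appreciation   # ~$5.9M on paper

print(f"Package grew ~{appreciation:.1f}x to ~${paper_value_now / 1e6:.1f}M")
# In the traditional model the employee waits for an IPO plus a lock-up period;
# a tender offer to private investors lets part of this gain be realized now.
```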

It won't keep raining billions in the AI sector for long

OpenAI's stratospheric valuation will be of great concern to investors in independent competitors such as Anthropic (maker of Claude) and Inflection.ai (maker of chatbot Pi). The market for applications like OpenAI's ChatGPT is very similar to the search engine market, in which Google has over 80% market share and the number two, Bing, less than 10%.

That makes it very risky for investors to invest in Anthropic and Inflection at valuations above roughly $5 billion, because the numbers two and three always get a lower valuation per customer or per dollar of revenue than the market leader. A thinning of the field of AI developers within a year therefore seems logical.

A camera, no screen: the 'pin' of Humane. Source: Humane website.

The quarter-billion-dollar AI pin

That said, this week's big news is undeniably the first product launch by Humane, the company of former Apple employees Bethany Bongiorno and Imran Chaudhri, described by The Wall Street Journal as "spouses and co-founders", who have already raised nearly a quarter of a billion (!) dollars in investment money. That product is the Ai Pin, or artificial intelligence pin, which you are supposed to wear on your clothes.

The pin weighs 55 grams (about two ounces), roughly the weight of a tennis ball. It is controlled by your voice to make phone calls and look up data (via OpenAI, of course, run by Sam Altman, who is also one of the investors in Humane), and it stands out mainly because it includes a camera to take pictures, but no screen to read anything from.

Ars Technica doesn't like it: "The Humane AI Pin is a bizarre cross between Google Glass and a pager. The Humane AI Pin has no screen, no apps, and a creepy in-your-face camera." The laser projection, which allows you to project information onto your hand, is appreciated, though seemingly more for its James Bond vibe. The lack of an app store for third-party apps is rightly seen as a major omission.
Ars Technica continues: "It’s also too early to tell whether Humane’s hope that the Pin can help people to live more in the moment will prove true, or whether it will simply provide a new way to be unhealthily obsessed with technology."

The entire presentation video is interesting to watch, but perhaps not for the reasons the founders hope. First of all, I don't understand why you would buy a $699 device that can do little more than a smartphone, which everyone already carries everywhere and which is not going to be replaced by an AI pin. To livestream from your chest, perhaps? I don't see that market becoming huge anytime soon.

Battery = perpetual power system?

Besides, I always get a serious itch from devices that come with slogans and marketing-speak that make no sense. The pin's replaceable battery is called, in Humane terms, a "perpetual power system", and the orange light that indicates the camera is on is a "trust light".

What I can greatly appreciate, however, is the straight face with which Imran Chaudhri presents his devices. He and Ms. Bongiorno do not wish anyone a chirpy good morning; they look like they are delivering a eulogy at the funeral of a beloved relative. This is so much nicer than those pumped-up marketing figures who coo "we are so excited" when announcing a new printer driver.

I also like that Mr. Chaudhri is humble enough to function as a "second-in-command" under Ms. Bongiorno, the CEO.

Spotlight 9: BlackRock believes in Ethereum

This is what happens when BlackRock, the world's largest asset manager, plunges into Ethereum.

I have often written enthusiastically about Ethereum, the most popular development platform for blockchain applications, incidentally also adorned with a wonderful slogan: "Ethereum, the world's computer." But because it is not entirely clear what the total number of ETH in circulation will be, one can have doubts about Ethereum as an investment. Function and value are often not connected: consider the value of tap water (and, in developed nations, potable tap water) to our lives and the low price we pay for it.

The unsurpassed Meltem Demirors explained on CNBC why Bitcoin's price continued to rise and ETH jumped over 10% this week. The news that BlackRock plans to introduce an ETF (Exchange Traded Fund) for Ethereum in addition to the one it has filed for Bitcoin is a huge catalyst for the end of the crypto winter.

A BlackRock ETF for Bitcoin and Ethereum, subject to SEC approval of course, offers investors a more accessible and potentially safer way to invest in cryptocurrencies without the technical complexities of buying, storing and managing cryptocurrencies directly. Purchases are made like a normal stock on a conventional exchange, with the underlying management and security of the digital currencies provided by BlackRock.

It's easy to forget how big BlackRock is, because the nearly $10 trillion under management is an incomprehensibly large number. But $10 trillion is ten thousand times a billion(!). Once BlackRock can offer Bitcoin and Ethereum to its clients, even if only 1% of that goes into crypto, it would immediately mean almost 10% additional capital in the crypto market.
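
A quick sanity check of that claim: BlackRock's roughly $10 trillion under management is from the text above, while the total crypto market cap is my own rough assumption for late 2023, used only to show the order of magnitude.

```python
# Sanity check of the 1%-of-BlackRock claim. The ~$10T AUM figure is from the
# text; the total crypto market cap is an assumption (~$1.4T in late 2023).
blackrock_aum = 10e12        # ~$10 trillion under management
allocation = 0.01            # suppose just 1% of that flows into crypto
crypto_market_cap = 1.4e12   # assumed total crypto market cap

inflow = blackrock_aum * allocation           # $100 billion
share_of_market = inflow / crypto_market_cap  # ~7%

print(f"1% of BlackRock's AUM is ${inflow / 1e9:.0f}B, "
      f"roughly {share_of_market:.0%} of the whole crypto market")
# -> on the order of $100B, somewhere between 7% and 10% of the market,
#    which is the ballpark behind "almost 10% additional capital".
```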

Other short news

Whatever happened to NFTs?

The BBC almost gloats over the collapse of the NFT market and does report that Bitcoin is down about 50% compared to its peak, without mentioning that Bitcoin is up a whopping 880% compared to five years ago. Ethereum's 1762% rise in the last five years is not mentioned at all. Mediocre journalism.

Investor Ben Evans is not a fan of Elon Musk

Ben Evans, in his excellent newsletter on the demise of Twitter under the reign of Elon Musk, writes this wonderful sentence: "It turns out that social networks are harder than rocket science."

Chinese startup quickly stockpiled Nvidia chips

Just before the US export ban took effect, the Chinese company 01.AI quickly purchased enough Nvidia chips to last a year and a half. CEO Kai-Fu Lee laments the trade war: "We will have two parallel universes. Americans will supply their products and technologies to the U.S. and other countries, and Chinese companies will build for China and whoever uses Chinese products. The reality is that they will not compete very much in the same market."

Google about to invest in AI startup Character.ai

Google is in talks to invest hundreds of millions of dollars in Character.AI, as the fast-growing AI chatbot startup seeks capital to train models and keep up with user demand, according to Reuters. I doubt that user demand, because I don't see many people eager to engage in a conversation with a fake psychologist or a banana chatbot.

WeWork bankrupt

I never understood why a landlord of overly trendy, expensive office space would be worth $50 billion. Apparently most people agreed.

Skiing gets more dangerous, but technology helps

Climate change increases the risk of avalanches, but smart technology like patrolling drones helps keep the slopes safe.