
AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI!

The Diary Of A CEO
0:00

So much of what's happening today in the AI industry is extremely inhumane.

0:04

But this is me playing devil's advocate. And logically, it could be the case that the civilization that accelerates its research with AI is going to be the superior civilization.

0:13

No, it's not. This is a prediction that you're making, right?

0:15

Elon's making, Zuckerberg's making, Altman's making.

0:18

And do you know what the common feature of all of them is? They profit enormously off of this myth. You know, I have all of these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.

0:31

So what do we do about it?

0:32

We need to break up the empires of AI. You know, I've been covering the tech industry for over eight years, interviewed over 250 people, including former or current OpenAI employees and executives. And I can tell you that there are many parallels between the empires of AI and the empires of old, right? Like they lay claim to the intellectual property of artists, writers, and creators

0:50

in the pursuit of training these models. Second, they exploit an extraordinary amount of labor, which breaks the career ladder, because someone gets laid off and then they work to train the models on the very job that they were just laid off from,

1:02

which will then perpetuate more layoffs if that model then develops that skill. And when they talk about how there are gonna be some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there.

1:14

And then there's the environmental and public health crisis that these companies have created, and how they're able to also spend hundreds of millions to try and kill every possible piece of legislation that gets in their way and will censor researchers that are inconvenient to the Empire's agenda. But what I'm saying is not that these technologies don't have utility, it's that the production of these

1:34

technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences. So let's talk about all of that.

1:53

This is super interesting to me. My team gave me this report to show me how many of you that watch this show subscribe. And some of you have told us, according to this, that you've been unsubscribed from the channel randomly. So favor to ask all of you,

2:03

please could you check right now if you've hit the subscribe button, if you are a regular viewer of the show and you like what we do here. We're approaching quite a significant landmark on this show in terms of a subscriber number.

2:12

So if there was one simple free thing that you could do to help us, my team, everyone here, to keep this show free, year over year and week over week, it is just to hit that subscribe button and to double check if you've hit it. Only thing I'll ever ask of you, do we have a deal? If you do it, I'll tell you what I'll do. I'll make sure every single week, every single month, we fight harder and harder and harder and harder

2:33

to bring you the guests and conversations that you wanna hear. I've stayed true to that promise. Please help us, really appreciate it. Let's get on with the show. Karen Hao. You've written this book in front of me here called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.

2:55

I guess my first question is, what is the research and the journey you went on in order to write this book we're gonna talk about and the subjects within it today?

3:03

I took a strange route into journalism. I studied mechanical engineering at MIT. And so when I graduated, I moved to San Francisco, I joined a tech startup, I became part of Silicon Valley. And I basically received an education in what Silicon Valley is about because a few months into joining a very mission driven startup that was focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO

3:28

because the company was not profitable. And this was, in hindsight, a very pivotal moment for me because I thought if this hub is ultimately geared towards building profitable technologies and many of the problems in the world that I think need

3:45

solved are not profitable problems like climate change, then what are we actually doing here? Like what, how did we get to a point where innovation is not actually necessarily working in the public benefit and sometimes even undermining the public benefit in pursuit of profit? In that moment, I had a bit of a crisis where I thought, well, I just spent four years trying to set myself up for this career that I now don't think I am cut out for.

4:15

And I thought, well, I might as well just try something totally different. I've always liked writing. And that's how, after two years, I landed at a role at MIT Technology Review covering AI full-time.

4:28

And that gave me a space to then explore all of these questions of who gets to decide what technologies we build, how does money and ideology also drive the production of those technologies, and how do we ultimately make sure

4:42

that we actually reimagine the innovation ecosystem to work for a broad base of people all around the world? And so that is kind of how I then set off on this journey of ultimately writing a book. I didn't realize that I was working towards writing a book, but starting in 2018 when I took that job was essentially the moment in which I began researching the story that I document in it.

5:08

A very timely time to start working in artificial intelligence. For anyone that doesn't know, this is before the OpenAI ChatGPT launch moment that shook the world. But in writing this book,

5:20

you interviewed a lot of people and went to a lot of places. Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, et cetera?

5:27

I interviewed over 250 people, so over 300 interviews. Over 90 of those people were former or current OpenAI employees and executives. So the book covers the inside story of OpenAI's first decade and how it ultimately got to where it is today. But I didn't want to write a corporate book.

5:46

I felt very strongly that in order to help people understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley. These companies tell us that AI is going to benefit everyone and that's their mission. But you really start to see that rhetoric break down when you go to the places that look nothing like Silicon Valley,

6:07

that speak nothing like Silicon Valley, and that have a history and culture that are fundamentally different as well. And that's where you start to really understand the true reality of how this industry is unfolding around us.

6:22

Karen, I often try and steer conversations, but in this situation, I feel like it's probably my responsibility to follow. So with that in mind, I'm going to ask you, where does this journey begin and where should we be starting if we're talking about the subjects of empire of AI,

6:36

AI generally, artificial intelligence? And also I'd say, one thing I'm really keen to do in this conversation, which is I often see in conversations is left out, is let's assume that our viewers know nothing about AI. So they don't know what scaling laws are, or GPUs, or compute, or whatever.

6:51

And let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language, so that we can bring as many people with us as we possibly can. Yes.

7:01

Where should we start? I think we should start with when AI started as a field. So this was back in 1956, and there was a group of scientists that gathered at Dartmouth College to start a new discipline, a scientific discipline, to try and chase an ambition.

7:19

And specifically, an assistant professor at Dartmouth College, John McCarthy, decided to name this discipline artificial intelligence. This was not the first name that he tried. The previous year, he tried to name it automata studies. And the reason why some of his colleagues

7:35

were concerned about this name was because it pegged the idea of this discipline to recreating human intelligence. And back then, as is true today, we have no scientific consensus around what human intelligence is.

7:51

There's no definition from psychology, biology, neurology. And in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives. It's been driven by a desire to prove scientifically that certain groups of people are inferior

8:12

to other groups of people. There are no goalposts for this field and there are no goalposts for the industry when they say that they are ultimately trying to recreate AI systems that would be as smart as humans. How do we even define what that means?

8:28

And when are we going to get there if we don't know how to define the destination? And what that effectively means is that these companies can just use the term artificial general intelligence, which is now the term to refer to this ambitious goal to recreate human intelligence. They can use it however they want to, and they can define and redefine it based on what is convenient for them.

8:52

So in OpenAI's history, it has defined and redefined it many times. When Sam Altman is talking with Congress, AGI is a system that's gonna cure cancer, solve climate change, cure poverty. When he's talking with consumers that he's trying to sell his products to, it's the most

9:09

amazing digital assistant that you're ever going to have. When he was talking with Microsoft, you know, in the deal that OpenAI and Microsoft struck, where Microsoft invested in the company, it was defined as a system that will generate $100 billion of revenue. And on OpenAI's own website, they define it as highly autonomous systems

9:30

that outperform humans in most economically valuable work. This is like not a coherent vision of one technology. These are very different definitions that are spoken out loud to the audience that needs to be mobilized to ward off regulation or get more consumer buy-in into the industry's quest, or to get more

9:54

capital, more resources for continuing on this journey with ambiguous definitions.

10:00

I mean, speaking about different definitions through time, in 2015, in a blog post that Sam Altman wrote before OpenAI was officially announced, he explicitly outlined the existential risk by saying, development of superhuman machine intelligence is probably the greatest threat

10:16

to the continued existence of humanity. There are other threats that I think are more certain to happen, for example, an engineered virus, but AI is probably the most likely way to destroy everything.

10:27

In general, when Altman is writing for the public or speaking for the public, he does not just have the public as the audience in mind. There are other people that he is trying to motivate or mobilize when he says these things. And in that particular moment,

10:45

Altman was trying to convince Elon Musk to join him on co-founding OpenAI. And Musk in particular was spending all of his time sounding the alarm on what he saw as a huge existential threat that AI could pose. And so in that blog post,

11:03

if you look at the language that Altman uses side by side with the language that Musk was using at the time, it mirrors all the things that Musk was saying.

11:11

It's identical. Yeah. I mean, for 10 years ago, Musk was going on podcasts, saying, tweeting, whatever, that the greatest existential risk to humanity was AI.

11:19

Yeah, and so, you know, like, parenthetical, there are other things that we, that might actually be more likely to happen, like engineered viruses. It's because up until then, Altman had been talking just about engineered viruses. And so now that he needs to pivot

11:35

to speak to an audience of one, to Musk, he needs to kind of resolve the contradiction between what he's now elevating as his new central fear to be the same as Musk's new central fear with what he had previously been saying. So that's why he's like, I think this is now,

11:52

even though before I said this.

11:55

And are you saying that Sam Altman manipulated Musk? Because Elon did end up donating a huge amount of money to OpenAI and co-founding it, I believe, with Sam Altman.

12:07

Elon Musk did end up co-founding it with Altman. And certainly from Musk's perspective, he does feel manipulated, because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor. And of course, then Musk leaves, and through some of the documents that came out

12:32

during the lawsuit that Musk and Altman are engaged in now, it has become clear that there was a degree to which Musk was actually muscled out a little bit. And so that's why he's left with this very intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this.

12:54

So in 2015, Sam Altman is writing these blog posts saying this is, you know, one of the greatest existential threats. At the same time, in 2015, Musk is doing some very famous speeches at the time. At MIT, he said that AI was the biggest existential threat and compared developing AI to summoning the demon. And what you're saying here is you're saying that

13:12

Sam Altman was just mirroring the language that Elon was using to get Elon involved in OpenAI, and later it appears, and again, there's a legal case taking place now, that Sam might have muscled Elon out in some capacity.

13:25

Yeah, so we know from the lawsuit and the documents that have come out in the lawsuit that Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, chief technology officer at the time, when they were deciding whether or not

13:40

to maintain OpenAI as a non-profit, because it was originally founded as a non-profit, they decided, okay, we need to create a for-profit entity. But the question was, who should be the CEO of this for-profit entity? Should it be Musk or should it be Altman? Because they were the two co-chairmen of the non-profit. And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO. But through my reporting, I discovered

14:07

that Altman then appealed personally to Greg Brockman, who was a friend of his, that they'd known each other for many years through the Silicon Valley scene, and said, don't you think that it would be a little bit dangerous to have Musk be the CEO of this company,

14:25

this new for-profit entity, because, you know, he's a famous guy. He has a lot of pressures in the world. He could be threatened. He could act erratically. He could be unpredictable. And do we really want a technology that could be super powerful in the future to end up in the hands of this man? And that convinced Greg, and Greg then convinced Ilya, you know, I think there's a point here. Do we really want to give this much power to Musk?

14:55

And that is why Musk then leaves, because then the two switch their allegiances. They say, actually, we want Altman to be the CEO. And then Musk is like, if I'm not CEO, I'm out.

15:06

So it sounds like Sam again managed to persuade someone to do something. I guess this begs the question, what do you think of Sam Altman?

15:17

I think he's a very controversial figure.

15:19

You did an interesting pause. It's a pause where someone tries to select their words.

15:25

Well, this is what's so interesting about those interviews is people are extremely polarized on Altman. No one has in-between feelings about him. Either they think he's the greatest tech leader of this generation, akin to the Steve Jobs of the modern era, or they think that he's really manipulative and an abuser and a liar. And what I realized, because I interviewed so many people,

15:55

is it really comes down to what that person's vision of the future is and what their goals are. So if you align with Altman's vision of the future, you're gonna think he's the greatest asset ever to have on your side because this man is really persuasive.

16:12

He's incredible at telling stories. He's incredible at mobilizing capital, at recruiting talent, at getting all the inputs that you need to then make that future happen. But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision, even if you

16:33

fundamentally don't agree with. And this is the story, especially of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI.

16:42

So for people that don't know, Dario now runs Anthropic, which is the maker of Claude. A lot of people probably are more familiar with Claude.

16:49

Yeah, and it's one of the biggest competitors to OpenAI. And Amodei, at the time, when he was an executive at OpenAI, he thought that Altman was on the same page with him, and then over time began to feel that Altman was actually on exactly the opposite page of him,

17:11

and felt that Altman had used Amodei's intelligence, capabilities, skills to build things and bring about a vision of the future that he actually fundamentally didn't agree with. And so that's why people end up with this bad taste in their mouths.

17:28

And so, you know, I've been covering the tech industry for over eight years and covered many companies. I've covered Meta, Google, Microsoft, in addition to OpenAI. And Altman is the only figure that I've seen this degree of polarization with,

17:47

where people cannot decide whether he's the greatest or the worst.

17:53

You mentioned Dario there, and I found it really, what I found really interesting is to look at how people's quotes evolve over time with their incentives. So I was looking at all of the things they've said on the record on podcasts, in their blog posts, to see how it's evolved over time.

18:07

And Dario, who was the former VP of Research at OpenAI, and has now moved on to Anthropic, who are taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI, that, this is a quote, "I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen. My chance that something goes really quite catastrophically wrong on the scale of human

18:34

civilization might be somewhere between 10% and 25%." And also you mentioned Ilya, who was a co-founder of OpenAI, and then left. I guess the first question I'd ask is, why did Ilya leave?

18:49

That's a great question. So he was instrumental in trying to get Sam Altman fired. And he's another one of the people who, over time, began to feel like he was being manipulated by Altman towards contributing something that he didn't believe in.

19:06

How'd you know?

19:07

Because I interviewed a lot of people. Ilya in particular had two pillars that he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure that we get to it safely. And he felt that Altman was actively undermining both things.

19:29

He felt that Altman was creating a very chaotic environment within the company, where he was pitting teams against each other, where he was telling different things to different people.

19:39

Have you ever spoken to him?

19:40

I have, so I interviewed him in 2019 for a profile that I did of OpenAI for MIT technology review.

19:48

And back in 2019, he has a quote where he says, the future is going to be good for AIs regardless. It would be nice if it was also good for humans as well. It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful. And I think a good analogy would be the way that humans treat animals.

20:04

It's not that we hate animals. I think humans love animals and have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission.

20:14

We just do it because it's important to us. And I think by default, that's the kind of relationship that's going to be between us and AI, which are truly autonomous and operating on their own behalf. And that was in 2019, the year that you interviewed him.

20:29

One of the things that I feel like we should take a step back to examine is going back to this idea of what even is artificial intelligence and what do we mean by intelligence? And a huge part of the views of the different people and the quotes that you're reading

20:43

derives from a specific belief that they each have in this question of what is intelligence, what constitutes intelligence. For Ilya, he has, throughout his research career, felt that ultimately our brains are giant statistical models.

21:03

This is not something that we actually know, but this is his own hypothesis, also the hypothesis of his mentor, Geoffrey Hinton, who also was on this podcast. This is why they have such a strong conviction in the idea of building AI systems

21:18

that are statistical models, and that this particular approach is going to lead to intelligent systems as we are intelligent. It's a hypothesis that they have. It's not one that has been proven by science.

21:31

And some people vehemently disagree with them on this particular thing. But if you step into their shoes and take on that hypothesis and assume that it's true, that our brains are in fact statistical engines, and that these systems that they're building

21:48

are also statistical engines that they're making bigger and bigger and bigger until they become the size of the human brain. That's why they say that making this comparison where the system will become equal to human intelligence and then maybe exceed human intelligence

22:04

is relevant in their framework. And Ilya gave a talk at one point at this really prominent AI research conference that happens every year called Neural Information Processing Systems.

22:18

It's a mouthful. But he gave this keynote where he shows this chart of the size of brains and the intelligence of a species. And it's roughly linear. The bigger the size of the brain,

22:31

the more intelligent the species. And so for him, he thinks he's building a digital brain, because he thinks brains are just statistical engines. So from that logic, it's like, okay, if we then build a bigger statistical engine

22:48

than the human brain, then based on this chart, it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to. But it's really important to understand that these are scientific hypotheses

23:02

of specific individuals within the AI research community, and there's a lot, a lot of debate about whether this is in fact the case. And some of the biggest critics say it's very reductive to think of our brains as simply just statistical engines.

23:17

Why does it matter to know the mechanism? Is it not just important to know the outcome, which is that it's gonna be able to do, make a video for me, or agents are gonna be able to do the work that I do? Does it really, really matter for us to know the mechanism behind it?

23:36

Yes and no. So it matters because these companies, they are driving their future actions based on this hypothesis. So they have decided, we think that this hypothesis is true,

23:53

like we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence. And that's then having global consequences. Like in order to continue doing that, they're hoovering up more and more data.

24:07

They're building more and more data centers. They're exploiting more and more labor in order to continue on this path. Here's a question that I think is important to ask: why are we trying to build AI systems that are duplicative of humans?

24:23

We're kind of having this conversation right now where we've just taken the premise of this industry as a good thing. Like they said that we should be building AGI, so we say that we should be building AGI. But I would like to ask, like, why are we doing that? Why is it that we are building a technology that is ultimately designed to replace and

24:44

automate people away. That is not the enterprise of technology. Like, we should be building technology and the purpose of technology throughout history has been to improve human flourishing, not to replace people.

25:00

And so this is like a critical part of my critique of these companies and these scientists that have just adopted this goal and have relentlessly pursued it and have had enormous capital and enormous resources to pursue it.

25:14

Is this the right goal? Like, why are we doing this? Why can't we just build AI systems that do things like accelerate drug discovery and improve people's healthcare outcomes, which are systems that have nothing to do with the statistical engines that they're trying to build

25:31

to duplicate the human brain?

25:33

So why are they doing it? I mean, you've interviewed all these people. I think it's, what, 300 people in total? 80 or 90 of them from OpenAI, the maker of ChatGPT. Why do you think they're doing it?

25:45

I think it's because they're driven by an imperial agenda. And that is why I call these companies empires of AI.

25:50

What do you mean by an imperial agenda? What does that term mean?

25:53

Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do, and the scale that they operate, and what motivates them to do what they do. And there are many parallels that you see

26:09

between what I call the empires of AI and the empires of old. They lay claim to resources that are not their own in the pursuit of training these models. That's the data of individuals, the intellectual property of artists, writers, and creators.

26:22

They're land grabbing in order to build

26:24

these supercomputer facilities for training the next generation models. Second, they exploit an extraordinary amount of labor. They contract hundreds of thousands of workers all around the world, including in the US, to ultimately make these technologies.

26:40

We can talk about that more. And they also design their tools to be labor automating, so that when the technologies are deployed, they erode labor rights. And this is a political choice that they have made. Third, they monopolize knowledge production. So they project this idea that they're the only ones that really understand how the technology

27:03

works. And so if the public doesn't like it, it's because they don't actually know enough about this technology. They do this to the public, they do this to policymakers, and they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI.

27:17

You think they're gaslighting the public, in a way?

27:20

They are, yeah. So if most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?

27:31

No.

27:32

And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world. So they set the agenda on AI research in soft ways, simply by funneling money to their priorities, so that only certain types of AI research are produced.

27:50

But they also will censor researchers when they do not like what the researcher has found. And so I talk about the case of Dr. Timnit Gebru in my book, who was the ethical AI team co-lead at Google. She was literally hired to critique the types of AI systems that Google was building, and she then co-wrote a critical

28:14

research paper that was showing how large language models specifically were leading to certain types of harmful outcomes. And in an attempt to try and stop this research from being published, Google ended up firing Gebru and then fired her other co-lead, Margaret Mitchell. And so they control and quash the research

28:39

that is inconvenient to the empire's agenda.

28:42

Did you have an example where this is happening to journalists as well, that are asking questions of their team members? I think I was watching a video of yours where there was a young man that was saying he had someone show up at his door,

28:55

knocked on his door and asked for information, emails, text messages, and this person was from one of the big AI companies.

29:01

This was OpenAI, who started subpoenaing some of its critics, yeah, as part of what appears to be a campaign of intimidation, but also what appeared to be a campaign of fishing for more information, to figure out, to map out the network of critics further.

29:19

But this was a man who runs a small watchdog nonprofit,

29:25

and they had been doing a lot of work during that time to try and ask questions about OpenAI's attempt to convert from a nonprofit to a for-profit. Ultimately, OpenAI was successful in that conversion, but during the period where it was sort of existential for OpenAI to complete this conversion, there were a lot of civil society groups and watchdog groups like Midas

29:48

who were trying to prevent the process from happening in the dead of night. They were trying to get more transparency. They were trying to have more public debate about this because it's unprecedented. And it was then that there was a knock on his door and he was served papers.

30:08

What did the papers say?

30:10

The papers asked him to reproduce every single piece of communication that he had had that might've involved Musk. So this was like the strange paranoia that OpenAI had, that Musk was somehow funding these people to block the conversion.

30:23

None of them were actually funded by Musk. So in this particular case, in response to their request, he simply answered, you know, I don't have any documents, because these don't exist.

30:33

So going back to this point of empires, you were saying that one of the factors of an empire is a land grab. And then the next one was?

30:40

Was labor exploitation.

30:42

Labor exploitation.

30:43

The third one, controlling knowledge production. And one of the other ones that's really important to understand about the AI empires in particular is empires always have this narrative that they say to the public, like, we're the good empire. And we need to be an empire in the first place because there are also bad empires in the world.

31:07

And if you allow us to take all the resources and use all of the labor, then we promise we will bring you progress and modernity for everyone. We will bring you to this utopic state akin to an AI heaven. But if the evil empire does it first,

31:25

we will descend into a hell.

31:28

And the evil empire being in this case?

31:30

In this case, most often it's China. But actually in the early days, OpenAI evoked Google as the evil empire. So all of their decisions were about, we need to do it first, because otherwise Google, this evil corporation

31:44

that's driven by profit, versus us as a benevolent non-profit, like this is a critical contest of who wins.

31:54

Do you think the people building these AI companies believe that the outcome is going to be all good now? Do you think they think that it's going to serve everyone, it's gonna be the age of abundance, everything's gonna go well? What do you think they believe?

32:08

What do you think Sam believes?

32:11

So this is so funny. Such a core part of the mythology that they create around the AI industry includes the belief that it could go very badly. It goes hand in hand. Like they need that part of the myth in order to then say,

32:28

and that's why we need to be in control of the technology, because that's the only way that it's gonna go really, really well. And Altman has said publicly, you know, worst case, lights out for everyone, but best case, we cure cancer,

32:41

we solve climate change, and there's abundance. And Dario Amodei, same kind of rhetoric. It's like, worst case, catastrophic or existential harm for humanity. Best case, mass human flourishing. So this is like two sides of the same coin. Like they have to use both of these narratives in order to continue justifying an extremely anti-democratic approach to AI development

33:08

where there should not be broad participation in developing this technology. They must be the ones controlling it at every step of the way.

33:16

Sam Altman did a tweet saying, there are some books coming out about OpenAI and me. We only participated in two of them. One by Keach Hagey focused on me

33:28

and one by Ashlee Vance on OpenAI. He went on to say, no book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to.

33:41

You quote retweeted that tweet from Sam Altman and you said, the unnamed book, Empire of AI is mine. Do you believe that tweet from Sam Altman was in reference to your book?

33:55

100%, because there's only three books coming out about him.

33:58

And he caught wind that your book was coming out and-

34:00

He knew my book was coming out because I had contacted OpenAI from the very beginning of my process and said, I'm working on a book now, will you participate in it? And actually, initially they said yes, even though, so my history with OpenAI,

34:13

I profiled the company for MIT Technology Review. I embedded within the office for three days in 2019. My profile comes out in 2020. The leadership are very unhappy. And in my book, I actually quote an email that I received

34:28

that Sam Altman sent to the company about my profile saying, yeah, this is not great. And from then on, the company's stance to me was, we are not going to participate in anything that you do. We are not going to respond to anything,

34:49

any questions that you receive. And this was, you know, this was things that they explicitly articulated. It wasn't like me inferring. So I had a colleague at MIT Technology Review that also covered AI.

35:02

And at one point OpenAI sent him this press release being like, we would love for you to cover this story. And he was like, I'm really busy. Will you send it to Karen? And they were like, oh no, we have a history you understand. And so for three years, they refused to talk to me.

35:20

But then I ended up at the Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication. And so I started having, you know, more dialogue with them. Every time I wrote a piece, I would always send them, here's my request for comment. I would always ask them, like, will you sit for interviews?

35:41

And we did get to a more productive relationship. And then I embarked on the book. So I left the journal to focus on the book full-time, and I told them right away, I'm working on this book, I want to continue this productive conversation where I make sure I reflect OpenAI's perspective in the book. And so they were like, we can arrange interviews for you. You can come back to the office.

36:07

We'll set up some conversations. And then as we were going back and forth on this, the board fires Sam Altman. And that's when things started going kind of south because the company started becoming very sensitive to scrutiny. And so then they started pushing, kicking the can down the road, down the road, down the road.

36:28

And I kept saying, hey, when are we rescheduling this? What's going on? And then I get an email saying, we are not going to participate at all. You are not coming to the office. You're not doing interviews. And I had actually already booked my tickets.

36:40

So I was already going to fly to San Francisco to have the interviews. And so then I told them, I was like, that's fine. I will still engage in the process. Well, I'll give you extensive requests for comment. I'll ask through my reporting, I'll keep you updated on all the things that I'm finding

36:59

so that you can choose to still comment. I gave them 40 pages of requests for comment and I gave them over a month to respond to all of that. So when the tweet came out, we were doing all this back and forth, and that's when Altman tweeted this.

37:25

Sam Altman does a lot of interviews. Yeah. You know, he's doing a lot of interviews all the time. He's done every podcast. I've seen him on everything from Tucker Carlson to, I think he's done Theo Von, Joe Rogan.

37:36

Podcasts all over the world. I wonder why he won't do mine. Well, maybe. I don't know why. I think I'm fair with everyone. I just ask questions I genuinely care about. I don't come in with huge preconceptions, at least when I meet people for the first time, but I've heard through the grapevine, um, that

37:59

he doesn't want to do mine. I mean, going back to what you were saying earlier, that with this, the way that OpenAI and these companies control research, you asked, do they also do this with journalists? I mean, yes, the answer is yes. And apparently they also do it with anyone who has, you know, a broad mass communications platform.

38:20

It's not just about the conversation that you're going to have with them. It's about who you also choose to platform. And there's this huge problem in technology journalism where companies know that a really big carrot that they can give to technology journalists is access.

38:39

Yeah, yeah, yeah.

38:40

And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone that they didn't want you to speak to.

38:49

This is so true, and I don't think the average person really truly understands this. So this kind of sounds like theory as you say it, but I'm not gonna name names here because I don't think it's important, but there is a particular person in AI

39:03

whose team have basically dangled the carrot of them coming here for like 18 months. And I'm like, you don't have to dangle the carrot. I'm gonna speak to whoever I want to, regardless of the carrot or not. And when this person comes, if they want to come,

39:17

I'll give them a fair shot. I'll ask them all genuinely curious questions about what they're doing, their incentives. I won't gotcha them. I don't have a history of ever gotcha-ing anybody. Even if I have a difference of opinion, I'll ask the question. But they dangle carrots and they say, well, if he's thinking about it, let's think about a date. And what the strategy is, and I don't think people understand this, is: if we just dangle it, then they will perform in the way that we want them to. And they'll be pleasant about us.

39:48

They won't be critical. They won't give a voice.

39:51

They won't platform our critics.

39:52

Our critics. And I think a lot of their game is just dangle the carrot forever.

39:56

Yes, yeah.

39:57

That's like the optimal outcome is that we just dangle it. If we just tell them, yeah, we're just trying to look at the schedule, it just doesn't work. I think in the modern world, you just have to go there and give your opinion and allow the clash of ideas in the public forum. Let the viewers decide for themselves what they think.

40:12

Yeah.

40:13

But this is such a huge part of their machinery, is the way that they use these tactics to massage the public image of these companies and make sure that information that they don't want out, and even opinions that they don't want out there, don't go out there. And so this is, you know,

40:31

I feel very lucky now that OpenAI shut the door early on me. At the time, I didn't feel lucky. I felt like I had screwed myself over. I was like, should I have been nicer to them in the profile so that I could maintain access?

40:46

Which is a horrible question to ask as a journalist, right? Like you're supposed to report the truth and you're always supposed to report in the interest of the public. Like that is the point of journalism. And in that moment, I was like relatively junior

41:00

in my career, I was like, did I misunderstand what journalism is about? Like, should I have actually been playing the access game?

41:09

But it was too late.

41:10

I had the door shut to me. And so I had to build my career understanding that the front door was never gonna be open. And that actually really strengthened my own ability to just tell it like it is.

41:26

Be objective.

41:27

Yeah, and just report what I see are the facts being presented to me, irrespective of whether the company likes it or not. And most often the company really does not like it, but I can continue to do the work. They don't need to open the front door for me. I was still able to do more than 300 interviews.

41:45

So Sam Altman gets kicked off the OpenAI executive team. Did you find out why that happened?

41:57

Yeah, there's a scene-by-scene recounting.

42:01

From who?

42:03

I can't remember the exact number of sources, so I don't want to misquote myself, but it was around six or seven people that were directly involved or had spoken to people directly involved in the decision-making process. So, Ilya Sutskever is seeing these serious concerns

42:23

about the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company. He then approaches a board member, Helen Toner.

42:39

Ilya, for anyone that doesn't know, is the co-founder we mentioned earlier, the co-founder of OpenAI we mentioned earlier.

42:44

Yes. And he kind of does a bit of a sounding board thing to Helen just because Ilya's freaking out. He's been like sitting on these concerns for a while. And he's like, if I tell this to someone, this could also be really bad for me if Altman finds out.

43:07

And so he asks for a meeting with Toner, and in that first meeting, he's like, like he barely says a thing. He's just like dancing around trying to figure out, hey, is this someone that I can maybe trust to divulge more information?

43:25

And Toner's role and responsibilities at OpenAI were?

43:28

She was a board member at the time.

43:29

Just a board member.

43:30

Yeah, and specifically an independent board member. So OpenAI, when it was a nonprofit, the board was split between people who had a stake, financial stake in the company, and then people who were fully independent. And this was meant to be a structure that would balance the decision-making

43:46

to be in the benefit of the public interest rather than to be in the benefit of the for-profit entity that OpenAI then created. And Ilya, as a non-independent board member, was approaching Toner as an independent board member to try and see whether or not she was potentially

44:07

seeing or hearing the same things that he was about the effect that Altman was having on the company. This then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members. So Mira Murati was at that point the chief technology officer of OpenAI,

44:28

where these two senior leaders, essentially through these conversations and through documentation that they're pulling together, like email, Slack messages and so forth, they convey to the independent board members, three independent board members,

44:40

we are very concerned about Altman's leadership. Like he is creating too much instability at the company. He is the root of the problem. They were trying to say to these independent board members, like, the problem will not be fixed

45:01

unless Altman is removed because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore. And they're competing rather than collaborating on what's supposed to be

45:15

this really, really important technology.

45:17

When you say instability, that's quite a vague term. That could mean lots of things. Like instability could mean pushing people hard to work harder. What do you mean by instability? In as specific terms as you can possibly say them.

45:31

When ChatGPT came out in the world, OpenAI was wholly unprepared. They didn't think that they were launching a gangbusters product. They thought they were releasing a research preview that would help them get the data flywheel going, collect a bunch of data from users

45:48

that would then inform what they thought would be the gangbusters product, which was a chatbot using GPT-4, while ChatGPT was using GPT-3.5. And because of that, there were servers crashing all the time

46:06

because they had to scale their infrastructure faster than any company in history. And there were all of these outages. They were trying to also hire faster than any company in history to try and have more personnel there.

46:20

And they were then sometimes hiring people that they were like, actually, we made a mistake. We shouldn't have hired you. So they were firing people left and right. And people were just disappearing off of Slack.

46:31

And that's how their colleagues would learn that they were no longer at the company. And so it was, yes, like many fast growing companies, a very chaotic environment and a particularly chaotic environment because it was extra fast.

46:47

Like, they had to accelerate more than any other startup. And on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse. Like, he was not actually effectively ameliorating the circumstances of the chaos.

47:04

He was actually sowing more chaos, getting these teams to be more divided. And this is where it's important to understand that the executives and the independent board members, they're all operating under this idea that they're building AGI,

47:22

and that AGI could either be devastating or utopic to humanity. And so it's not, yes, it's like any other company and no, it's not like any other company. You cannot have, like in their view, you cannot have this degree of chaos as the pressure cooker for creating a technology that they, in their conception, could make or break the world. And so that is basically what the independent board members

47:52

also begin to reflect on. They have these conversations amongst themselves where they're like, well, based on what we're hearing about Altman's behavior, if this was an Instacart, would that warrant firing him?

48:05

And they concluded, maybe not, but this is not Instacart. And that's why they were like, well, crap, maybe this is actually, this does rise to the bar where we should consider replacing him because we are ultimately building a technology that we think could have transformative impacts,

48:28

either in the positive or negative direction. And so that is what happens. It's like these two executives, and then the independent board members also, they were hearing other feedback as well from their connections within the company,

48:38

with other people in the industry. At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, which is a tech startup in the Valley, he is at a party in San Francisco and he starts to hear some of these rumors that there's something weird about the way that OpenAI has structured its OpenAI startup fund, which was this fund that the company had created

49:05

to start investing in other startups. And he realizes they'd never really seen documentation about how the startup fund had been set up from Altman. And finally, they get the documents and it turns out that OpenAI startup fund is not OpenAI's startup fund, it's Altman's startup fund.

49:23

And this was something, like one of several experiences that the independent board members were also having where they're like, there's something not right about the fact that there continuously are inconsistencies between the way that Altman is portraying what is being done versus what is actually being done.

49:46

And so when these two executives approach the board or the independent board members, then they're like, okay, this lines up with also the experiences that we've been having. And at that point, they then have this series of very intense discussions

50:04

where they're meeting almost every day, talking about should we actually really consider removing Altman? And in the end, they conclude, yes, we should. And if we're gonna do it, we need to do it quickly because they were very concerned

50:21

that the moment that Altman found out his persuasive abilities would make it impossible to do. And so they end up firing Altman without telling anyone. You know, they don't talk to any stakeholders to get them on the same page. Microsoft gets a call right before they execute the action saying, we're going to fire Altman.

50:42

And Microsoft, for anyone that doesn't know, are a lead investor in OpenAI at the time.

50:46

Yes.

50:47

One of the only investors in OpenAI at the time. And that is what then devolves the whole thing, because every single person that is affected by this decision is now extremely angry that they were not involved. And that is what then creates this campaign to bring Altman back,

51:09

and then Altman is reinstalled as CEO days later.

51:13

This company that I've just invested in is growing like crazy. I wanna be the one to tell you about it because I think it's gonna create such a huge productivity advantage for you. WhisperFlow is an app that you can get on your computer

51:22

and on your phone, on all your devices, and it allows you to speak to your technology. So instead of me writing out an email, I click one button on my phone and I can just speak the email into existence. And it uses AI to clean up what I was saying. And then when I'm done, I just hit this one button here

51:36

and the whole email is written for me. And it's saving me so much time in a day because Whisper learns how I write. So on WhatsApp it knows how I am a little bit more casual, on email a little bit more professional. And also there's this really interesting thing they've just done. I can create little phrases to automatically do the work for me. I can just say Jack's LinkedIn and it copies Jack's LinkedIn profile for me because it knows who Jack is in my life. This is saving me a huge amount of time. This company is growing like absolute crazy. And this is why I invested in the business and why they're now a sponsor of this show.

52:05

And WhisperFlow is frankly becoming the worst kept secret in business productivity and entrepreneurship. Check it out now at whisperflow, spell W-I-S-P-R-F-L-O-W dot A-I slash Steven. It will be a game changer for you. There's a phase a lot of companies hit

52:21

where they're no longer doing the most important thing, which is selling, and they get really bogged down with admin. And it's often something that creeps up slowly. You don't really notice until it's happened; slowly, momentum starts to leak out. This happened to us, and our sponsor Pipedrive was a fix I came across 10 years ago, and ever since, my teams across my different companies have continued to use it. Pipedrive is a simple but powerful sales CRM that gives you the visibility on any deals in your pipeline.

52:46

It also automates a lot of the tedious, repetitive, and time-consuming parts of the sales process, which in turn saves you so many hours every single month, which means you can get back to selling. Making that early decision to switch to Pipedrive was a real game changer,

52:58

and it's kept the right things front of mind. My favourite feature is Pipedrive's ability to sync your CRM with multiple email inboxes so your entire team can work together from one platform. And we aren't the only ones benefiting. Over 100,000 companies use Pipedrive to grow their business. So if something I've said resonates, head over to pipedrive.com slash CEO where you can get a 30 day free trial, no credit card or payment required.

How does a CEO of a major company get fired by the board? Because, board members, there's a quote in your book on page 357 where you quote Ilya saying,

53:33

I don't think Sam is the guy who should have the finger on the button for AGI. Now, I asked myself this question. You know, I work with lots of people here. We have 150 people that work in this business and those people know me best.

53:48

They see me on camera, they see me off camera. So if they said that we don't think Stephen is the right person to host The Diary of a CEO, for example, it would take a lot for them to say that.

53:58

Yeah.

53:58

They must have seen some shit off camera for them to go, we don't think he's the right person to be on camera. Or for whatever reason. And in the case of AI, which is much more consequential than a podcast that is filmed in my old kitchen, it almost sends a chill down one's body to think that the co-founder of a business

54:15

has gone to the board and said, this isn't the guy to lead this consequential technology.

54:19

And it wasn't just Ilya. Mira Murati then also said, I don't think Altman is the right guy. And then they both left later. So then Altman comes back, and lo and behold, Ilya never comes back. So his concern that Altman finding out would be bad for him manifested. He ended up not coming back, and Mira Murati then left shortly thereafter.

54:41

Quite a lot of these people leave, don't they? OpenAI.

54:46

...the heart of Silicon Valley that was one of Elon Musk's favorites whenever he was coming up from LA to the Bay Area. And there was this dinner there where Altman was intending to recruit the OG team

55:14

that would start OpenAI. So he's kind of telling everyone, you might have a chance to meet Musk because Musk is going to come to this dinner and he cold emails Ilya and gets Ilya to then come because, and Ilya specifically wants to come because he wants to meet Musk. And he also emails all these other people

55:33

including Greg Brockman, Dario Amodei.

55:35

These are the people that ended up working at OpenAI.

55:37

And they all, almost all of them, not every one of them, but almost all of them end up working at OpenAI. And leaving. Almost all of them end up leaving specifically after they clash with Altman.

55:52

And Ilya, he left and launched a company called Safe Superintelligence.

55:58

Yeah.

56:00

Which is, I mean, that's an indirect if I've ever heard one. Do you know what I mean?

56:06

If someone like co-founded this podcast with me and then they left and started a podcast called Safe Podcasting, I'd take that as a slight. I'd have people knocking on their door and asking for their texts.

56:24

One of the things that is happening here is, it is not a coincidence that every single tech billionaire has their own AI company. They want to create AI in their own image. And that's why they keep not getting along. And in fact, it's not just don't get along.

56:47

They end up hating each other after working together and then splinter off into their own organizations. So after Musk leaves, he starts xAI. After Dario leaves, he starts Anthropic. After Ilya leaves, he starts Safe Superintelligence. After Mira leaves, she starts Thinking Machines Lab.

57:06

They want to have control over their own vision of this technology. And the best way that they have found, derived from their experiences of trying to put their vision into the arena, is by creating a competitor and then competing with OpenAI and with all the other companies out there.

57:33

Do you think some of these AI CEOs realise that they are quite literally summoning the demon, as Elon said 10 years ago, but they don't really care, because being the person that summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific, even if there's like a 20% chance of it being horrific? I remember, I think it was Dario,

57:56

he's the one that said, there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization. 25% is a one in four chance. If you put a bullet in a four-chamber revolver

58:14

and said, Stephen, the upside is you could become a multi-gazillionaire and be remembered forever. The downside is that there'd be a bullet in your head. There is no chance that I would take that bet with a 25% potential chance of things going catastrophically wrong.

58:31

So I have a very long answer to this, because do they know if they're summoning the demon? It really depends on what we define as summoning the demon. And in this particular case, to go back to what we were saying before, there's a mythology that the AI industry uses

58:49

where summoning the demon is an integral part of convincing everyone that, therefore, they can be the only ones that are developing this technology.

59:00

I got it. So, on one end, you've got to say, if we don't, China will, and that's terrible.

59:06

Yeah.

59:06

But if we let anyone else do it other than me, then we're fucked as well.

59:10

Exactly.

59:11

So that means that I have to do it and you have to give me money and support.

59:13

Exactly. So when they're saying these things, we should understand it as not as like a genuine prediction based on what they're seeing, because first of all, we don't predict the future, we make it. We should understand this as an act of speech to persuade other people into believing

59:32

that they should cede more power, more resources to these individuals. And so, do they know that they're summoning the demon? I mean, they are purposely trying to create this feeling within the public that they are,

59:49

because it is a crucial part of their power. But do they, if we were to define, just do they realize that the things that they're doing are having already really harmful impacts all around the world on vulnerable people, vulnerable communities, vulnerable countries,

1:00:08

that's where I'm like, maybe yes, maybe no, and they don't really care. Because in the frame of mind, like I sometimes use the analogy that the AI world is like Dune.

1:00:22

Dune, for anyone that doesn't know Dune.

1:00:23

Science fiction epic written by Frank Herbert. And it's set in this intergalactic era where there are all of these houses and they're fighting each other for spice. So it's a callback to colonialism and empire. And they all are trying to control the spice.

1:00:38

But one of the features of this story is that there are these myths that are seeded on the different planets, a religious myth, basically, about the coming of the Messiah, that are used as ways to control the people. And Paul Atreides, when he arrives at the planet Arrakis

1:00:56

with the intention of trying to then fight against the Empire and avenge his father's death, he steps into a myth that has been seeded on this planet that says that one day there will be a Messiah that comes and saves the planet. So he steps into the role of the Messiah

1:01:17

and leans into this idea in order to better control the people and rally them behind him as a leader to help with this quest. He knows that it's a myth in the beginning, but because he lives and breathes and embodies it,

1:01:36

it kind of starts to blur in his mind whether this is really a myth or whether he's really the Messiah. And this is what I think happens in the AI world. On one hand, there are all these executives that actively engage in myth-making,

1:01:53

because I have all of these internal documents that I write about in the book where they are very keenly aware of how to bring the public along with them by showing them dazzling demonstrations of the technology, by using, crafting a mission that will sound really good

1:02:12

and make people give more leniency to their companies. So they know they're doing the myth-making. And also, I think many of them lose themselves in the myth because they have to live and breathe and embody it day in and day out. And so when, you know, Dario says he thinks that

1:02:33

the probability that the future could be catastrophic is 10 to 25%, he is actively engaging in the myth-making, but also he's losing himself in the myth. Like, I think if you were to ask him, do you genuinely believe that?

1:02:47

He would be like, yes, I genuinely believe that. Because there's been a blurring of when he's saying something just to say something versus when he actually believes what he's required to believe in order to then continue doing the things that he's doing.

1:03:09

And this is the whole psychology of cognitive dissonance, right? Where the brain struggles to hold two conflicting world views at the same time. So it's incentivized or it endeavors to dismiss one. So if you wanted to be a healthy person, but also a smoker,

1:03:24

and I pointed out smoking's bad for you, the first words out of your mouth are going to be, yes, but smoking helps me with stress. Yes, but I only do it when, and I think, I don't know, I kind of see that at the moment because these companies have to raise extortionate,

1:03:37

like huge amounts of money to fund their AI research and they're building out all of these data centres. So when they're out in the public, they're always fundraising. All of these major companies are fundraising all the time at the moment. So you can't be fundraising and saying, I'm going to destroy your children's future, potentially. There's a 25% chance that your children aren't going to have a great life.

1:03:57

Which might be the truth.

1:03:59

I mean, that is actually what they say. This is what famously Dario Amodei does. He's like- He does that, but the others, Sam's not doing that as much anymore. Yes. And it's because, you know, it goes back to like each of them kind of distinguish themselves a little bit as the brand that they need to project.

1:04:17

Do you think any of them are more, have a stronger moral compass than others? Because I think Dario often gets the credit for having more of a, you know, more of a backbone and being more conscious of implications.

1:04:31

He does get a lot of credit for that.

1:04:33

He's from Claude and Anthropic, for anyone that doesn't know.

1:04:37

I don't think it truly matters, that question, the answer to that question, because to me, even if you were to swap all the CEOs for someone that people would say is better at running these companies, it doesn't fix the problem that I identify in the book, which is that there is a system of power that has been constructed where these companies and the people running

1:05:00

these companies get to make decisions that affect billions of people's lives around the world, and those billions of people do not get any say in how it goes.

1:05:10

Those people, they can go to the polls, right? So if the public are sufficiently educated, they can go to the polls and pick a leader that says they're going to legislate or pass laws or try and pass laws.

1:05:22

Yes, but at the speed and pace at which these companies operate and at the sheer scale and size, they're able to also spend extraordinary amounts of money, hundreds of millions in this upcoming midterms, to try and kill every possible piece of legislation

1:05:37

that gets in their way and craft legislation that would codify their advantage. And so to me, I think sometimes as a society we obsess a little bit with, are these leaders good or bad people? And to me, the bigger question is,

1:05:56

is the governance structure that we've created a sound one or that allows broad participation or an anti-democratic one that has consolidated this decision-making power in the hands of the few because no person is perfect. I don't care who is on the top of these companies.

1:06:13

They're not going to have the ability to make decisions on behalf of so many people around the world who live and talk and have a culture and history that are fundamentally different from them without things going wrong. And so that is why throughout history,

1:06:31

we've moved from empires to democracy. It's because empire as a structure is inherently unsound. It does not actually maximize the chances of most people in the world being able to live dignified lives.

1:06:49

I'm going to try and take on their point of view. So this is me playing devil's advocate. Okay. But Karen, if the US don't continue to accelerate their research with AI, at some point, China's model is going to become so smart and intelligent that we're basically going to have to rent it off them and we're going to be, you know, they'll get the scientific discoveries, they'll discover the new era of autonomous weapons and we will be their backyard. And like, logically, that

1:07:19

argument does appear to be pretty true. No, it's not. If we scale up, if we just imagine any rate of change with this intelligence, at some point, we're gonna come to a weapon that could theoretically disable, um, all of the United States' electricity, their weapons systems.

1:07:35

It would know exactly how to disable the United States from a cyber perspective, because it would be that smart. All you've got to imagine is any rate of improvement over any period, any sort of long period of time. So this is a theory that might be true. And if it's true, if-

1:07:51

I mean, yeah, any theory might be true.

1:07:53

But if, but, you know, again, going to this point of like, even if it's a small percentage, it's worth paying attention to, with the shoe on the other foot. This is a theory that people talk about. It could be the case that the most intelligent civilization is going to be the superior civilization. Logically, that's a pretty sound thing to say, no?

1:08:14

So there's a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument. And let's knock them down one by one. So the first one is that these systems are intelligent and that just scaling them

1:08:31

is gonna bring us more intelligence.

1:08:34

So far so true?

1:08:35

No, it's actually not, because first of all, again, we don't actually know if these systems are, like, intelligence is not, it's not like the right analogy, almost. It's sort of like,

1:08:50

it's like, is a calculator, a calculator can do math problems faster than a human. Does that make it intelligent?

1:08:56

It has a narrow intelligence because it's solving a narrow problem, which is like one plus one equals two, but.

1:09:01

And these systems, they actually also are quite narrowly intelligent in the sense that even though these companies say that they're everything machines that can do anything for anyone, they actually can only do some things for some people. This is like the jagged frontier of these AI models. Some of the capabilities are quite good, other capabilities are not that good.

1:09:20

You know why that happens? It's because the company can only focus on advancing certain types of capabilities. It can't literally focus on advancing all types of capabilities. They have to actually set their mind to advancing a certain capability by gathering the data that is needed for that capability, by getting a bunch of human contractors to annotate and train the model to do that exact thing. And so scaling these models is actually a perpendicular question to...
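The capability-picking loop Karen describes, gather domain data, pay contractors to annotate it, fine-tune on the result, can be sketched in miniature. Every name here is invented for illustration, not any lab's actual pipeline:

```python
# Hypothetical sketch of capability targeting: a lab picks a lucrative
# domain (say, law), collects prompts, pays human contractors to write
# "gold" answers, and fine-tunes only on those pairs.

def build_sft_dataset(domain, prompts, annotate):
    """Turn raw prompts into supervised fine-tuning pairs for one domain."""
    return [{"domain": domain, "prompt": p, "target": annotate(p)}
            for p in prompts]

def contractor_annotation(prompt):
    """Stand-in for a human contractor writing the desired answer."""
    return f"[expert answer to: {prompt}]"

legal_data = build_sft_dataset(
    "law",
    ["Summarize this contract clause.", "Is this clause enforceable?"],
    contractor_annotation,
)
# Only the funded domains get data like this, which is why capability
# ends up "jagged" rather than uniformly general.
```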

1:09:58

I would argue that most of the top people in AI believe that the intelligence is gonna continue to scale for some time.

1:10:04

A lot of them do, like Geoffrey Hinton does. And again, it's back to his hypothesis about how human intelligence works and what the appropriate model of the brain is. His hypothesis throughout his career has been the brain is a statistical engine. But that's his hypothesis, and that is not universally agreed upon, especially among people that are not in the AI world. When you talk with neuroscientists and psychologists, people who actually study human intelligence

1:10:31

and the human brain, that is where you start to get a lot of debate and disagreement about this particular view that Hinton has. And so this is kind of one of the things: AI is already being used in the military and has been used in the military for a long time. But specifically accelerating large language models isn't the only path to getting military capability.

1:11:01

Like the companies would have to choose to specifically pick military capabilities to accelerate, not just like general intelligence. It's like, you know what I'm saying? Like they create this myth that they are actually pushing the frontier

1:11:15

of all of the capabilities of the model, but that's not what's actually happening internally. And I have, I had hundreds of pages of documents on like how they were specifically training models. They pick what capabilities they want to advance. And you know how they pick them?

1:11:29

It's based on which industries would be able to pay them the most money for their services. So they pick finance, law, medicine, healthcare, commerce. It's not actually intelligent like a baby, where the more that the baby grows up, they start having these general abilities. I think I have jagged intelligence, I'll be honest.

1:11:54

I wasn't going to say it, but I think I know a little bit about, no, I know a lot about a little

1:12:01

bit. Yeah, but you also have the capability to learn and acquire knowledge by yourself. And you also have the ability to choose what you're going to learn and acquire by yourself.

1:12:10

It's not easy. And it takes a lot more time than these models, it seems. Less compute.

1:12:14

And you can learn how to drive in one place and then immediately know how to drive in another place. These models cannot do that. Every time a self-driving car is shifted to another location, it has to completely retrain on that location. It's like all the self-driving cars, I mean, we're sitting in Austin right now

1:12:30

and there's all these self-driving cars that are driving through Austin.

1:12:34

But when one of them learns, they all learn, which is-

1:12:37

Well, it's just because it's an operating system that has an AI model as part of it and you're training the AI model, and then you deploy that AI model across all the self-driving cars.

1:12:48

Which is a big advantage, because if one Optimus robot learns one thing in one factory, they all learn it. And imagine that, imagine if humans, if we all learned what all the other humans learned, that would give us such an unbelievable competitive advantage.

1:13:02

I mean, one of the ways we did that is through communication. Or it could not, because they could be learning the wrong thing, which has also happened again and again with these technologies, is that all of them then learn the wrong thing, and they all have the same failure mode. I mean, part of the resilience of human society is that we do have different expertises,

1:13:17

and we also have different failure modes.

1:13:19

I think sometimes we hold AI models to a higher standard than we hold humans to. And in a weird way, because we're in Austin at the moment, and I'd hear people on stage go, ah, but, you know, them AI models, they hallucinate sometimes. I'm like, have you met a human? Like, I hallucinate all the time. I can barely spell or do math. So.

1:13:39

Yes, but it's once again, like using this analogy that was specifically picked in the early days of the field as a way to market these technologies. Like we're repeatedly using the intelligence analogy and relating these machines to human intelligence as a way to try and gauge whether or not it is good or worthy or capable in society.

1:14:01

I think the output is the thing that really matters, is the most consequential, which is like, okay, it might have a different brain, a different system, but does it arrive at the same capability? Like, is it able to do surgery on someone's brain? Is it able to drive a car?

1:14:13

Like my car drives itself in Los Angeles. I don't touch the steering wheel, and now there are the new Cybercabs. So I go, it doesn't really matter if it's using a different system. If it's navigating through the world as a car, it has a better safety record than human beings.

1:14:31

Then as far as I'm concerned, intelligence or not, it's like, you know.

1:14:36

Yes, but that was not the original argument that you made, which was like, these systems are just generally gonna become more intelligent across different things based on the prediction. This is a prediction that you're making, right? Like that. And this is a prediction that all the AI...

1:14:50

Ilya's making, Dario's making, Elon's making, Zuckerberg's making, Altman's making, Demis is making.

1:14:56

And do you know what the common feature of all of them is? They profit enormously off of this myth.

1:15:01

Elon has recently spearheaded the construction of Colossus, a massive supercomputer in Memphis, housing 100,000 GPUs, specifically to scale up their Grok AI models faster than their competitors. It appears that they've all converged around this idea that you can brute force your way

1:15:17

to greater, more generalized intelligence.

1:15:20

They've converged around the idea that you can brute force your way into models that they can sell to people for automating certain tasks that are financially lucrative.

1:15:30

And I heard Elon say that if you're a surgeon, there's just no point. He was like, don't train to be a surgeon. He says in a couple of years' time, Optimus and AI generally are going to be better than any surgeon that's ever lived.

1:15:40

Yeah. Do you think these things are true? Well, you know, I'm pretty sure it was Hinton that famously slash infamously said there would be no need for radiologists anymore. And he set a deadline that we've already passed. I don't remember how many years.

1:15:58

Radiology is doing great as a profession.

1:16:01

Do you think it will be in five years?

1:16:02

Okay, so this once again goes back to this question of like, why do we build technology and why should we specifically be building AI? Okay, and for me, like the whole project of technology development and advancement is not to advance technology for technology's sake. It's to help people... and early diagnoses of certain types of cancer

1:16:52

that then help improve the prognosis of the patient.

1:16:55

Do you believe that in the coming years, all the cars, pretty much all the cars on the road will be driving themselves? No. You don't think so? Mm-mm.

1:17:03

How come? Because of the way the technology works. Because these are statistical, I mean, currently the way that AI models are primarily developed, they're statistical engines. You have what's called a neural network,

1:17:22

Parameters, is this what they call parameters?

1:17:24

Yeah, pretty much. And you're just pumping a bunch of data into it, and then it's analyzing the data, finding all these correlations in the data, finding all these patterns. And then it's through those patterns

1:17:37

that the machine is then able to act autonomously, right? And so the way that they're training a self-driving car is they're recording all this footage, and then they have tens of thousands or hundreds of thousands of human contractors that literally draw around every single vehicle

1:17:56

in the footage, every single pedestrian, every single traffic light, every single lane marking, and label it exactly as such, so that then it's fed into an AI model that can identify all of these different components. And then it's connected to another piece of software

1:18:14

that is not AI that's saying, okay, if the AI model recognizes a pedestrian, we do not run over the pedestrian. If the AI model recognizes a red traffic light, we stop. And so, like, the thing about statistical engines is that it's based on probabilities.

1:18:34

It's not based on deterministic logic. So these systems make errors all the time, and it is technically impossible to get them to stop making errors.
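The split described above, a statistical perception model with a separate deterministic rule layer on top, can be illustrated with a toy sketch. This is not any vendor's actual stack; all functions and probabilities here are invented:

```python
# Toy illustration of probabilistic perception + deterministic rules.

def perceive(frame):
    """Stand-in for a trained classifier returning label probabilities.
    In a real system this is a neural network; here it's hard-coded."""
    return {"pedestrian": 0.92, "red_light": 0.03, "clear_road": 0.05}

def decide(probabilities, threshold=0.5):
    """Deterministic rules layered on top of the probabilistic output."""
    if probabilities.get("pedestrian", 0.0) >= threshold:
        return "BRAKE"   # never drive toward a likely pedestrian
    if probabilities.get("red_light", 0.0) >= threshold:
        return "STOP"    # obey traffic signals
    return "PROCEED"

action = decide(perceive(None))
# Because perception is probabilistic, a misclassification that falls
# below the threshold slips past the rules: the error rate can be
# reduced, but not driven to zero.
```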

1:18:48

Humans make errors way more than systems in this case. Yeah. Like the safety record is like, isn't it like 10 times more safe to be driven in a Tesla with autonomous driving than it is for a human to drive

1:18:59

a mile? It depends on the place. It depends on whether the Tesla was trained to specifically navigate the place that you're driving.

1:19:05

Because humans get drunk.

1:19:06

Because if it's in Mumbai, in some place in Vietnam, no, it would not be safer. I would much rather be driven by someone that has been driving in that place their whole life. I'm not arguing against the fact that in certain places where the car has been explicitly trained to drive in this place that

1:19:27

It has a better safety record than the humans that are driving in that place. But you specifically asked if I think that most cars in the world, in the United States because we're here, will drive themselves. I don't actually think that it's like imminently on the horizon. 10 years? No, I don't think so.

1:19:45

I sat with Dara from Uber and he's pretty convinced that his 9 million couriers will be replaced by autonomous vehicles.

1:19:51

I mean, how long have self-driving cars been invested in for? It's been more than 10 years. And what percentage of cars right now are autonomous on the US roads? So part of it is, it's actually not a technical problem, right?

1:20:08

Part of it is also a social problem. Like do people even trust getting into these vehicles? Part of it's also a legal problem, which is if the self-driving car kills someone, which it has happened.

1:20:20

Yeah, it has happened.

1:20:21

Who is responsible?

1:20:24

So in the case in LA, it was both Tesla and the driver, because the driver dropped their phone, they looked down, and this was a couple of years ago, I believe, and they went to grab their phone and they hit someone. And so it went to court and they were held both responsible, both the driver and Tesla.

1:20:40

In terms of Tesla, for pretty much everyone that gets the car, it comes with autonomy now, I believe.

1:20:49

Partial autonomy.

1:20:50

Yeah, it's called full self-driving at the moment.

1:20:51

I mean, yes, it is called full self-driving.

1:20:54

Full self-driving supervised, where you kind of have to be looking in the right direction.

1:20:59

Yeah, so it's partial autonomy.

1:21:01

And here in Austin, it's full autonomy because there's no steering wheel on the new car. So you can't drive it anyway. But it is, you know, the Model Y is the undisputed best-selling car in the world across all brands.

1:21:14

I guess my point here is like these predictions where they say AI is going to completely change transportation and driving, lawyers aren't going to have jobs, accountants aren't going to have jobs.

1:21:27

Do you believe that they are true? Do you believe that there's going to be mass job displacement?

1:21:32

Okay, so I do think that there is going to be huge impacts on employment, and we are already seeing those impacts. It is not simply because the AI models are just automating those jobs away. It is specifically because the models are improving in certain capabilities based on what the companies that are developing them choose to improve them on. And executives at other companies are then deciding to fire or lay off their workers because they think that AI can replace the worker,

1:22:05

irrespective of whether that might be true. And there have been cases of like the Klarna CEO who laid off a bunch of people thinking that he would replace everyone with AI and then it didn't actually work and he had to ask some people to come back.

1:22:16

I actually DMed him about this. If you're hearing this, this is because I've DMed Sebastian and he's fine with me sharing this. He said, because I've heard his name mentioned a lot. And so when we talked about AI in the past and people mentioned Sebastian and Klarna as the example, I wanted to clarify with him what the truth was.

1:22:32

He said, it's great to hear from you. I think sometimes people struggle with two things can be true at the same time. I think it might be time to come back on your podcast. To your point, this is the media misinterpreting my tweet. We are doubling down on AI more than ever. Klarna is shrinking with almost 100 employees per month due to AI. We used to be 7,400 at the peak.

1:22:53

A year ago, 5,500. Now we're 3,300. And by the end of summer, so this was last year, we'll be 3,000 people. AI handles 70% of our customer service conversations at this moment. This is because we have realized that with AI,

1:23:10

the production cost of software comes down to almost zero, just like manufacturing used to be all handcrafted and then the machines came. Code used to be all handcrafted up until a few years ago. And now it is machine produced. And ultimately we pay people more than ever

1:23:26

for the unique handcrafted man-made stuff. Klarna is a bank. People will want to connect to humans, not only machines. They want us to be personable, relatable, even flawed. So we need to make sure while we are automating, replacing with AI in parallel,

1:23:41

we make sure we offer a super available human experience.

1:23:47

I'm really glad you read this because I think it touches on some really important nuances to the AI, like the impact that AI is going to have on employment. So I think there's often these binary narratives. It's like AI is going to come for every job, or people say AI is not actually working and it's not actually coming for jobs. And like, the reality is it's coming for jobs.

1:24:12

There are definitely jobs that are being automated away because of the capabilities of their models. And there's also jobs that are being lost because executives are deciding to lay off the workers, even if the models don't match the capabilities because it's good enough. Like they would rather have the good enough model

1:24:27

for way cheaper.

1:24:28

Or they made a mistake with hiring. They bloated their team and it's a great convenient thing to say.

1:24:32

Exactly. Like there's many reasons, but like clearly we're already seeing impacts on the job market. Like the US jobs report that came out earlier this year showed that there has been a decline, a slowdown in hiring

1:24:48

across especially white collar professional industries.

1:24:53

And you saw Anthropic's report, didn't you, this week? The TLDR is, it matches kind of what you were saying, where Anthropic looked at exactly how people were using their models and they looked at what people are saying and they said that there's been a 40% reduction

1:25:07

in entry-level jobs in particular. And then they made this graph, which has gone viral over the internet. The red shows where we are now in terms of capability. And based on how people are currently using the models,

1:25:16

they-

1:25:17

That's their prediction. Extrapolate it out. The blue part will be the disrupted parts. This is the things that they say AI can do right now, but people don't realise it yet. So if you look at it, it's like, it's kind of all the stuff you'd expect. It's the physical, real world, human stuff, which robots maybe can do someday, like construction or agriculture, that are untouched, but like office and admin, like, say, finance stuff,

1:25:40

math. And notice that these are all the things that I just named that they purposely picked: finance, math, law, healthcare.

1:25:45

Media and arts, that's me cooked.

1:25:46

Yeah.

1:25:47

So, obviously, office and admin, I mean, they do focus a lot on like assistant type and managerial work. But the other thing that the Klarna CEO said was, people also want human experiences. So it's not actually just about the capabilities of the models. It's also about what people want. Like some things they would turn to AI for and some things they wouldn't, irrespective of whether or not AI is capable of doing it. But because of a preference that they want human to human interaction.

1:26:25

And so what we're seeing right now is, yeah, the thing that happens with every wave of automation, which is that there is a bunch of entry-level work that gets automated away. And there are also new jobs created, but the jobs that are created are in one of two categories.

1:26:43

There are people that get even higher skilled jobs. And what he was saying, like, we pay people more for like the handcrafted code now. And there's also the people who get way worse jobs. And so there was this amazing article in New York Magazine that was talking about how a lot of people

1:27:01

are getting laid off, and then they end up working in data annotation, which is the labor that I've been referring to throughout this conversation that companies need in order to teach their models the next thing that the companies are trying to automate.

1:27:17

And so like a marketer gets laid off and then they go and work for a data annotation firm to train the models on the very job that they were just laid off in, which will then perpetuate more layoffs if that model then develops that skill.

1:27:34

And the article was talking about how this has become a huge catch-all for a lot of people that are struggling with finding job opportunities right now, including like award-winning directors in Hollywood that are actually secretly doing this data annotation work

1:27:53

to put food on the table. And so when they talk about there's going to be mass unemployment and then there's going to be some new jobs created that we can't even imagine, I think a lot of these narratives rarely talk about, like, first of all, why are some jobs

1:28:09

going away? It's not just because of the model capabilities, it's also because of executive choices and because of the rhetoric that they use if they want to just downsize. But the other thing that is rarely talked about is the jobs, a lot of the jobs that are created are way worse than the jobs that were there. And it breaks the career ladder. So it's the entry level and the mid-tier jobs that get gouged out.

1:28:33

It's higher order jobs and then way more lower order jobs that get created. And so how do people continue to progress in their careers? There's no more rungs on the ladder.

1:28:46

I actually don't know the answer to this question, and I've been furiously trying to find a good answer to this question, because I can, you know, everything is theory. And for my audience, I would say most of my audience don't run businesses. A lot of them do, a lot of them aspire to, but they don't run businesses. So they're kind of, they're also in the land of theory. They're hearing lots of different things. Jack Dorsey does his tweet saying he's halving his headcount

1:29:05

because of AI. They don't know what's true. They don't know the sort of internal economics at Jack's company. And did he bloat the company during the pandemic? And he's just using this as an excuse

1:29:13

to make this share price spike seven points because his investors now think they're an AI company or whatever. Eventually I go, okay, what am I doing? I have hundreds of team members, probably 70 companies I invest in, maybe five or six that I'm like the lead shareholder in. What am I actually doing on a day-to-day basis right now? I also consider myself to be head of recruitment.

1:29:34

But in the last month in particular, I have met extremely capable candidates in terms of cultural alignment, hard work, those kinds of things. But I've had to take a great deal of pause because when I run the experiment of, can I get an AI agent to do that exact same thing?

1:29:48

The answer is increasingly yes, especially in a world of open clause.

1:29:53

And so what I'm curious about is, now you confront this decision where you're seeing, in this short-term period, you could just choose the AI agent.

1:30:10

And in the long-term period, there is no career ladder. So who are you promoting into these senior roles? Like, how do you resolve it for your own company?

1:30:16

Yeah, that's a good question. So there's kind of two ways I'm thinking about it. I think really deep expertise is very, very valuable because if you're now the orchestrator of potentially AI agents, it's really about having a deep understanding of the right question to ask.

1:30:29

And that's someone who has deep expertise on something. So I need my CFO, because if she's going to be orchestrating our team of agents that might be doing financial analysis or whatever else, she needs to understand what to tell them to do

1:30:42

in our company. And in turn, financial analysts can't do that. They need the 50 odd years of experience that Claire has. On the other end, I need Kaz. Kaz is 25. Kaz knows everything about AI agents.

1:30:54

He's a young Japanese kid who's highly, highly curious. On the weekend, he's building AI agents to solve problems in my life. I need those two kinds of thinking, which is highly proficient agent maxing young kids, or they don't necessarily need to be young,

1:31:08

but like really lean in high curiosity. That's creating a force multiplier in my business. And then I need deep expertise. Now, everything else outside of, there is another one I've thought of, another group, is like people with extremely great IRL people skills.

1:31:23

Because we do meet people in real life. We greet you when you arrive here. We greet, when we go for lunch with big clients that we have, whether it's Apple or LinkedIn or whoever it might be, we need to schmooze. And we have teams who are in person in the office.

1:31:38

So we do a lot of stuff IRL. And increasingly we're building communities, even for this show, we're doing community events all around the world. So we need people that are good at that as well. IRL, bringing people together in real life and organizing stuff.

1:31:53

And if you were to, all of the roles that could be done by AI agents, if you were to replace them with AI agents, do you think you would still have these three roles, pools of people to hire and promote into the three critical things that you need in the longterm?

1:32:09

If things carry on at the current rate of trajectory, one could assert that even those roles would experience pressure. If you just imagine, like people think of things either statically or linearly or exponentially. If you imagine an exponential rate of improvement,

1:32:24

which is kind of what I've seen, even like a 10% compounding rate of improvement, at some point, I think what remains is actually the IRL, irreplaceably human stuff, human to human. Our Maslowian needs of being in person like we are now

1:32:42

aren't gonna change. We need connection. Humans get very sick when they don't have other human beings in their life and strong, deep relationships. So that stuff is gonna matter a whole lot.

1:32:53

I have this contrarian weird take that actually maybe this is the first technology that's gonna deliver on the promise of making us human and connected. Because we're gonna be rendered useless at everything else other than what humans are good at. Because all the other technologies said,

1:33:05

oh, we're gonna make you more connected, connecting the world. And they disconnected the world and isolated the world. But maybe this is the one, it's so intelligent now that it doesn't need us to fuck around in spreadsheets anymore.

1:33:13

Do you see that actually happening in real time right now that it's making us more able to be in person, connected with one another, having deeper social community engagements?

1:33:28

Yes. Yeah. I'll give you some data points. Okay. Data point number one, the Financial Times released a report on social media usage

1:33:36

and what they saw is that 2022 was the peak and it's plateaued ever since. The generation that's plateaued the fastest and is heading down is the younger generations. The boomers are still off to the races, right? On Facebook and stuff.

1:33:48

And then you look at the way Gen Alpha are using social media, they're not posting as much. They call it posting zero. They're scrolling sometimes, but they're in dark social environments

1:33:57

like WhatsApp and Snapchat and iMessage. They're not like performing to the world. They also value IRL experiences much more than any other generation. They're like not getting smashed. We're seeing every brand has a run club. We're seeing run clubs exploding around the world. And we're seeing this real, almost innate realisation that technology let us down at some fundamental level. Like dating apps let us down, social networking kind of has let us down. And we're seeing, I think, maybe a bifurcation of society

1:34:26

where a lot of people are going, fuck this. Like, I want to go back to what it is to be a human. And I would imagine that in such a world where intelligence is so sophisticated that we no longer needed to sit at laptops and like, da, da, da, da, da, da, you're not going to see people sat at laptops, you're going to see something completely different. And I think maybe, you know,

1:34:46

and then we talk about robots and Optimus robots. Elon says there'll be 10 billion Optimus robots. Elon has been wrong with timing before. He's almost never been wrong on the big things completely. He's just, his timing has got a bad track record. So I think he's probably right.

1:35:03

You know, I think I've got some people on the way from Boston Dynamics and these other big companies like Scale.ai, and they're actually bringing the robots here to show them folding laundry, doing the dishes. I'm not saying that's what I would want in my home, but I think factory work is gonna completely change.

1:35:15

I think a lot of manual labor is gonna completely change. And I think we're gonna be forced to... Sebastian, who's the CEO of Klarna, has actually just called me.

1:35:26

Hello, Sebastian. You all right? Hey, how are you?

1:35:29

I'm good.

1:35:31

It's been a while. It has been a while since you were on the show. I was just saying, we do need to get you back on. I just had a couple of simple questions because, you know, I do a lot of interviews and Klarna's always mentioned, because I think the media has said that you like doubled down on AI and then reversed because it didn't work out. So I know I spoke to you a while ago and we exchanged a couple of DMs about it, but that was more than a year ago now

1:35:54

so I just wanted to get an update on Klarna's business, AI, agents and all of

1:35:58

that if possible. First and foremost, we were early on to release AI to support our customer service, which had that initial benefit of more calls being dealt with by AI, which customers liked because those calls or chat messages were much, much faster and more qualitative. Then since then, that has actually expanded slightly. What we did however try to communicate as well is that we believed in a world where AI is cheap and available, the value of human interaction will be

1:36:30

regarded as higher. So the future of customer service VIP is a human. We have hence doubled down on providing more of that. But at the same time, the efficiency gains within the company have continued. I mean, we used to be about 6,000 people and now we are less than 3,000, which is two, three years since we stopped recruiting, and at the same time our revenue has doubled, right? So you can clearly see that AI has allowed us to do more with less people, but we have avoided layoffs and instead relied on

1:37:05

natural attrition when people kind of move on to other jobs. I mean, from my perspective, we will continue to not really recruit much. I mean, we recruit a little bit here and there, but we expect that kind of natural attrition of 10-15 percent per year to continue on, to become fewer. I think the big breakthrough was really in November-December last year, where even the kind of most skeptical engineers, who are very well renowned and

1:37:36

appreciated, like the founder of Linux and stuff like that, basically said that coding has now been solved and hence, you know, you don't need to code anymore. And that was kind of a common sentiment. So I think in coding, in engineering work, there has been a tremendous shift in the last six months.

1:37:55

What do all these people go do, Sebastian?

1:37:58

I am optimistic. I mean, I think obviously people will have a lot of opinions about this topic, but I still believe that we are going to move towards a richer society. Now, in the short term, there could be more worry about what happens if people don't get a job and so forth, but I think in the longer term, I am optimistic what it means for society and humanity.

1:38:21

Thank you so much, Seb. I'll chat to you soon. Thank you for taking the time. I appreciate you, mate.

1:38:25

Thanks.

1:38:26

All right.

1:38:26

All right, see you.

1:38:27

Bye.

1:38:28

Bye. You know the little traditional SIM card that goes inside of our phones? They haven't changed at all since they were invented in the 90s. You have this physical piece of plastic

1:38:37

that means the moment you cross a border, that carrier can start charging you whatever they want. But there are alternatives, and today's sponsor Saily is one of them. It's an eSIM app that gives you a safe and secure data connection in over 200 destinations. All of their eSIMs have built-in cybersecurity, which is great if you're traveling for work and looking at confidential material. I've been using Saily whenever I travel because the connection is always reliable and it saves me a ton of roaming fees.

1:39:06

It also means I don't have to deal with all of the faff that surrounds sorting out a SIM everywhere I go. If you want to give it a try, download the Saily app from the App Store now and scan the QR code on screen. And if you want 15% off your first purchase, use my code DOAC when you get to checkout. That's DOAC for 15% off.

1:39:27

Keep that to yourself. This is something that I've made for you. I've realized that the Diary Of A CEO audience are strivers; whether it's in business or health, we all have big goals that we want to accomplish. And one of the things I've learned

1:39:39

is that when you aim at the big, big, big goal, it can feel incredibly psychologically uncomfortable because it's kind of like being stood at the foot of Mount Everest and looking upwards. The way to accomplish your goals is by breaking them down into tiny, small steps.

"Your service and product truly is the best and best value I have found after hours of searching."

Adrian, Johannesburg, South Africa

Want to transcribe your own content?

Get started free
1:39:55

And we call this in our team, the 1%. And actually this philosophy is highly responsible for much of our success here. So what we've done so that you at home can accomplish any big goal that you have is we've made these 1% diaries

1:40:08

and we released these last year and they all sold out. So I asked my team over and over again to bring the diaries back, but also to introduce some new colors and to make some minor tweaks to the diary. So now we have a better range for you.

1:40:22

So if you have a big goal in mind and you need a framework and a process and some motivation, then I highly recommend you get one of these diaries before they all sell out once again. And you can get yours at thediary.com. And if you want the link, the link is in the description below.

1:40:38

Any thoughts?

1:40:39

Well, I actually had thoughts on something that you said before he called, which is you were saying that the Gen Zers, there's this trend that they're actually disconnecting from technology. So they're becoming more in person. And then there's this other class of workers that are actually leaning into the

1:40:55

technology, but then becoming more human, because by leaning into the technology they're realizing that they should actually just be spending more time doing person-to-person interactions rather than staring at a spreadsheet. And so they're no longer doing the typing and whatever. I really want to go back to this New York Magazine piece that just came out.

1:41:13

Because what you're describing is true for a very specific category of people, which is often like the business owners and leadership within companies that actually can make these decisions on how they spend their time and what they ultimately do with their time. Data annotation is now one of the top jobs on LinkedIn, by the way. Yeah. So LinkedIn had a report that showed the top 10 jobs with the highest growth in the last

1:41:57

year. And data annotation is on that list. And for anyone that doesn't know what data annotation is? Yeah, so data annotation is the process of teaching these chatbots or any AI system to do what they ultimately are able to do. So the fact that ChatGPT can chat is because there were tens of thousands or hundreds of thousands of people that were literally typing into a large language model and showing it,

1:42:23

this is how you're supposed to then respond when a user types in a prompt like this. Before they did that work, ChatGPT didn't exist. You would prompt the model and the model would generate some text that was not in dialogue with the person.

1:42:39

It would kind of generate something that was adjacently related.

1:42:42

Is this what they call reinforcement learning where you kind of, you give it like a-

1:42:45

It's a part of the process of reinforcement learning. So you do data annotation, which is literally showing lots of different, you know, examples of things that you want the model to know. And then reinforcement learning is getting the model

1:42:58

to then train on those examples iteratively in a way that then gives the model some of those capabilities. And what the New York Magazine piece highlighted is many, many of the people that are getting laid off now or are struggling to find work. And these are highly educated people.
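The two stages just described, human annotation and then learning from human comparisons, can be sketched very roughly in code. Everything here is invented for illustration (the record shapes, the candidate names, and the win-counting rule); real pipelines use millions of examples and a learned reward model, not a vote counter:

```python
# Illustrative sketch only: toy data structures for the two stages the
# speakers describe -- human annotation, then a reward signal from human
# preferences. All names and data are invented for illustration.

from collections import Counter

# Stage 1: data annotation -- humans write the response a model *should* give.
annotations = [
    {"prompt": "What is the capital of France?",
     "ideal_response": "The capital of France is Paris."},
    {"prompt": "Summarize: the cat sat on the mat.",
     "ideal_response": "A cat sat on a mat."},
]

# Stage 2 (simplified): annotators also compare candidate outputs; those
# comparisons become the signal that reinforcement learning from human
# feedback trains on. Each tuple is (preferred, rejected).
preference_votes = [
    ("answer_a", "answer_b"),
    ("answer_a", "answer_c"),
    ("answer_b", "answer_c"),
]

def reward_scores(votes):
    """Count how often each candidate was preferred -- a crude stand-in for
    the learned reward model that RLHF trains on human comparisons."""
    return dict(Counter(winner for winner, _ in votes))

scores = reward_scores(preference_votes)
print(scores)  # prints: {'answer_a': 2, 'answer_b': 1}
```

The point of the sketch is only that both stages consume human labor: someone has to write the ideal responses and someone has to rank the candidates, which is exactly the annotation work the conversation turns to next.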

1:43:16

They're college graduates, PhD graduates, law degree graduates, doctors, and again, like award-winning directors, that are then struggling to find employment in the economy because the economy has been very much restructured by AI. They are then finding themselves serving this industry. And the industry is designed in a way that is extremely inhumane because of what the companies that use these data annotation services do.

1:43:47

like there's these third-party providers that are data annotation firms. An OpenAI, a Grok, a Google, they will hire these firms to then find the workers to perform the data annotation tasks that they need. For these firms, these third-party firms,

1:44:04

they are incentivized to pit workers against each other because they want this data annotation to happen at speed and as cheaply as possible so that they can also compete with one another in this middle layer to get the contract from the client. And so all of these workers that were interviewed

1:44:25

for this New York Magazine story talk about how they actually no longer have an ability to be human because they are waiting at their laptop to be pinged on Slack for when a project is gonna open up for data annotation because they've tried job hunting,

1:44:42

they literally can't find anything else. This is the thing that's gonna help them put food on the table for their kids. And there was this one woman who said, like, I have so much anxiety about when the project is gonna come, when it's gonna leave,

1:44:54

that when the project came, it was right when my kid was coming off of school. And I just started tasking furiously because I don't know where it's gonna go and I need to earn as much money as possible in this window of opportunity. So then when my kid came home and tried to talk to me,

1:45:10

I screamed at my child for distracting me. And then she was like, I've become a monster and I'm not even allowed to go to the bathroom or take care of my kids, let alone myself, because this industry that is absorbing more and more of the workers that are being laid off is mechanizing my

1:45:33

life, atomizing my work, devaluing my expertise, and then harvesting it for the perpetuation of this machine that all of these AI executives are saying is then going to come for everyone else's jobs. And so what you were saying about this class of workers, the business owners that get to become more human because there are all of these AI models now doing the tasks that they don't have to do anymore, it is at the cost of the vast majority of people

1:45:57

who are not business owners, that are struggling to find work and

1:46:10

getting absorbed into the work of then providing these technologies that the business owners can use. And instead of becoming more human, they feel like their humanity has been squeezed and diminished, and they have no ability to have control, agency, and dignity in their lives anymore.

1:46:31

I think this is a big question that kind of pertains to this graph here, which is, you know, all of these people, if we believe Anthropic's prediction of who will be disrupted, these people in these industries like arts and media, legal, life and social sciences, architecture and engineering, computer and maths, business and finance, and management,

1:46:53

and also office and admin. These people, if we believe this, would have to retrain at something else. And unlike the Industrial Revolution, where you might get 10, 20 years to retrain because factories take a long time to build,

1:47:04

the distribution layer that AI sits on top of is the open internet. So this is where ChatGPT can go, pop, and get hundreds of millions of users in no time at all and become the fastest growing company of all time. One of my fears is that this disruption takes place

1:47:18

at a speed where we can't transition.

1:47:21

And that was, you know, that, I think you said that sentence in the passive voice, the transition would happen at a speed, but who is driving that speed? It's the companies. The companies, yeah. And their race with one another.

1:47:36

Yeah. And so they are driving the transition to happen at a speed at which it would be really hard

1:47:44

to take care of all of the people that would be bulldozed over by the advanced technology.

1:47:49

This is one of the crazy questions that no one can answer for me when I sit with these people that are AI CEOs. I go, so what happens to the people? If you agree that this is gonna happen at super speed, you know, I've spoken to the CEO of Uber, Dara,

1:48:00

who said very similar things to what you're saying, you know, there'll be data labelling jobs, for example, for the drivers. But they can't all become data labellers. And there's a question around meaning and purpose and fulfilment, and the crisis that comes from losing your meaning in life. I also sit here with so many people

1:48:15

who talk about how their father lost their job in Iran or some other country and came to the United States and had to be a toilet cleaner, in one particular case, was a doctor in Iran, but came to the US and was a toilet cleaner, and had to deal with the sense of shame that that particular person felt,

1:48:32

and the lack of dignity that that caused, and how that made that person's self-esteem feel, and the depression, alcoholism that transpired from that. If this happens at a large scale across society, there's gonna be a ton of consequences like that.

1:48:45

I mean, these are like the core themes of my work. And the reason why I'm critical of these companies is that they are creating technologies in a way that creates the haves and have-nots in an extreme form. It's exacerbating the inequality

"The accuracy (including various accents, including strong accents) and unlimited transcripts is what makes my heart sing."

Donni, Queensland, Australia

Want to transcribe your own content?

Get started free
1:49:01

that we already see in the world. Like the people who have things will have way more riches. They'll have way more free time. They'll be allowed to be more human. But the people who don't have things are being squeezed even more.

1:49:16

And it's not just from a work perspective. I mean, I talk in my book also about the environmental and public health crisis that these companies have created, where they are building these colossal supercomputer facilities in communities all around the world. And they specifically pick some of the most vulnerable communities. We're sitting in Texas right now. One of OpenAI's largest data center projects

1:49:48

is being built in Abilene, Texas as part of the Stargate Initiative, which was an effort announced at the beginning of Trump's second administration to spend $500 billion on AI computing infrastructure. This facility, when it's finished,

1:50:05

will consume more than a gigawatt of power, which is over 20%. Yeah, over 20%. So this is actually a little bit inaccurate now. This was something that circulated online for a while, but there's updated numbers.

1:50:19

Just for someone that can't see, because they're listening on Spotify or something, it's a picture of the size of this facility. So this is not the Abilene, Texas one.

1:50:27

This is a Meta facility. So let's just talk about the OpenAI facility in Texas. That one would be the size of Central Park and it would run a million computer chips and it would require the power of more than 20% of New York City.

1:50:47

Do you know one of the things, which I found confusing, so I'd like to alleviate the dissonance, is I thought you were saying earlier that you didn't think the job disruption promises were real.

1:50:55

No, what I was saying is that when we talk about what these executives predict about the future, we need to understand that they are ultimately trying to influence the public in a way that allows them to continue maintaining control over the technology.

1:51:13

So- But objectively, do you think that the job disruption that they talk about where-

1:51:16

Yeah, yeah, I mean, I-

1:51:17

You think this is real?

1:51:18

Well, I-

1:51:19

Not necessarily- I don't want to comment specifically on like this chart, but it's like, we've already seen in job reports that there is a restructuring of the economy happening right now.

1:51:27

Yeah.

1:51:28

But going back to the data center, so this supercomputer facility, it's a Meta supercomputer facility, is being built in Louisiana, and it would be four times the size of the Abilene, Texas one and use half of the average power demand of New York City.

1:51:44

So it's one fifth the size of Manhattan. This makes it seem like almost all of Manhattan, but it would be one fifth the size of Manhattan. When these facilities go into these communities, what happens?

1:51:55

Power bills increase, grid reliability decreases. The facilities need fresh water to generate the power that runs them, as well as fresh water for cooling. And there have been lots of documented stories of communities that are already really constrained in their fresh water resources.

1:52:12

They're under a drought when a facility comes in. And then there are people, the community is actually like competing with this facility for fresh water. I talk about one of those communities in my book. And also, sometimes, these facilities, instead of connecting to the grid,

1:52:27

they instead, a power plant pops up next to it. So in Memphis, Tennessee, where Musk built Colossus, the supercomputer for training Grok, he used 35 methane gas turbines to power the facility. This is a working-class community, a black and brown community, a rural community,

1:52:46

that was not even told that they would be the hosts of this facility. And they discovered it because they literally smelled what seemed like a gas leak in all of their living rooms. And that's when they discovered that these methane gas turbines

1:53:02

were taking away their right to clean air. And this is a community that's already been facing a history of environmental racism. They had already had lots of struggles to access their right to clean air. And now there's this huge supercomputer

1:53:20

that's landed in their midst that is pumping thousands of tons of toxins into their air, exacerbating the asthmatic symptoms of the children, exacerbating the respiratory illnesses of other people. It's one of the communities that has the highest rates of lung cancer.

1:53:41

And so-

1:53:42

And that supercomputer is taking their jobs. And then they also have supercomputers taking their jobs.

1:53:46

So this is what I mean. It's like the haves and have-nots are fundamentally being pulled apart even further. Like if you, in this version of Silicon Valley's future, are in the unfortunate category of being a have-not, we are talking about you now getting a job that is way worse than what you had, because you might be doing data annotation

1:54:13

and you might be treated as a machine rather than as a human to extract value, the value of your labor for perpetuating this labor automating machine that these people are building. You might be competing with these facilities

1:54:27

for fresh water resources. They're also polluting your air. Your bills have increased, so the affordability crisis is getting worse. Like, how is that making people able to be more human?

1:54:41

What do we do about it?

1:54:43

Yes. Okay. So one of the analogies that I always use

1:54:51

is AI is like the word transportation. Transportation can literally refer to everything from a bicycle to a rocket. And we have nuanced conversations about transportation where we always say we need to transition our transportation towards more sustainable options. We need to transition towards, you know,

1:55:07

public transport, electric vehicles. And we don't ever say everyone should get a rocket to serve all of their transportation needs. Like, we're in Austin; if you use a rocket to fly from Dallas to Austin, that would just make no sense.

1:55:24

It's just a disproportionate use of resources to get the benefit of getting from point A to point B. This is how we should think about AI. So all of the models that we've been talking about, I like to think of them as the rockets of AI. They use an extraordinary amount of resources

1:55:40

and they provide benefit, some dramatic benefit to some people, they're also exacting an extraordinary cost on a large swath of people because of the costs of developing this technology. Why don't we build more bicycles of AI? This is things like DeepMind's AlphaFold, which is a system that predicts how proteins will fold

1:56:07

based on amino acid sequences. It's really important for accelerating drug discovery, for understanding human disease, and it won the Nobel Prize in Chemistry in 2024. And the reason why it's a bicycle of AI is because you're using small curated data sets. You just have data that pairs amino acid sequences with protein structures. So that means you need significantly less

1:56:34

computational resources to develop the system, which means significantly less energy, which means less emission, so on and so forth. And you're providing enormous benefit to people.

1:56:43

It feels like the horse has left the stable in this regard, because they've already taken people's IP, they've taken media, they train on this podcast. We know they do because it's been shown that they do. I think there's a button actually in the back end of YouTube now

"I'd definitely pay more for this as your audio transcription is miles ahead of the rest."

Dave, Leeds, United Kingdom

Want to transcribe your own content?

Get started free
1:56:57

that allows you just to click it and it says, we will train on your YouTube channel.

1:57:03

So the horse has kind of left the stable. If the horse truly had left the stables, they wouldn't have to train on anything anymore. Why is it that their appetite for data has actually expanded? It's because in order to build the next generations of their technologies, in order to have the technologies continue to be relevant and continue to update with the pace of new knowledge creation and society's

1:57:27

evolution, they need to train again and again and again and again. And why are they employing more and more data annotation workers over time? It's because they need more and more of that work over time. I mean, I've been reporting on data annotation work for over seven years now, and it's not gone down.

1:57:48

It's increased.

1:57:50

Do you think there's any chance of it going down? Do you think there's any chance of this sort of brute-force scaling approach, where you take data, you take computational power, energy, and you have the data labelers building out more and more parameters for the models, do you think there's any chance it's going to stop or go in a different direction other than

1:58:11

the one that's going in now? I would love to reframe the question and say, what should we be doing in this moment where it's not going down? Where we do recognize that actually these companies in this moment need continued resources, inputs, and labor to perpetuate what they are doing.

1:58:28

Yeah, because this sounds like stop. And I just feel like stop is like a hard, it feels like, I just think, you know, with the government in place, they're supporting these companies like crazy, globally, this is happening.

1:58:39

So I'm like, stop doesn't feel.

1:58:41

I always say, we need to break up the empire and we need to develop alternatives. And we are already seeing a flourishing of incredible grassroots movements that are applying an enormous amount of pressure to the way that the empire is trying to unfold its agenda.

1:58:58

80% of Americans in the most recent poll think that the AI industry needs to be regulated. Yeah, I saw that. When was the last time that 80% of Americans were on the same side of an issue?

1:59:07

No, yeah. When I have these conversations on the podcast, the comment section are clear. Yeah. There's no disagreement. There's no one in there going,

1:59:13

oh no, I think they should crack on.

1:59:14

Yeah. Dozens of protests against data centers have broken out all around this country in the US, all around the world. So what do we do about it? So these are people that are doing something about it. They are actually reasserting their agency and exercising democratic contestation against the ways that the empires are going about their business.

1:59:36

What goal should we be aiming at? So if I said to my audience, Janet at home, because this is kind

1:59:40

of what I see in the comments, it's hopelessness. It's like, what can I do? I'm just a... Yeah, well, the goal is not that we completely get rid of this technology. The goal is that these companies need to stop being empires. And the way I define a typical business versus an empire is that the empires are predicated on this idea that they do not have to provide a fair exchange of value with the workers who work for them or the people who use them or all of the other people that are involved in the supply chain of producing and deploying these technologies. They can extract and exploit and extract and exploit and get more value than what they offer. Whereas typical businesses, there is a fair exchange. You buy a service, you feel like you

2:00:19

got the same amount of value as what you paid. But for these data annotation workers, for example, they do not feel in any way that they're being paid the same value that they provide to these companies. So that's, for me, the North Star. We should be pushing back and holding

2:00:34

accountable these companies when they operate in an imperial way. And that's what we've seen with all of these people that are now literally protesting in the streets against data centers and having an enormous effect, by the way, actually stalling data center projects and also completely banning data centers

2:00:52

from being developed in their localities. We're seeing that with artists and writers that are suing these companies for intellectual property infringement and creating a huge public conversation about how we actually want to protect

2:01:06

our intellectual property? Like, three weeks ago, I met Megan Garcia, who is the mother of Sewell Setzer III, who is the 14-year-old who died by suicide after being sexually groomed by a chatbot on Character.AI.

2:01:23

And she, when that happened, I mean, obviously was incredibly devastated by what had happened to her son. She also decided to do something about it. She sued the companies and that lawsuit then sparked many other parents and families

2:01:41

who were actually experiencing similar things to sue these companies as well. That has created an enormous public conversation about what these companies are actually doing when they exploit and they extract. What is the cost to the lives of people around the world,

2:01:58

including children?

2:02:00

So what do you think my audience should do? If they agree with everything written in your book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, if they agree with everything said here, if they agree with everything we've discussed today,

2:02:13

they're concerned about their kids, they don't want everyone to become data labelers, they don't think that's a particularly great solution, what can they actually go and do?

2:02:21

When I was writing the book, the only discourse that was happening was that this is the best thing since sliced bread. Because of all of the actions of these people, speaking up when they're not happy with the things that these companies are doing, we now have 80% of Americans

2:02:38

that want to regulate this industry. And so I would say to people, think about all of the ways that your life intersects with the resources that the AI industry needs to perpetuate what they do, and also the spaces that they would need

2:02:53

to deploy these technologies to continue having broad-based adoption in their work. So you're a data donor to these companies. You could withhold that data, and that's what those artists and writers are doing. Like they're suing these companies to withhold,

2:03:10

to try and create mechanisms by which that data would then be withheld. You probably have a data center popping up around you. If you're at a school environment or a company environment, you're probably having a discussion in those environments right now

2:03:24

about what the AI adoption policy should be. And these companies, like, I was talking with some OpenAI employees just the other day, and they were telling me that it's understood internally that the revenue targets for the company are extraordinary and they need things to go flawlessly

2:03:45

for it to all work out. And so they would need every single person to adopt this, every single space to adopt this. They would need to be able to build their data centers at the speed that they're trying to build them. And so what I would say to every one of your viewers

2:04:01

is let's not make it go flawlessly if we don't agree with what they are doing.

2:04:06

Ah, okay, I got you.

2:04:08

And then let's build alternatives because the thing is, what I'm saying is not that these technologies don't have utility, it's that specifically the political economy that has emerged to support the production of these technologies right now

2:04:23

is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed with much more efficient methods with much less resource consumption. And we have a lot of different other AI systems at our disposal that are like the bicycles of AI that we also know provide extraordinary benefit at very little cost. So let's break up the empire

2:04:48

and let's forge new paths of AI development that are broadly beneficial to everyone.

2:04:53

It's strange. I think I've trained myself to deal with dichotomies in my head. And this, for me, is a dichotomy, where I, as a CEO and as a founder, as an entrepreneur and someone that loves technology, I think it's incredible.

2:05:08

It's absolutely incredible, AI. It's just so amazing and incredible. The things it's enabled me to do and create.

2:05:13

Yeah, because it's designed to enable people like you.

2:05:16

And my car driving in the morning and being safer, incredible. I think, you know, the billion odd people that use AI tools or chat to be here, whatever it might be, they'd probably say that it's added value to their life. But, and this is the part that people find confusing

2:05:31

that you can, and like, I invest in companies that are heavily using AI, but, and the big but is, is it possible to think that that is true and also think that there are significant unintended consequences, which the history of technology should have taught us to take a moment to pause and talk about.

2:05:48

Because-

2:05:48

I think this is absolutely, like you can have both of these things in your head. And what I'm saying is that this tension doesn't have to be a tension because we could actually preserve the utility and benefits of these technologies,

2:06:03

but actually develop and design them in a different way that doesn't have all of these unintended consequences.

2:06:08

Yes, and I think there needs to be a big social conversation, which is why I have so many conversations about AI in the show. There needs to be a big social conversation about being intentional about the social impact, the social and environmental impact.

2:06:21

And that conversation is not being had in government, from what I can see. The conversation takes place in the industry. And actually trying to pull it out of the industry and open people's minds to it is hopefully what we've been doing

2:06:34

over the last couple of months with this subject.

2:06:35

Because- I think it's actually been happening everywhere outside of the industry. And for local governments and state level governments, there have been huge conversations about this. Everywhere, like I've been on book tour,

2:06:48

I've been to dozens of cities around the world. People are having these crucial conversations everywhere. I have not gone to a single city.

2:06:57

Yes, everywhere.

2:06:58

Even here in South Byers.

2:06:59

Yeah, I haven't gone to a single city where the room is not packed and people are not wrestling with the same exact questions as every other person in every other room that I've been in.

2:07:08

Speaking of packed rooms, I know you've got to go. Because you've got a talk today. So we've got our last question, which is the closing tradition on this podcast. How would your advice to a friend with a terminal diagnosis differ from what you would do yourself?

2:07:23

That's a great question. Differ from what you would do yourself? Oh my God. I would tell them like, enjoy, like live life for yourself.

2:07:34

And take it easy. And yeah, I'm not taking it easy.

2:07:39

Well, I think it's a good thing you're not taking it easy, because you're leading a conversation which is incredibly important. And I think that's the thing. I think the conversation is the important thing. And so, you know, because of algorithms and echo chambers,

2:07:50

it's so rare to have a conversation these days, especially a long form one like this. So I think that's so important. And your book is, for anyone that's curious about, I think a lot of people would have learned a lot of stuff today, because I sit here and interview AI people all the time, and I've learned so much today. From reading your book

2:08:07

and the extensive objective perspective that your book takes, you're able to unravel all of these stories that we sometimes see in tweets and we don't know if they're true or not, because you've gone and met the people

2:08:17

and you've done your research and you're an incredibly intelligent person who clearly has humanity's interests as your North Star. And that shows up in everything you do and everything you say. So, please continue to fight in the way that you are. Because it's an incredibly important one.

2:08:33

And it's people like you that are, I think, galvanizing the world to take the collective action that we're starting to see everywhere.

2:08:41

Yeah.

2:08:42

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao. I'll link it below for anyone that wants to read this book. I highly recommend you do. It's a New York Times bestseller for good reason. Karen, thank you.

2:08:53

Thank you so much, Stephen.

2:08:55

YouTube have this new crazy algorithm where they know exactly what video you would like to watch next, based on AI and all of your viewing behavior. And the algorithm says that this video is the perfect video for you. It's different for everybody looking right now. Check this video out, I bet you might love it.
