Today on Decoder, I'm talking to former President Barack Obama about AI, social networks, and how to think about democracy as both of those things collide.
Barack Obama on AI, free speech, and the future of the internet
Barack Obama joined Nilay Patel on Decoder to discuss President Joe Biden’s executive order on AI, tech regulation, and more.
I sat down with President Obama last week at his offices in Washington, DC, just hours after President Joe Biden signed a sweeping executive order about AI. That order covers quite a bit, from labeling AI-generated content to coming up with safety protocols for the companies working on the most advanced AI models.
You'll hear Obama say he's been talking to the Biden administration and leaders across the tech industry about AI and how best to regulate it. And the former president has a unique vantage point here: he's long been one of the most deepfaked people in the world.
You'll also hear him say that he joined our show because he wanted to reach you, the Decoder audience, and get you all thinking about these problems. One of Obama's worries is that the government needs insight and expertise to properly regulate AI, and you'll hear him make a pitch for why people with that expertise should take a tour of duty in the government to make sure we get these things right.
My idea here was to talk to Obama the constitutional law professor more than Obama the politician, so this one got wonky fast. You'll hear him mention Nazis in Skokie; that's a reference to a famous Supreme Court case from the '70s in which the ACLU argued that banning a Nazi group from marching was a violation of the First Amendment.
You'll hear me get excited about a case called Red Lion Broadcasting v. FCC, a 1969 Supreme Court decision that said the government could impose something called the Fairness Doctrine on radio and television broadcasters because the public owns the airwaves and can thus impose requirements on how they're used. There's no similar framework for cable TV or the internet, which don't use public airwaves, and that makes them much harder, if not impossible, to regulate.
Obama says he disagrees with the idea that social networks are something called "common carriers" that have to distribute all information equally. That idea has been floated most notably by Justice Clarence Thomas in a 2021 concurrence, and it forms the basis of laws regulating social media in Texas and Florida, laws that are currently headed for Supreme Court review.
Lastly, Obama says he talked to a tech executive who told him the best comparison to AI's impact on the world would be electricity, and you'll hear me say that I have to guess who it is. So here's my guess: it's Google's Sundar Pichai, who has been saying AI is more profound than electricity or fire since 2018. But that's my guess. You take a listen, and let me know who you think it is.
Oh, and one more thing: I definitely asked Obama what apps were on his iPhone's home screen.
This transcript has been lightly edited for length and clarity.
President Barack Obama, you're the 44th president of the United States. We're here at the Obama Foundation. Welcome to Decoder.
It is great to be here. Thank you for having me.
I am excited to talk to you; there's a lot to talk about.
We are here on the occasion of President Biden signing an executive order about AI. I would describe this order as "sweeping." I think it's over 100 pages long. There are a lot of ideas in it: everything from regulating biosynthesis with AI to safety regulations. It mandates red teaming and transparency. Watermarking. These feel like very new challenges for the government's relationship with technology.
I want to start with a Decoder question: what is your framework for thinking about these challenges and how you evaluate them?
This is something that I've been interested in for a while. Back in 2015, 2016, as we were watching the landscape be transformed by social media and the information revolution impacting every aspect of our lives, I started getting into conversations about artificial intelligence and this next phase, this next wave, that might be coming. I think one of the lessons that we got from the transformation of our media landscape was that incredible innovation, incredible promise, incredible good can come out of it.
But there are a bunch of unintended consequences, and we have to be maybe a little more intentional about how our democracies interact with what is primarily being generated out of the private sector. What rules of the road are we setting up, and how can we make sure that we maximize the good and maybe minimize some of the bad?
So I commissioned my science guy, John Holdren, along with John Podesta, a former chief of staff who had worked on climate change issues, [and said], "Let's pull together some experts to figure this out."
We issued a big report in my last year [in office]. The interesting thing even then was people felt [AI] was enormously promising technology, but we may be overhyping how quick it's going to come. As we've seen just in the last year or two, even those who are developing these large language models, who are in the weeds with these programs, are starting to realize this thing is moving faster and is potentially even more powerful than we originally imagined.
"I don't believe that we should try to put the genie back in the bottle and be anti-tech because of all the enormous potential. But I think we should put some guardrails around some risks that we can anticipate."
Now, in conversations with government officials, the private sector, and academics, the framework I emerged with is that this is going to be a transformative technology that broadly [changes] the shape of our economy.
In some ways, even our search engines (basic stuff that we take for granted) are already operating under some AI principles, but this is going to be turbocharged. It's going to impact how we make stuff, how we deliver services, how we get information. And the potential for us to have enormous medical breakthroughs, the potential for us to be able to provide individualized tutoring for kids in remote areas, the potential for us to solve some of our energy challenges and deal with greenhouse gasses: this could unlock amazing innovation, but it can also do some harm.
We can end up with powerful AI models in the hands of somebody in a basement who develops a new smallpox variant, or non-state actors who suddenly, because of a powerful AI tool, can hack into critical infrastructure. Or maybe, less dramatically, AI infiltrating the lives of our children in ways that we didn't intend, in some cases, the way social media has.
So what that means then is I think the government, as an expression of our democracy, needs to be aware of what's going on. Those who are developing these frontier systems need to be transparent. I don't believe that we should try to put the genie back in the bottle and be anti-tech because of all the enormous potential. But I think we should put some guardrails around some risks that we can anticipate and have enough flexibility that [they] don't destroy innovation but are also guiding and steering this technology in a way that maximizes not just individual company profits but also the public good.
Let me make the comparison for you: I would say that the problem in tech regulation for the past 15 years has been social media. How do we regulate social media? How do we get more good stuff, less bad stuff? Make sure that really bad stuff is illegal. You came to the presidency on the back of social media.
I was the first digital president.
You had a BlackBerry, I remember. People were very excited about your BlackBerry. I wrote a story about your iPad. That was transformative: young people are going to take to the political environment, they're going to use these tools, and we're going to change America with it.
You can make an argument that I wouldn't have been elected had it not been for social networks.
Now we're on the other side of that. There was another guy who got elected on the back of social networks. There was another movement in America that has been very negative on the back of that election.
We have basically failed to regulate social networks, I'd say. There's no comprehensive privacy bill, even.
Right.
There was already a framework for regulating media in this country. We could have applied a lot of what we knew about "should we have good media?" to social networks. There are some First Amendment questions in there, important ones. But there was an existing framework.
With AI, it's more, "We're going to tell computers to do stuff, and they're going to go do it."
Right. We hope. [Laughs]
We have no framework for that.
We hope they do what we think we're telling them to do.
We ask computers a question. They might just confidently lie to us or help us lie at scale. There is no framework for that. What do you think you can pull from the failure to regulate social media into this new environment, such that we get it right this time?
Well, this is part of the reason why I think what the Biden administration did today in putting out the EO is so important. Not because it's the end point, but because it's really the beginning of building out a framework.
When you mentioned how this executive order has a bunch of different stuff in it, what that reflects is that we don't know all the problems that are going to arise out of this. We don't know all the promising potential of AI, but we're starting to put together the foundations for what we hope will be a smart framework for dealing with it.
In some cases, what AI is going to do is accelerate advances in, let's say, medicine. We've already seen things like protein folding and the breakthroughs that would not have happened had it not been for some of these AI tools. We want to make sure that that's done safely. We want to make sure that it's done responsibly, and it may be that we already have some laws in place that can manage that.
But there may be some novel developments in AI where an existing agency, an existing law, just doesn't work. If we're dealing with the alignment problem with some of these large language models, where even the developers aren't entirely confident about what these models are doing, what the computer's thinking or doing, in that case, we're going to have to figure out: what is the red teaming? What are the testing regimens?
In talking to the companies themselves, they will acknowledge that their safety protocols and their testing regimens may not be where they need to be yet. I think it's entirely appropriate for us to plant a flag and say, "All right, frontier companies, you need to disclose what your safety protocols are to make sure that we don't have rogue programs going off and hacking into our financial system," for example. Tell us what tests you're using. Make sure that we have some independent verification that right now this stuff is working.
But that framework can't be a fixed framework. These models are developing so quickly that oversight and any regulatory framework is going to have to be flexible, and it's going to have to be nimble. By the way, it's also going to require some really smart people who understand how these programs and these models are working, not just in the companies themselves but also in the nonprofit sector and in government. Which is why I was glad to see that the Biden administration's executive order is specifically calling on a bunch of hotshot young people who are interested in AI to do a stint outside of the companies themselves and go work for government for a while. Go work with some of the research institutes that are popping up in places like the Harvard [Applied Social Media] Lab or the Stanford [Human-Centered] AI Center and some other nonprofits.
We're going to need to make sure that everybody can have confidence that whatever journey we're on here with AI, it's not just being driven by a few people without any kind of interaction or voice from ordinary folks, the regular people who are going to be using these products and impacted by these products.
There are ordinary folks, and then there are the people who are building it who need to go help write regulations, and there's a split there.
The conventional wisdom in the Valley for years has been that the government is too slow. It doesn't understand technology. By the time it actually writes a functional rule, the technology it was aiming to regulate will be obsolete. This is markedly different, right? The AI doomers are the ones asking for regulation the most.
Yeah.
The big companies have asked for regulation. [OpenAI CEO] Sam Altman has toured the capitals of the world politely asking to be regulated. Why do you think there's such a fervor for that regulation? Is it just incumbents wanting to cement their position?
You're raising an important point. Rightly, there's some suspicion, I think, among some people that these companies want regulation because they want to lock out competition. As you know, historically, a central principle of tech culture has been open source. We want everything out there. Everybody's able to play with models and applications and create new products, and that's how innovation happens.
Here, regulation starts looking like, well, maybe we start having closed systems, and the big frontier companies (the Microsofts, the Googles, the OpenAIs, the Anthropics) are going to somehow lock us out. But in my conversations with the tech leaders on this, I think there is, for the first time, some genuine humility because they are seeing the power that these models may have.
"But in my conversations with the tech leaders on this, I think there is, for the first time, some genuine humility because they are seeing the power that these models may have."
I talked to one executive, and look, there's no shortage of hyperbole in the tech world, right? But this is a pretty sober guy who's seen a bunch of these cycles and been through boom and bust. I asked him, "Well, when you say this technology you think is going to be transformative, give me some analogy." He said, "I sat with my team, and we talked about it. After going around and around, we decided maybe the best analogy was electricity." And I thought, "Well, yeah, electricity. That was a pretty big deal." [Laughs]
If that's the case, I think they recognize that it's in their own commercial self-interest that there's not some big screw-up on this. If, in fact, it is as transformative as they expect it to be, then having some rules and protections creates a competitive field that allows everybody to participate, come up with new products, compete on price, and compete on functionality, but [prevents us from] taking such big risks that the whole thing blows up in our faces.
I do think there is sincere concern that if we just have an unfettered race to the bottom, this could end up choking off the goose that might be laying a bunch of golden eggs.
There is the view in the Valley, though, that any constraint on technology is bad.
Yeah, and I disagree with that.
Any caution, any principle where you might slow down is the enemy of progress, and the net good is better if we just race ahead as fast as possible.
In fairness, that's not just in the Valley; that's in every business I know.
It's not like Wall Street loves regulation. It's not as if manufacturers are really keen for the government to micromanage how they produce goods. One of the things that we've learned through the industrial age and the information age over the last century is that you can overregulate. You can over-bureaucratize things.
But take smart regulations that set some basic goals and standards: making sure you're not creating products that are unsafe for consumers; making sure that if you're selling food, people who go into the grocery store can trust that they're not going to die from salmonella or E. coli; making sure that if somebody buys a car, the brakes work; making sure that if I take my electric whatever and plug it into a socket anywhere, any place in the country, it's not going to shock me and blow up in my face. It turns out all those various rules and standards actually create marketplaces and are good for business, and innovation then develops around those rules.
I think part of what happens in the tech community is the sense that, "We're smarter than everybody else, and these people slowing us down are impeding rapid progress." When you look at the history of innovation, it turns out that having some smart guideposts around which innovation takes place not only doesn't slow things down but, in some cases, actually raises standards and accelerates progress.
There were a bunch of folks who said, "Look, you're going to kill the automobile if you put airbags in there." Well, it turns out people figured out, "You know what? We can actually put airbags in there and make them safer. And over time, the costs go down, and everybody's better off."
There's a really difficult part in the EO about provenance: watermarking content, making sure people can see it's AI-generated. You are among the most deepfaked people in the world.
Oh, absolutely. Because what I realized is, when I left office, I'd probably been filmed and recorded more than any human in history, just because I happened to be the first president when the smartphone came out.
I'm assuming you have some very deep personal feelings about being deepfaked in this way. There's a big First Amendment issue here, right?
Right.
I can use Photoshop one way, and the government doesn't say I have to put a label on it. I use it a slightly different way, and the government's going to show up and tell Adobe, "You've got to put a label on this." How do you square that circle? It seems very challenging to me.
I think this is going to be an iterative process. I don't think you're going to be able to create a blanket rule. But the truth is, that's been how our governance of information, media, and speech has developed for a couple hundred years now. With each new technology, we have to adapt and figure out some new rules of the road.
So let's take my example: a deepfake of me that is used for political satire, or just somebody who doesn't like me and wants to deepfake me. I was the president of the United States. There are some pretty formidable rules that have been set up to protect people who make fun of public figures. I'm a public figure, and what you are doing to me as a public figure is different than what you do to a 13-year-old girl, a freshman in high school. So we're going to treat that differently, and that's okay. We should have different rules for public figures than we do for private citizens. We should have different rules for what is clearly political commentary and satire versus cyberbullying.
Where do you think those rules land? Do they land on individuals? Do they land on the people making the tools like Adobe or Google? Do they land on the distribution networks, like Facebook?
My suspicion is that how responsibility is allocated is something we're going to have to sort out. Look, I taught constitutional law. I'm close to a First Amendment absolutist in the sense that I generally don't believe that even offensive speech, mean speech, et cetera, should be regulated by the government. I'm even game to argue that on social media platforms, the default position should be free speech rather than censorship. I agree with all that.
But keep in mind, we've never had completely free speech, right? We have laws against child pornography. We have laws against human trafficking. We have laws against certain kinds of speech that we deem to be really harmful to the public health and welfare. The courts, when they evaluate that, they say, "Hmm." They come up with a whole bunch of time, place, and manner restrictions that may be acceptable in some cases but aren't acceptable in others. You get a bunch of case law that develops.
"I do believe that the platforms themselves are more than just common carriers like the phone company. They're not passive. There's always some content moderation taking place."
There are arguments about it in the public square. We may disagree: should Nazis be able to protest in Skokie? Well, that's a tough one, but we can figure this out. That, I think, is how this is going to develop.
I do believe that the platforms themselves are more than just common carriers like the phone company. They're not passive. There's always some content moderation taking place. So once that line has been crossed, it's perfectly reasonable for the broader society to say, well, we don't want to just leave that entirely to a private company.
I think we need to at least know how you're making those decisions, what things you might be amplifying through your algorithm and what things you aren't. It may be that what you're doing isn't illegal, but we should at least be able to know how some of these decisions are made. I think it's going to be that kind of process that takes place. What I don't agree with is the large tech platforms suggesting somehow that [they] want to be treated entirely as common carriers, and [they're] just passive here.
That's the Clarence Thomas view, right?
Yeah. But on the other hand, we know [they're] selling advertising based on the idea that [they're] making a bunch of decisions about [their] products.
This is very challenging, right? If you say [social platforms] are common carriers, then you are, in fact, regulating them. You're saying they can't make any decisions. If you say they are exercising editorial control, they are protected by the First Amendment.
Yes.
Then regulations get very, very difficult. It feels like even with AI, when we talk about content generation with AI, or with social networks, we run right into the First Amendment over and over again. Most of our approaches (this is what I worry about) try to get around it so we can make some speech regulations without saying we're going to make some speech regulations.
Copyright law is the most effective speech regulation on the internet because everyone will agree, "Okay, Disney owns that. Bring it down."
Well, because there's property involved. There's money involved.
There's money. Maybe less property than money, but there's definitely money.
IP and hence, money. Yeah.
Do you worry that we're making fake speech regulations without actually talking about the balance of equities that you're describing here?
I think that we need to have (and AI, I think, is going to force this) a much more robust public conversation around these rules and agree to some broad principles to guide us. The problem is, right now, let's face it, it's gotten so caught up in partisanship, partly because of the last election, partly because of covid and vax and anti-vax proponents, that we've lost sight of our ability to just come up with some principles that don't advantage one party or another, or one position or another, but do reflect our broad adherence to democracy.
But the point I'm emphasizing here is this is not the first time we've had to do this. We had to do this when radio emerged. We had to do this when television emerged. It was easier to do back then, in part because you had three or five companies, and the public, through the government, technically owned the airwaves, and you could make these arguments.
This is a square on my bingo card: if I could get to the Red Lion case with you, I've won. There was a framework [in that case] that said the government owns the airwaves, and it's going to allocate them to people in some way, so we can make some decisions, and that is an effective and appropriate situation.
That was the hook.
Can you bring that to the internet?
I think you have to find a different kind of hook.
Sure.
But ultimately, even the idea that the public and the government own the airwaves, that was really just another way of saying, "This affects everybody, so we should all have a say in how this operates, and we believe in capitalism, and we don't mind you making a bunch of money through the innovation and the products that you're creating and the content that you're putting out there. But we want to have some say in what our kids are watching or how things are being advertised."
If you were the president now... I was with my family last night, and it came up that the Chinese TikTok teaches kids to be scientists and doctors, but in our TikTok, the algorithm is different. And at the notion that we should have a regulation like China's that teaches our kids to be doctors, all the parents around the table said, "Yeah, we're super into that. We should do that."
How would you write a rule like that? Is it even possible with our First Amendment?
For a long time, let's say under television, there were requirements around children's television. It kept on getting watered down to the point where anything qualified as children's television, right? We had a Fairness Doctrine that made sure that there was some balance in terms of how views were presented.
I'm not arguing good or bad in either of those things. I'm simply making the point that we've done it before, and there was no sense that somehow that was anti-democratic or it was squashing innovation. It was just an understanding that we live in a democracy, so we set up rules so that democracy works better rather than worse, and everybody has some say in it.
The idea behind the First Amendment is we're going to have a marketplace of ideas, that these ideas battle themselves out, and ultimately, we can all judge better ideas versus worse ideas. I deeply believe in that core principle. We are going to have to adapt to the fact that now there is so much content, and there are so few regulators, everybody can throw up any idea out there, even if it's sexist, racist, violent, etc., and that makes it a little bit harder than it was when we only had three TV stations or a handful of radio stations or what have you.
But the principle still applies, which is: how do we create a deliberative process where the average citizen can hear a bunch of different viewpoints and then say, "You know what? Here's what I agree with; here's what I don't agree with." Hopefully, through that process, we get better outcomes.
Let me crash the two themes of our conversation together: AI and the social platforms. Meta just had earnings. Mark Zuckerberg was on the earnings call, and he said, "For our feed apps, I think that, over time, more of the content that people consume is either going to be generated or edited by AI." So he envisions a world in which social networks are showing people perhaps exactly what they want to see inside of their preferences, much like advertising that keeps them engaged.
Should we regulate that away? Should we tell them to stop? Should we embrace this as a way to show people more content that they're willing to see that might expand their worldview?
This is something I've been wrestling with for a while.
I gave a speech about misinformation and our information silos at Stanford last year. I am concerned about business models that just feed people exactly what they already believe and agree with and are all designed to sell them stuff.
Do I think that's great for democracy? No.
Do I think that's something the government itself can regulate? I'm skeptical that you can come up with perfect regulations there.
What I actually think needs to happen, though, is that we need to think about different platforms and different business models. It may be that I'm perfectly happy to have AI mediate how I buy jeans online. That could be very efficient. I'm perfectly happy with it. So if it's a shopping app or thread, fine.
"Can we create other places for people to go that broaden their perspective and make them curious about how other people are seeing the world, so they actually learn something, as opposed to just reinforcing their existing biases?"
When we're talking about political discourse, when we're talking about culture, can we create other places for people to go that broaden their perspective and make them curious about how other people are seeing the world, so they actually learn something, as opposed to just reinforcing their existing biases?
I don't think that's something that government is going to be able to legislate. I think that's something that consumers, interacting with companies, are going to have to discover, and they're going to have to find alternatives.
Look, I'm obviously not 12 years old. I didn't grow up with my thumbs on these screens. I'm an old-ass 62-year-old guy who sometimes can't really work all the apps on my phone, but I do have two daughters who are in their 20s. It's interesting the degree to which, at a certain point, they have found almost every social media app getting kind of boring after a while. It gets old, precisely because all it's doing is telling [you] what you already know or what the program thinks you want to know or what you want to see. So you're not surprised anymore. You're not discovering anything anymore. You're not learning anymore.
So I think there's a promise to how we can... there's a market, let's put it that way. I think there's a market for products that don't just do that. It's the same reason why people have asked me, around AI, "Are there still going to be artists and singers and actors, or is it all going to be computer-generated stuff?"
My answer is, "For elevator music, AI is going to work fine."
A bunch of elevator musicians just freaked out, dude.
Even for the average legal brief, or let's say a research memo in a law firm, AI can probably do as good a job as a second-year law associate.
Certainly as good a job as I ever did. [Laughs]
[Laughs] Exactly. But Bob Dylan or Stevie Wonder, that is different. The reason is that part of the human experience, part of the human genius, is it's almost a mutation. It's not predictable. It's messy, it's new, it's different, it's rough, it's weird. That is the stuff that ultimately taps into something deeper in us, and I think there's going to be a market for that.
In addition to being the former president, you are a bestselling author. You have a production company with your wife. You're in the IP business, which is why you think it's property. It's good. I appreciate that.
The thing that will stop AI in its tracks in this moment is copyright lawsuits, right? You ask a generative AI model to spit out a Barack Obama speech, and it will do it to some level of passability. Probably C+. That's my estimation: C+.
It'd be one of my worst speeches, but it might sound sort of...
You fire a cannon of C+ content at any business model on the internet, and you upend it. But there are a lot of authors, musicians, and now artists suing the companies, saying, "This is not fair use to train on our data, to just ingest all of it." Where do you stand on that? As an author, do you think it's appropriate for them to ingest this much content?
Set me aside for a second. Michelle and I, we've already sold a lot of books, and we're doing fine. So I'm not overly stressed about it personally.
I do think President Biden's executive order speaks to [the idea that] copyright is just one element, and there's a lot more work that has to be done on this.
If AI turns out to be as pervasive and as powerful as its proponents expect (and I have to say, the more I look into it, I think it is going to be that disruptive), we are going to have to think not just about intellectual property. We're going to have to think about jobs and the economy differently. And not all these problems are going to be solved inside of industry.
What do I mean by that? I think with respect to copyright law, you will see people with legitimate claims financing lawsuits and litigation. Through the courts and various other regulatory mechanisms, the people who are creating content are going to figure out ways to get paid and to protect the stuff they create. It may impede the development of large language models for a while, but over the long term, that'll just be a speed bump.
The broader question is going to be: what happens when 10 percent of existing jobs can now definitively be done better by some large language model or other variant of AI? Are we going to have to reexamine how we educate our kids and what jobs are going to be available?
The truth of the matter is that during my presidency, there was a little bit of naiveté where people would say, "The answer to lifting people out of poverty and making sure they have high enough wages is we're going to retrain them. We're going to educate them, and they should all become coders because that's the future." Well, if AI is coding better than all but the very best coders, if ChatGPT can generate a research memo better than the third- or fourth-year associate (maybe not the partner who's got a particular expertise or judgment), now what are you telling young people coming up?
"If AI turns out to be as pervasive and as powerful as its proponents expect, we are going to have to think not just about intellectual property. We're going to have to think about jobs and the economy differently."
I think we're going to have to start having conversations about: how do we pay those jobs that can't be done by AI? How do we pay those better: healthcare, nursing, teaching, childcare, art, things that are really important to our lives but maybe, commercially, historically have not paid as well?
Are we going to have to think about the length of the workweek and how we share jobs? Are we going to have to think about the fact that more people [might] choose to operate like independent contractors? Where are they getting their healthcare from, and where are they getting their retirement from? Those are the kinds of conversations that I think we're going to have to start having, and that's why I'm glad that President Biden's EO begins that conversation.
I can't emphasize [that] enough. I think you'll see some people saying, "Well, we still don't have tough regulations. Where's the teeth in this? We're not forcing these big companies to do X, Y, Z as quickly as we should."
I think this administration understands, and I've certainly emphasized in conversations with them: this is just the start. This is going to unfold over the next two, three, four, five years. And by the way, it's going to be unfolding internationally. There's going to be a conference this week in England around international safety standards on AI. Vice President [Kamala] Harris is going to be attending. I think that's a good thing because part of the challenge here is we're going to have to have some cross-border frameworks and regulations and standards and norms. That's part of what makes this different and harder to manage than the advent of radio and television because the internet, by definition, is a worldwide phenomenon.
Have you used these tools? Have you had the "aha!" moment where the computer's talking to you? Have you generated a picture of yourself?
I have used some of these tools during the course of these conversations and this research, and it's fun.
Has Bing flirted with you yet? It flirts with everybody, I hear.
Bing didn't flirt with me. [Laughs] The way they're designed (and I've actually raised this with some of the designers), in some cases, they're designed to anthropomorphize, to make it feel like you are talking to a human. It's like, can we pass the Turing test? That's a specific objective because it makes it seem more magical. And in some cases, it improves function. But in some cases, it just makes it cooler. So there's a little pizzazz there, and people are interested in it.
I have to tell you that, generally speaking, the way I think about AI is as a tool, not a buddy. I think part of what we're going to need to do as these models get more powerful (and this is where I do think government can help) is also just educating the public on what these models can do and what they can't do. These are really powerful extensions of yourself and tools, but [they] are also reflections of yourself. So don't get confused and think that somehow what you're seeing in the mirror is some other consciousness.
You just want Bing to flirt with you. This is what I felt personally, very deeply.
All right, last question. I need to know this. It's very important to me: what are the four apps in your iPhone dock?
Four apps at the bottom: I've got Safari.
Key.
I've got my texts, the green box.
You're a blue bubble. Do you give people any crap for being a green bubble?
No, no. I'm okay.
All right.
I've got my email, and I have my music. That's it.
The stock set. Pretty good.
If you asked about the ones that I probably go to more than I should, I might have to put Words With Friends on there, where I think I waste a lot of time, and maybe my NBA League Pass.
That's pretty good.
But I try not to overdo it on those.
League Pass is just one click above the dock. That's what I'm getting out of this.
Exactly.
President Obama, thank you so much for being on Decoder. I really appreciate this conversation.
I really enjoyed it. I want to emphasize once again, because you've got an audience that understands this stuff, cares about it, is involved in it, and is working at it: if you are interested in helping to shape all these amazing questions that are going to be coming up, go to ai.gov and see if there are opportunities for you fresh out of school. Or you might be an experienced tech coder who's done fine, bought the house, got everything set up, and says, "You know what? I want to do something for the common good." Sign up. This is part of what we set up during my presidency, the US Digital Service. It's remarkable how many really high-level folks decided that, for six months, for a year, or for two years, devoting themselves to questions that are bigger than just what the latest app or video game was turned out to be really important and meaningful to them. Attracting that kind of talent into this field with that perspective, I think, is going to be vital.