Audio

Episode Summary

We’ve been talking a lot here on Theory of Change about artificial intelligence, both the implications of it from a technological standpoint and also how it works.

This is something that is going to create a fundamental change in our society, and that's irrespective of whatever improvements to the technology are developed in the future. Regardless of what you think about artificial general intelligence (AGI) or superintelligence, whatever you want to call it, the reality is that the technologies that exist in the here and now are going to have a huge impact on our economy and our individual lives.

Because of that, it's important to consider some of the political implications of artificial intelligence and to talk about the public opinion implications as well. What do people think about AI? What are their concerns about it? Who's interested in it?

There are a lot of other things to talk about, especially in the context of how AI is a product of the technology industry and so-called Big Tech, as people have taken to calling it recently. That's shaping public opinion as well.

So to talk about some of this with me today, we're bringing in Stephen Clermont. He's the polling director at Change Research, which is a progressive polling organization, and he recently did a survey about AI and public opinion. I couldn't think of a better guest to come on and talk about this.


Video


Transcript

MATTHEW SHEFFIELD: Welcome to Theory of Change, Stephen.

CLERMONT: Thanks, Matthew. Thank you for bringing us on to talk about our research. I'm excited for this conversation.

SHEFFIELD: All right. Yeah. Great to have you. So, we'll start with the idea of who's interested in artificial intelligence news.

You did a survey of this, and there are some interesting numbers here. So maybe give us the details on who this survey was among, and then we'll get into the table on the screen there.

CLERMONT: Thank you for that. Change Research was founded in 2017 to do quality polling online, to fill a need as polling moved beyond phones and the traditional ways polling had been done.

So we're able to go online and do high-quality surveys at relatively affordable prices. That allows us to fund our own surveys to really look at a topic in depth, like we did for AI and technology. The poll we're going to talk about is a survey of 1,300 American voters conducted nationally from February 22nd to 24th, all done online.

We have a proprietary technology that allows us to send ads to find survey respondents in a way that meets demographic targets on race, gender, age, ethnicity, and political preference. So what we're presenting here is reflective of the American electorate overall. And the key finding at the beginning is that people's familiarity with AI is fairly low.

We asked a question about familiarity with ChatGPT and other text-based tools that are available online. Only 7% said they were very familiar with it; 38% were familiar at any level. And when we asked about interest in it, as the chart you have up on the screen shows, about half of Americans are interested in AI.

It skews a little bit younger. People of color are as interested in it as white people, in fact a little bit more, particularly Hispanic men. But only 70% say that they've used it. And interestingly, some other research found that only 37% are comfortable with government using AI to make important decisions.

And I think when we're thinking about this topic, about the rollout of AI and how it's going to be perceived: across all the polling that we've been doing in swing states, at different levels of voter engagement, the general public right now is exceedingly negative on the direction of the country and on whether their income is keeping up with the cost of living.

The first question we asked here was just: how satisfied are you with the way things are going in America today? 77% are dissatisfied. 31%, which is a lower number but still pretty high, are dissatisfied with the way things are going in their own life. Everywhere we've asked, we give people the simple choice: is your income going up faster than the cost of living, about the same as the cost of living, or falling behind the cost of living?

About 70% of people, pretty much everywhere in the country, believe that their income has fallen behind the cost of living. That's gone up since the 2020 election, since the beginning of Covid; before that it was about 50%.

Two thirds of voters don't feel like they matter in the United States; only 38% believe they have a fair shot to succeed. And the number that has surprised me the most is that 68% believe today's children will grow up worse off than people are now. So as this technology is being rolled out, and the people supporting it are saying how life-changing and disruptive it's going to be, consider the level of insecurity people feel generally.

As you can see in our research, there's a lot of skepticism about the positive benefits of AI and anything that's labeled as disruptive and world-changing. We've heard that a lot over the last couple of decades, and the fact is, technology has improved our lives a lot.

The fact that we're having this conversation now, with me in Northern Virginia and you in California, being able to look at each other, definitely shows that it has. But in our poll, only 39% say that technology and the tech industry have improved the quality of their life. 34% say it's made life worse, and more than a quarter believe that things have stayed about the same.

So there is an inherent skepticism about technology, and people are seeing the downsides more than the positive benefits. And that is going to be a real problem for the enthusiasts, the people who are talking about this, tweeting about it, and writing about how this is going to change everything.

That message is not exactly going to resonate in the way that they think it is.

SHEFFIELD: Yeah. Well, one of the things that you looked at is that when you break it down by interest, just pure interest in AI, for people who work on a computer for most of their work time, 65% say that they're interested in it, whereas only 35% say they're not.

But for people who do not work on a computer for their work, it's only 45%. So that's a 20-point gap there, and I think that dichotomy is going to be significant, because of how a lot of companies are planning to use AI.

In many cases, as these systems currently exist, they're aimed at applications or jobs where there isn't a lot of computing done by the human. One example, and this is not AI necessarily, but automation: grocery stores, where we've got the proliferation of self-checkout stands where you can scan your own groceries.

And Amazon is trying to go much beyond that and completely eliminate clerks. They've had some experimental grocery stores in New York and Seattle and some other places where there are no employees other than the occasional stocker. You basically come in, take the stuff off the shelf, and the goal is that it will automatically bill you when you leave.

And that's definitely AI-powered. But it does seem, from the research you and others have done, that a lot of people may not be fully grasping what's in store in the near future.

CLERMONT: I would agree with that. And again, it's that level of skepticism.

One of the questions that we asked is: do you think companies that are deploying AI aggressively are doing it mostly to better serve their customers or to eliminate jobs? 60% believe it's to eliminate jobs; 21% believe it's to serve their customers better. And when we ask what impact AI products and AI software will have on various jobs and professions, most people believe that the impact will be negative.

With the exception of computer programmers; people are split on healthcare. But on a lot of other things, when people hear about the benefits for these different industries, they also hear that this will negatively impact jobs. A few years ago I did some focus groups in Northern Virginia for a client that was looking to develop a progressive narrative.

The thing that stuck out to me the most from that was one person, when we were talking about growth, what growth means, and its positives. She said pretty clearly: when I hear growth, all I can think about is that it means more traffic, it means housing prices will be unaffordable, and it means that essentially I will be trapped.

I can sell my house for more money than I paid for it, but where am I going to live? And there are lots of positive impacts of growth. Growth is one of the things that drives our economy, and when GDP grows, that's always viewed and reported as a good thing.

But people don't feel that. With the level of insecurity people have, there's an overall statistic that I like to show in presentations: right now the percent of income going to the bottom 90% is the lowest it's been in a century, and I think it has crossed under 50%. That level of stress creates the unstable political environment that we're in. And then an industry, or a lot of news stories, say some new product is going to change everything.

That's the reaction people have: well, this is going to hurt me. Another question we asked in the poll was how safe they would feel driving on the same road with automated cars and trucks, and almost three quarters say that they would feel unsafe in that situation.

So one of the things that I took from this research, if you're thinking from the perspective of a technology company that's trying to sell a new product, is to talk about it in ways that are actually going to help people. Being a pollster, it's like the technology industry saying to political candidates: you don't need consultants, you don't need people running your campaign.

AI will tell you what voters to target and what to say to them, and off you go. That would be threatening to me in my job. But if it's talked about as letting you process and think through a lot more data, where you can look at a thousand responses of people saying why they like someone and summarize that quickly, that's different.

I think there are ways you can talk about AI as a tool that's going to help people and help them do their jobs better, with benefits that come from that, in a way that people are not instantly hearing, because 20 percent of the employed people in our poll said that they think they will lose their job in the next few years because of AI.

SHEFFIELD: Yeah, although that was a little bit contradictory with what they said elsewhere, because when you asked them whether AI will have an impact on these various professions, you also asked them, well, what about your job? And there, 47%, the plurality, said it will have basically no impact.

But then 34% said it would have a negative impact, and 18% said a positive one. So I thought that was an interesting dichotomy in the responses. Have you thought about that at all?

CLERMONT: It's always interesting to me. It was similar to earlier in the poll: people are dissatisfied with the way things are going in the country, overwhelmingly negative, but about their own life, people are mostly positive.

We ask how the nation's economy is going, and it's largely going in the wrong direction. If you ask how your personal finances are going, 50% or more will say they're going in the right direction. So looking at a number like that, it doesn't surprise me that more people think: okay, this isn't going to matter for me. That 47% saying it's not going to matter for them is a high number.

The fact that a third of people would already admit that it's going to have a negative impact is a very large number for me. People do believe that things outside of their own experience are bad and getting worse. The thing to really worry about, particularly as someone who works on campaigns, is when people believe that they're going to be negatively impacted by something. That's when there's going to be real instability.

And I would argue that that third, the people saying it will have a negative impact on my own job, is one of those red-flag numbers. And the key for the tech industry and AI companies would be to recognize that and try to alleviate it immediately.

SHEFFIELD: Mm-hmm. Yeah. And to that point, you also asked people what they thought about whether Congress is currently doing things well.

And unsurprisingly, they said no in terms of regulation. But then you also asked them: should the US Congress create regulations around the safe use of AI that protect jobs and national security and prevent fraud? And it was interesting: 66% agreed with that, including a majority of Republicans.

To me this is yet another example of how the Republican electorate wants more of what Republicans would call a nanny state than the Republican elites do. 51% of Republican respondents in your survey said that they did want more regulations around that.

I mean, do you think these numbers are going to increase? It seems like they probably will, especially if people start making this more of an issue.

CLERMONT: I think it really does depend on how this is rolled out, and there does seem to be some degree of pulling back. Italy has already announced today that they're banning ChatGPT; that's a little more extreme.

The example of how not to do this, we saw with social media: it was all just rolled out, with no government involvement. And as we can see now, there are lots of positive impacts of social media and there are a lot of negative ones.

And as we've seen as different controversies envelop Facebook and now TikTok, there are always pledges to do better. But the challenge with AI really is that if it's put out and people use it and misuse it, the consequences are going to be really dire. People get caught up in scams, and there can be real individual and societal harm.

So the need to go slow is large. And it's not really surprising that people want regulation of this technology. But I think, more important, they want leadership, and the leadership needs to come from the government. It needs to come from within these companies, and it needs to come from nonprofits and people outside of government, as well as grassroots organizations.

There was an op-ed published yesterday in the Washington Post by the head of the Ford Foundation, which basically advocated: let's slow down with this. Let's really understand what this technology is and the best way to roll it out and regulate it. Part of what they were doing is a commitment to hire and pay for technologists to work in congressional offices and to get members of Congress up to speed on these technological issues, so that there isn't quite the gap between the companies and their leadership and much less tech-literate members of Congress. It shows a degree of thinking from outside government and outside these companies on how to do this better than the examples that we've had before.

That's definitely worth thinking about, as well as not having all the financial gains go to the companies that created the software and are using it, only to be brought back into society through philanthropy, but instead doing the whole rollout differently, from a public-private perspective, than we have with the other tech advancements we've seen over the last several decades.

SHEFFIELD: Mm-hmm. Well, and that is kind of interesting. There is a dichotomy between how the United States and most other industrialized countries are handling AI versus China, for instance. And that flows from a difference in how they handle technology generally. For instance, if you look at the way that TikTok is presented outside of China versus how it's presented inside of China, it's quite a different body of content.

In China it's much more educational, much less entertainment, music, dancing, or whatever. And as a country, they are interested in the idea of an industrial policy, and it seems like that sentiment is growing among Americans too, for more of a public-spirited direction for business and for technology in particular.

Did you see any of that in your findings here?

CLERMONT: I think the thing that is clearest to me is the need for real leadership on this. From looking at some other polling, there's majority support for protecting consumers with privacy and data protection laws. While we didn't test specific proposals, generally most things that relate to online privacy and data protection test at a 70% or higher level.

There was one poll where 55% support regulating AI the way the FDA regulates new drugs and medical devices, and generally anything requiring greater transparency from government and corporations has wide support. We did ask what would be the best thing to do: regulate the safe use of AI to protect jobs and national security, prevent fraud, and protect children, or basically leave entrepreneurs alone to determine the best way to use these products. 66% want regulations that do these things, and 18% basically say leave entrepreneurs alone.

But with the overall level of uncertainty people have about what AI's impact is going to be on the economy, on education, and how it's going to interact with social media and issues involving mental health, excess screen time, and less connection between human beings, any proposal that addresses those things will be eminently popular.

The libertarian approach, just leave everything alone and things will take care of themselves, is not one we've had a good experience with. And that's before you get to the degree to which the public has really thought about AI and even understands what ChatGPT and these very advanced algorithms can do.

And the ability to create synthetic audio, synthetic video, and synthetic pictures: all of that needs to be debated, worked through, and made transparent, not just put out there for us to deal with afterwards.

SHEFFIELD: Mm-hmm. And you did get into this idea of trying to tell whether content is fake or real. That was one really interesting part of the survey, and I'm glad you did it. You asked people to watch a video of President Biden that was nonsensical, and we're going to roll it here:

(Begin video clip)

FAKE JOE BIDEN: Where? Where is my script? Did I take my pills, honey? Do you have my pills up there?

FAKE JILL BIDEN: Yep.

FAKE JOE BIDEN: Don’t give me lip, girl. Papa’s not happy. Hey, don’t make me angry. You wouldn’t like to see me when I’m angry. That is a quote from the Mighty Ducks.

I’m hungry. Hey boy, give me some chicken nuggets. Thank you, sir. No need for thanks. Put me some fries too.

I wish I married that fine person, Nancy Pelosi. Oh my God. Oh Nancy, you are clearly the hottest girl in the world. Pull on your ears if you think I’m lying. If you have five friendships in a black mustache, the Ukrainian prophecy will probably die.

What would Jesus seriously do? He’s probably really busy. Let’s rob the Pope. Wonder Woman isn’t stopping by my store, sorry.

(End video clip)

SHEFFIELD: Tell us about the responses that you got on that.

CLERMONT: So basically, only 28% thought that was more funny and amusing than concerning. Almost a majority, 45%, believed it was more concerning than funny or amusing. And 26% say it's not funny, amusing, or concerning. So over 70% basically say that's not funny, and nearly half believed it's concerning.

And this was clearly fake. We could have done it more subtly, training something like an Obama or Biden voice on synthetic voice software, and really made it hard to tell whether it's real or not. I would assume most people would think it's real unless it's fully crazy.

But going into this, when you ask people how confident they would be that they could tell the difference between something that's real and something that's AI-generated or fake, I thought that people would overrate their confidence in telling what's real and what's not.

And I think the most surprising thing to me was that only 42% were confident that they could tell the difference between real and fake, and 46% said that they can't. About equal percentages say they're concerned about politicians lying to them versus so much fake content about politicians being put out there that they won't be able to tell the difference between what's real and what's fake.

It's hard to overestimate the level of concern people have over misinformation and really not trusting what is real and what is not. We did some work for a private client in early 2020 where we gave them a whole list of things to be concerned about as it relates to tech, and misinformation and political manipulation were at the top of it. And try thinking about AI from the perspective of a member of Congress.

There are lots of things that could be good about it. Members' offices are notoriously understaffed. If you're writing complex legislation, you have to do lots of research and bill drafting. You could tell an AI: I want to write this healthcare bill on this medical device; start with this piece of legislation that was passed 10 years ago and write legislation that will accomplish this outcome, and it instantly creates a draft. Or take correspondence: most members' offices use form letters to respond to the mail that comes in.

They have 15 or 20 or more different templates of things to respond to, and people get back a form letter signed by autopen from a member of Congress. They know that the member of Congress did not write to them. But you could train AI to write very personal responses to the situations that come in, responses that sound like real letters.

But as we saw after the mass shooting at Michigan State University last month, when AI was used to generate sympathetic responses to students, that was viewed very negatively in a lot of ways. So using AI as a member of Congress to be more personal does carry risks.

And that doesn't even get into the other way a member of Congress can communicate, which is through telephone town halls. If you were being malicious, you could clone a member of Congress's voice, call thousands of numbers and say, connect to this telephone town hall, have an entirely synthetic discussion where people ask questions of this member of Congress's voice and it responds to them directly, and then say weird things that undermine that member of Congress to that audience.

And no one would have any ability to know about it. In that type of communication, which doesn't get press, doesn't get earned media, and isn't publicly visible, you could seed things within that population to undermine faith in that member. People using these tools, unless they're regulated, are going to have really negative impacts.

And that's just from showing an obviously fake video of Joe Biden, where people said, okay, that's really concerning, that's not funny, about something that was meant to be funny.

The tools are so good that the ethical questions, as well as the political and law-enforcement questions, really need to be thought through.

SHEFFIELD: And you did the text version of this as well, where you had written statements promoting AI: a human wrote some, and then you had ChatGPT write some.

CLERMONT: Yeah.

SHEFFIELD: And you asked people, so you asked them two questions. One, can you tell the difference between AI generated text or content? And then here’s some for you to look at. And what you found was that only 42% of people said they were confident they could tell the difference, which I thought was interesting. I thought it might be a little higher than that.

But then when you asked, okay, who wrote this, a human or a computer? As you say in the report, people were no better than 50-50 at telling the difference. Right there, Stephen, what you did is a polling version of the Turing Test, which is this iconic idea within the field of computing: ask people, can you tell the difference between these responses?

And if they couldn't, then that program would be said to be capable of passing as intelligent. So you basically ran a Turing Test, and it looks like ChatGPT passed it.

CLERMONT: Yes, it did, in both ways. One was the statement that I wrote; maybe I'm just not as good a writer as a computer, which may very well be the case.

20% said that it was written by AI, 5% said it wasn't, 39% said they weren't sure, and 36% didn't care.

And then for the one written by AI, 27% said it was written by AI and 4% said it wasn't, with the rest either not sure or not caring. Not sure and not caring were pretty strong.

The other statements we tested were about the drawbacks. We asked ChatGPT to write something about the challenges of AI, and it wrote this statement: "As we continue to embrace artificial intelligence in more areas of our life, it's important to be aware of potential risks and drawbacks. Unregulated use of AI could lead to job displacement, biases, discrimination, and privacy concerns. Additionally, AI may not always be able to make ethical decisions or understand human values. We must proceed with caution and develop robust regulations to ensure that AI is used safely, ethically, and for the benefit of all."

That was written by AI, and 85% agreed with it; 54% strongly agreed. So it's another example of how AI can automate me out of a job.

And then I tested a few other potential statements about the challenges of AI. One was written from pure resentment: no one cared when factories were closed, so why should we care if lawyers, writers, and other people lose their jobs. Only 38% agreed with that.

We asked about a statement that technology companies will use AI to make things more productive, but the benefits will go to business owners and Wall Street; 80% agreed with that.

And then, basically: this is just something else from Silicon Valley, like cryptocurrency. Only 50% agreed with that. So in terms of writing a statement about the challenges of AI, the AI was able to write something that more people agreed with than any of the three variations that I tried.

SHEFFIELD: Hah. Yeah. Well, that's going to be revealing, perhaps. To that end, it reminds me of another portion of the survey, where you asked respondents what they thought the personal or industry-wide impact of AI would be, and 76% of them said that AI would have a negative impact on political campaigning.

Would you agree with them in that regard? Tell me what you think.

CLERMONT: I mean, it's hard. I don't think most people would say that political campaigns are the most uplifting things to begin with, so the fact that people believe they'll be made worse is a little bit challenging.

I think there's going to be a lot of risk for politicians and campaign staff using a lot of AI in their content. Ultimately, what I've gotten from this research and from looking at a lot of other polls on AI, more than anything else, is that people want to be more connected to human beings than they currently are.

I mean, 81% don't believe that AI will provide the same quality of service as humans. 71% believe that it won't provide the same level of entertainment. People want more connections with other human beings, and I think that's the other source of the hesitancy about AI. I say this as a smartphone addict myself: if I'm not in front of a computer or fully engaged, then it's right back to the phone to look at Twitter.

We know all of this stuff is not healthy. I know it's not healthy, and I think we all know to some degree that using our phones a lot is not particularly healthy. And the way to mitigate that is to have more human contact in our lives.

And in political campaigns, in a lot of ways, the biggest problem is not just that candidates are going to use AI and it's going to be more impersonal.

It really is a progression. When you read about politics in the 19th century, it is all about human connection. The presidential candidates would basically stay at their homes, and people in various communities would rally for different candidates. People would show up to parades and engage with representatives of a campaign on a human level.

As we've added the colder medium of television, where high-level campaigns mostly communicate via TV ads, it has made things more impersonal, and it makes it easier to run negative ads and campaigns that are largely negative. They were negative in the 19th century as well, but television, as an inherently colder medium, can spread that more widely.

I think there have been lots of positive ways that computers and technology have actually helped campaigns. When you look at the early Obama campaigns and how they used Facebook to organize local events for people and Obama supporters to meet one-on-one, that was part of the feeling that went into that campaign.

The first thing that clicked for me on Facebook and politics was when I was tracking Obama events in New Hampshire. They were having not just the candidate show up for a fundraiser, but encouraging people in communities to come together for a yard sale or a street event and then give the proceeds to the Obama campaign.

What that was doing was providing tools for people to meet together in real life. The worry with AI and political campaigns is that the more we retreat behind the computer and engage in politics through our computer, the more negative that's ultimately going to be for our discourse; it won't drive greater engagement, it'll drive lesser engagement.

If the tools allow greater human contact, that is the way to mitigate it. But ultimately, what people are looking for is greater human contact in their life, not more contact through their computer, which is gradually making things more efficient and cutting out the need for other human beings.

SHEFFIELD: Mm-hmm. Well, I think one thing that's almost certainly going to happen, and already exists to some extent now, particularly with Republican political candidates, is using bots on Twitter to amplify content.

In my former life as a Republican technology and media strategist, one of the things my old firm did was run the social media for the CPAC conference one year. And we were posting lines from the speeches that everybody was giving.

And we noticed that after the conference was over, there were accounts continuously retweeting the things that we had written. Scott Walker, the former Republican Wisconsin governor, was the number-one offender: there were these bots retweeting us at 2:00 in the morning, 4:00 in the morning: 'I gotta get this!'

And it was only Scott Walker and one other person, just those two. And obviously no human being was sitting there at 2:00 AM thinking, 'yes, that line from Scott Walker was the best thing ever. I gotta put it out there again.'

Nobody was doing that in real life. I think you're going to see a lot more of that, especially from Republicans, because they don't really care about getting majority support for their ideas, but they understand you have to at least pretend to have it.

And so that's why it is a thing in Republican consulting to have bots that you offer to candidates. And now that ChatGPT, for instance, has an application programming interface, or API, there are a lot of people out there who are going to say: okay, I'm going to use this API to look at my list of talking points here.

We're going to train it on my campaign messaging, and then we're just going to feed it out through Twitter or Facebook or wherever. I think that's absolutely going to happen, if it isn't already.

CLERMONT: I would agree with that. Although, just to push back a little, we all saw how much those bots did for Scott Walker's presidential campaign.

But putting my Democratic strategist hat on, the thing that I'm going to be looking for, and advising my clients to look for, is any hint that my opponent is not being genuine and is using bots and AI, because what they're doing is basically insulting the intelligence of the electorate.

Some voters are not really going to care about that; those aren't reachable anyway. But I'd be looking for any hint that my opponent is using AI in those ways to dehumanize the process and the campaign, and I'd call it out immediately, and really challenge them on using this technology in a way that treats human beings like they're easily programmable robots.

That's not how people want to view themselves. Any hint of that would be used to call them out constantly; if they're using bots in that way, discover it, amplify it, and make fun of them for it. And then find ways to actually win the way we want to, which, not to sound like a pompous liberal Democrat, is that the whole point of this is to improve people's lives: to make it easier for kids to get a high-quality education, for communities to have plentiful jobs, and all the things that Democrats are campaigning on, like everyone having healthcare and being able to afford to go to college or afford job training.

Any effort that’s trying to undermine that is something that we want to call out. And this is just another fair game to know that there are risks of using this technology in a way to depress voter turnout, stifle engagement and use media the same way an authoritarian country would use it to basically concentrate power into the leader. And call people out on that the second that they’re veering into that territory.

SHEFFIELD: Mm-hmm. Well, potentially they might also try to frame it as a joke: 'Here's me spamming the internet with 2,000 videos of Joe Biden saying horrible things. Haha. Isn't that funny?'

CLERMONT: Yeah.

SHEFFIELD: I mean, it may not happen in 2024, but it's probably going to happen pretty rapidly.

Recently Google released a generative text-to-video AI system where you literally just type in a prompt. The demo they had with it, which I saw, was kind of interesting, and we'll put a link to it in the show notes for people.

It was a teddy bear walking through New York City, and it literally generated a video. It was blocky and not fully convincing; you definitely knew it was fake. But with the way that you can map a face using a computerized mask system or something like that, you can definitely make convincing-looking fake videos of people.

And given what they've done with some of the pro-Trump online activism, they're just going to say: 'Oh, well, it's just a joke. What, you can't take a joke? What's wrong with you?'

I think that might be a little hard to push back on.

CLERMONT: Yeah, I think that's right. It is worrisome. And my own personal perspective is that it's going to be more worrisome if they do it effectively against Ron DeSantis and their opponents in the Republican primary. That's the same problem I had when I was laughing and enjoying Trump eviscerating Republicans before he got the nomination. That is concerning.

And I think it gets in the way of regulations that can clearly define what is real and what is fake, and that might not be able to happen. That's one of the existential crises we face, as people already recognize that not being able to tell the difference between what is real and what is fake is a problem.

Just with the technology that's available out there now, audio and video are enough to be concerning. Quite frankly, there was video the other day, or audio, of reporters hounding Kevin McCarthy over the Nashville school shooting, where he's saying, do you want to talk about movies, talk about anything other than this?

And there was something he said at the end that was weird, like 'come on, guys,' and the way he talked made me wonder: is this really Kevin McCarthy? Or is this clever audio of Kevin McCarthy layered over reporters shouting at him? I think it's real, but I don't know, because it is easy to fake.

And yeah, that is going to be one of the challenges: people not being able to know what is real and what is fake. If anything, the highest premium a candidate has now is being viewed as honest and authentic, someone who can look straight into the camera, doesn't talk in talking points, and seems real.

I could say Sherrod Brown is able to do that. Bernie Sanders is able to do that. Joe Biden is able to do that. And that authenticity is the one way to be able, in real time, to call out something that's fake and then make fun of the people using it.

There really aren't going to be laws and standards, the way things are now, that can really enforce this. It's going to have to come from people who built their careers on not sounding scripted, on talking like a human being, on leadership, on being authentic and genuine leaders who listen and are empathetic. Those are going to be the hardest people to make these fake videos about.

But beyond the obvious fakes, like the example you showed earlier that we did, which is clearly not real, it's the things that seem very real but a little bit off that are going to be the biggest problems.

SHEFFIELD: Well, and I suspect that among the regulations regarding the use of AI that we're going to see, some of the earliest are going to be about whether you can make fake videos of a candidate. Imagine if somebody had made a video pretending to be Joe Biden secretly ordering a nuclear strike against China, or whatever it is.

We're going to have to have regulations on at least some of that stuff, and I imagine those are going to be some of the earliest, in addition to whatever others come along.

CLERMONT: I mean, people freaked out in 1984 when Ronald Reagan, during a sound check before his weekly radio address, made a joke that the bombing of Russia was about to begin.

He was just laughing, making fun. That audio came out, and it was several days of really negative press. He had to apologize; he was just joking. And that was real. You're a hundred percent right: if someone makes synthetic audio saying something like that, people are going to overreact, because even if you think it's fake, what if it's not?

It was hard enough then for Reagan to walk back from that very analog situation, and he did say it, even if he didn't mean it. The difference now is the question of what's possible. It's like the Dick Cheney formulation: if a terrorist attack has a 1% chance of happening, we have to treat it like it's a hundred percent certainty. If there's a 1% chance that a fake video or fake audio might be real, do we respond to it like it's real or not?

Those are going to be serious, serious challenges that we have going forward.

SHEFFIELD: Yeah, I think so. But to approach this from another angle: one of the other debates that AI, and ChatGPT in particular, has brought out is a discussion about whether there is bias, quote unquote, in what ChatGPT makes.

It harkens back to this larger epistemological debate that the right and the left are having about liberal media bias. It's that same thing all over again, but in the AI sense.

You also looked at that in your survey. You asked people whether AI systems would be designed in a way that will be politically biased, and 53% of the respondents agreed with that. Then you broke it down further by political leaning. 64% of Republican leaners and identifiers said the systems would be biased toward liberals, only 3% of Republicans said there would be no bias, and 14% said there would be a bias toward conservatives.

On the Democratic side, 55%, the majority of Democrats, said they didn't know, and only 25% said there would be a bias toward Republicans. And among independents, 29% said AI would be biased toward liberals, and 14% said biased toward conservatives.

So this is really kind of a mirror of the way the polling breaks down when you ask people about the media: is it biased toward liberals or conservatives? Republicans are overwhelmingly convinced that everything is against them.

And I will say, for people who are just coming into this episode, that we are going to be talking about the epistemic and philosophical implications of this. But we've also done some other episodes, which I will link to, that you can check out, about this sort of crisis of knowledge among Republicans.

People who tend to identify as Republican nowadays tend to be overwhelmingly motivated by religion. Even if they don't go to church, religion is their identity; it is an identity politics for them. And we're seeing it here, I think, in your survey: they're very concerned that everybody's out to get them. What's your reaction?

CLERMONT: That's about right. It is interesting to me that only 8% said outright, no, there won't be a bias. Most people say they're not sure; they don't know. 14% of Republicans did believe that things would be biased in their direction, which I always think is interesting. But it mirrors when you ask people which–

SHEFFIELD: Sorry, sorry, it was actually higher. By comparison, only 7% of Democrats said it would be biased toward them.

CLERMONT: Yeah.

SHEFFIELD: So it was double the amount, which was interesting. I thought that was an interesting dichotomy there, but go ahead.

CLERMONT: Yeah, no, I think that's interesting too. Democrats overall are less focused on the idea that the media is biased. Most Democrats don't necessarily think that it is, or that there's some bias toward conservatives.

But when you ask Republicans, is the news media trustworthy, the answer is overwhelmingly no. When you ask, is Fox News trustworthy and unbiased, the answer is overwhelmingly yes.

We've asked people in detail what media they consume and whether they trust the news media overall, and then asked trust questions for everything they consume. Most people, Democrats and Republicans, trust the media that they consume, with the exception of Facebook. People don't trust the news that they see on Facebook, both Democrats and Republicans.

SHEFFIELD: Hey, good idea.

CLERMONT: Yeah. And I've not looked at it recently for Twitter since its ownership change, but I would note that in our poll the overall impression of Twitter was fairly negative. But I think that's right. People do believe that the other side will be unfairly advantaged or that there's bias, and that's really been conditioned on the Republican side for generations: that the news media is biased.

So the expectation would be, especially with prominent Republican leaders like Donald Trump and Josh Hawley and Ted Cruz denouncing tech and the tech industry fairly consistently and fairly aggressively, that Republican voters would be the ones more likely to believe there'll be political bias against them in AI software.

SHEFFIELD: Yeah.

CLERMONT: And there’s nothing comparable on the left or the Democratic side pushing that.

SHEFFIELD: Yeah. Which is really interesting to me, because when we're talking about what people want done about AI in terms of regulation, it fits into a larger technology policy area where the Democratic voter wants more leadership, more regulation, more concern, more discussion about these things.

But when you look at Republican elite rhetoric versus Democratic elite rhetoric, it's Republican elites who are talking about this far, far more. And I think that not only does it fit within this media bias context, it's also a way for Republicans to try to portray themselves as populist, because that is basically how Republicans have been able to win elections continuously since Ronald Reagan: by saying, 'Actually, yeah, even though the billionaires are the people who support us, we are the populist party. We're the working party, because we attack these Big Tech elites, because we attack Big Hollywood,' or whatever.

It's literally a copy-paste of that same earlier strategy. And yet the Democratic leadership class, say President Biden or Chuck Schumer, don't seem to be engaged or to understand that their voters want something else from them. They want them to step up on this topic. And it's creating a real opening for Republicans.

CLERMONT: I think that’s fair.

The challenge that exists on this, in the relationship between the federal government and particularly the technology industry and technology companies, is that both parties are reliant on people in this field for campaign contributions. Ron DeSantis is going to the Bay Area relatively soon, as was reported in Puck, to do an event hosted by one of the founders of the data firm Palantir.

And yeah, there are levels of hypocrisy in 'We're going after tech, we're going to be the party of the working man,' using that as a wedge while at the same time raising money from allies in that same industry.

This is a conversation that goes way beyond this topic, about money in politics, which is ultimately the issue we need to address on the corruption side: the way we finance our campaigns, who is financing them, and the campaign finance regime that the Supreme Court created.

So yeah, there's a lot of political benefit on the Republican side in going after tech while at the same time fundraising from it.

There really isn't a counterbalancing effort on the Democratic side to have a serious examination of the role of technology. What are the right regulations? What are the right things we can do to benefit the most people while maintaining entrepreneurship and people investing in solutions that are going to make a difference in people's lives?

There really isn't space right now in our politics for that type of discussion, certainly not on the national level, and certainly not in what's covered on cable news or in a political debate. And as the primary season begins, all the incentive for Republican candidates is to do this dance of simultaneously raising money from people who work in the technology industry while bashing it, each claiming to be the one who will go after Big Tech the hardest.

That's going to be a key driver of the Trump campaign, the DeSantis campaign, and others who get in this race.

SHEFFIELD: Yeah. Well, and it's interesting, further, that when you actually look at the political contributions or the rhetoric of the people who are particularly AI-focused, they do tend to be kind of stereotypical Silicon Valley libertarians, right-wing Ayn Rand fans like Peter Thiel, the chairman of Palantir.

Like Elon Musk, who gave millions of dollars to OpenAI but now says he wants to make another one that’s going to be anti-woke, quote unquote.

But then you've got other ones. There's this whole idea of "Longtermism," and we're going to do an episode about it on the show, which is kind of like a blend of Ayn Rand plus Scientology, a very bizarre right-wing pseudo-religion. Sam Altman, who is the head of OpenAI, seems to be aligned with some of these ideas.

And a bunch of other people in this field have this idea that basically they're trying to get people to focus on computers that are going to destroy us all because they'll be so smart, like Skynet and the Terminator. But they don't want people to focus on the here and now: the implications of the technologies that currently exist, the way they're being used, who's using them, and how the money is getting distributed. They don't want people to talk about that stuff.

They want them to think about this fanciful sci-fi concept instead. It's really curious.

CLERMONT: Well, you're talking about something that I was not aware of until just now, but I will have to read more about it. That'll be my weekend work.

When we're thinking about responsible AI: we've always been operating in universes with AI on some level, like the recommendations on Netflix and the Spotify music lists. A lot of the bands that I'm listening to now are something I owe to the AI at Spotify and the AI at Sirius XM radio.

But the important thing, I think, at least for talking about AI in a way that's actually going to persuade voters that it will have a positive impact, is to spell out the benefits and the worries, to be as specific as possible about what the benefits of the technology will be and what you're doing to protect consumers and the public.

My parents lived in San Francisco, and in 2018, when the electric scooters were just being launched, the companies just dumped scooters everywhere in the city, and it was chaos. They were on the streets and sidewalks; the government didn't know what to do. I think there was some sort of collective that came and put them in a pile and burned them in front of the Facebook bus. The equivalent of just putting stuff out there and having the government catch up later is going to be too damaging with AI.

So at every possible level, we need to be having the discussion of what the benefits of the technology will be and how consumers and the public are being protected. And honestly, the best way to talk about this and understand it is to just ask ChatGPT what the challenges of AI are.

As one of your previous guests noted, it's just predicting language, so it knows what to say. But the people supporting and promoting it need to actually spell that out and show that level of concern.

And again, Washington can't just put this on a plate with a bunch of other issues. Have many hearings on this topic. Bring in experts. Take up the offer from the Ford Foundation and other nonprofits and advocacy groups that want to bring technologists and people who understand this technology into government, and have a whole-of-government examination.

I think it's incumbent on us to discuss this, and this is why this show is great and why I thank you for having it. It's up to all of us to understand these products better, both people who work in politics and media and people in general. Don't just wait for it to happen and hear from friends what it does; really try to understand this in a way that we didn't for social media, and didn't for a lot of the other things, like automation, that we've been processing as a society for the last two decades.

This is too important. Part of doing this whole poll was allowing me to educate myself more on what people think and how this is going to be received politically. But it's going to be incumbent on everyone in media to really talk about this, understand it, and have the types of serious discussions that we've had in this last hour.

So I really thank you for that.

SHEFFIELD: All right. Yeah. Well, that's definitely what we're trying to do here, so I do appreciate you joining us as well. You are on Twitter still, at least until it burns to the ground.

CLERMONT: Until it burns to the ground, and even after it burns to the ground. I'm @sjclermont on Twitter, and Change Research is @changepolls and at changeresearch.com.

We poll regularly on all sorts of political and non-political topics. And yeah, engage with us any way that you can.

SHEFFIELD: Cool. Well, I’m glad to have you here.

So that's our program for today. I appreciate everybody watching, listening, or reading. Thank you for joining us. And if you like what we're doing, I encourage you to go to patreon.com/discoverflux, where you can subscribe to the show and get full access to every single episode.

This one was a free one, but we do encourage everybody to get full access. You get the audio, transcript, and video of every episode if you do.

And if you're a person who prefers Substack, you can go to theoryofchange.show, where there's a Substack form you can put your email into to subscribe and get full access that way as well.

And then I also encourage everybody to go to flux.community to get more articles about politics, media, religion, and technology, and how they all interrelate.

These in-depth conversations about important subjects that we're bringing you are ones you can't get in regular corporate media. They're too in-depth; they're not about the soundbite or the 15-second clip. So we need your help to continue doing these and to make the show sustainable.

And if you can’t subscribe, then please do share some of the episodes.

Give us a nice review on Apple Podcasts or wherever else you're listening to podcasts. That actually is really helpful: it helps people find the show and lets them know that it's worth their time. So I do appreciate you doing that for us. Thanks very much.