Audio

Episode Summary

The internet is so commonplace that we rarely step back and think about how completely different today’s information climate is from what it was even 30 years ago. News travels in seconds and oftentimes percolates even beforehand in the millions of tiny online communities that live on their own websites or within larger social media platforms like Facebook. Even elementary school children know how to easily sift through the world’s knowledge.

But as much as things have changed, things have also remained the same. Humans are still finite beings who don’t know everything, despite what you might hear on Reddit or Twitter.

Because our limitations still remain even as our technology has improved, there is the challenge of misinformation and disinformation, but are those even the right words to describe what we’re talking about? Is it possible to make algorithmic distinctions between innocuous errors and harmful delusions? Are content recommendation algorithms biased, and is it even possible to have unbiased ones? These are among the many critical questions that are now being asked by people across the political spectrum.

People are right to be concerned about the moderation choices made by giant platforms like Facebook, TikTok, and YouTube. Unfortunately, however, many of the most prominent voices in the discussion, including Twitter owner Elon Musk, appear to be acting out of partisan motives rather than concern about intellectual hygiene. This is a complex and fraught subject, one that is just as much about epistemology as it is about technology.

Joining the show to discuss all this is Renée DiResta. She is the research manager at the Stanford Internet Observatory, an organization that focuses on internet content and moderation.

The video of our June 9, 2023 conversation is below. A machine-generated transcript of the edited audio follows.


Theory of Change #074 Renée DiResta on the economics and epistemologies of content moderation


Transcript

MATTHEW SHEFFIELD: Welcome to Theory of Change, Renée.

RENEE DIRESTA: Thanks for having me.

SHEFFIELD: So let’s start there. What is the Stanford Internet Observatory for those who don’t know what it is?

DIRESTA: Yeah, so we are a research center within the Stanford Cyber Policy Center. And we study the use and abuse of current information technology.

To get specific, that means basically four buckets of stuff. The first is trust and safety. So sometimes that’s child safety. Sometimes that’s ways to think about mental health or suicide and self-harm in various ways that individuals experience the internet. Then there’s information integrity.

This is the kind of mis/disinformation stuff you were talking about. Then there is emerging tech. So we have a lot of work that we do on AI and understanding how that kind of changes the playing field. We tend to think about the internet very holistically and we’re very interested in understanding when something changes in one part of the system, what happens to the rest of it.

When a new feature is introduced, what happens after that? So that’s the kind of work we do here. And a lot of the outputs relate to policies. So sometimes we might say, this is something that we could change, and here is what a change might look like.

SHEFFIELD: Mm-hmm. Okay. And then the other thing about it is that this is actually a very new field, and you personally do not have any sort of formal academic certification in it, because those things did not exist when you got into it. Tell us about your background in this space here.

DIRESTA: Yeah, so I studied computer science and political science undergrad. So I did CS [computer science] and I had a pretty big concentration in applied math, graph theory and things like that. But I graduated college in 2004, so I missed the entire social network experience, thank God, when I was an undergrad.

So as you note, I think for me, I got into it when Twitter became popular and when Facebook groups began to be a thing. And for me, those were the two moments when I felt like, gosh, the internet just got really interesting.

I was a new mom at the time. It was around 2013 and I started paying a lot more attention. When you have a kid, I don’t know if this happened to you, but the entire internet changes. All your ad targeting is different. All of a sudden, the groups that you’re recommended change; the kind of content that you see really shifts. Because all the algorithms recognize every time you post a picture of your baby, they can even intuit the age of your baby, right?

So there are ways in which your experience of the internet really foundationally changes at certain times. And for me it was around 2013 and I just started paying a lot more attention to what was the recommender system showing me? Why was it sending me to extremely crunchy parenting groups?

And then when I joined them, how did I get recommended into anti-vaccine parenting groups? And then if I join those, well shit, now my recommendation system is like, you might like flat earth stuff. And I was really interested in just like the kind of correlations and the network relationships that were implied by the stuff that I was seeing.

So that was kind of how I got into this.

SHEFFIELD: Okay. And it’s a very complicated topic because as much as it came into the forefront of the media and technology worlds, let’s say roughly 2015 to 2017, these are, as I said in the intro, challenges that we’ve always had as humans. So you’ve written a lot of different things on this topic of information and propaganda. But one of those is an essay you published a few years ago which you called “Mediating Consent.” Now that was a reference to an earlier work. Let’s discuss the context behind that essay and then what you were saying in there.

DIRESTA: Yeah, so the essay was part of a series that we were doing. Ribbon Farm is the blog that it’s on, and Ribbon Farm is a community of people who have an interest in technology and how it impacts society. And we’d actually had a little mini conference just amongst the readers, Facebook group members, and Discord members, the people who were part of the community. It’s the fourth essay in a series. What I was really interested in was the question of how speech had evolved on the internet and, alongside speech, how propaganda had evolved.

And the reason I was interested in propaganda is that I had spent a large part of 2018 looking at this data set that the tech platforms had turned over to the Senate Intelligence Committee. And the data set was attributed to Russia. And so I was very interested in this idea of how state actor propaganda had changed, through the work that I was doing looking at how they had used the internet to run novel propaganda campaigns.

But I was also very interested in the fact that anybody could do this all of a sudden. And so manufacturing consent is a reference to Noam Chomsky, that’s how most people know it. It’s actually a reference to Walter Lippmann, going further back in time. Chomsky used this phrase by Lippmann as the title of his book. Lippmann alludes to essentially the function of propaganda being to manufacture the consent of the governed.

And in 1922, he doesn’t mean this as a pejorative. He means it as: it is actually the obligation of the government to make the citizenry come to some sort of consensus. So this is, again, a 1922 opinion. And then Chomsky takes that phrase into the modern era in the 1980s, and all of a sudden you have mass media now, right?

So you have television, radio, you had these technologies that were not very prevalent back then [during Lippmann’s time]. And so I was interested in this question of: okay, now we have the internet, right? So now we’re one technological leap forward, almost 40 years in the future at that point.

And then we also have a democratized system, so anybody can do it now. So this is very, very interesting. And so the essay explores this question: when information control is no longer top down, when media is decentralized, when the public is fragmented, what does it mean to have a consensus-making environment? The government is not going to do it for us. The public is fragmented. So what happens in the future? That was the point of that essay.

SHEFFIELD: Okay. And that was the point of it. But I mean, what did you say? What was your answer?

DIRESTA: Well, at the time, it’s funny to look back on it now. I think I gave a very dissatisfying answer. I think I said, well, we need a new digital federalism, right? Like, what does it look like? Because the point of consensus is that you have an ability to make decisions as a society collectively. And when that is no longer guided by a government that most people trust and a media that most people trust, when that has all fragmented, how do you achieve consensus?

And so I thought about federalism as an interesting model for that because that’s kind of like the Great American experiment, right? Are there certain things that you make the realm of the individual state or community, or are there things that have to sit up at the top level?

And at the time, in 2019, decentralization on the internet was not something that was very much a part of the popular conversation. Now, four years in the future, I would possibly answer differently. I would still talk about federalism, but I would say that now there really is momentum towards adopting a digital federalism in a way that in 2019, at the kind of peak of big tech, would’ve seemed inconceivable.

And so my sort of failure of imagination then was not realizing that actually decentralization at a much more structural level was going to be the answer to that question as opposed to what I still thought, which is that everyone would still be on the same platforms, and we would find some way to coexist.

I think now it is actually going to be much more of that structural evolution into people moving into federated environments with their own moderation structure. And I don’t think that’s necessarily a good thing, but I think that that’s where we’re headed.

SHEFFIELD: Well, and why do you say that’s not a good thing?

DIRESTA: Because it feels like a capitulation actually, because it feels like we’re saying, okay, we can’t all coexist on these platforms with approximately the same type of centralized moderation; we all need our own things. Maybe this is a very American perception of it, but it feels much more like giving up to me in some ways.

I don’t think it’s bad. Right. I really enjoy Bluesky. I really enjoy Mastodon. I think we even met on Mastodon, if I’m not mistaken. So I think that it’s not bad. It’s just that we do still have to actually come to consensus. I described it at this Ribbon Farm conference as like tornado reality, right?

Which is: there has to be a way that you can come to consensus about massive natural disasters and things like this. Climate change is an example that people point to a lot. But where it doesn’t matter if you think a certain way and somebody else thinks a certain way, I think Benjamin Bratton put it as the revenge of the real, right?

It doesn’t matter. There are going to be these moments where reality doesn’t give a shit if you believe in it or not, or what kind of consensus you want to come to about it. There are just going to be these sorts of forcing functions, and those are the moments where you need functional institutions.

Those are the moments where you need kind of mass consensus. And so even as we retreat into our more federated universe, and that creates more pleasant days for us in the day-to-day, it does sort of feel like we’re losing our capacity for those big overarching consensus reactions, our consensus capacity.

SHEFFIELD: Well, in one sense I agree with you that that might be lamentable. But on the other hand, is it that we were expecting technology to do something that it never was capable of? In other words, that consensus in a democratic republic is supposed to be an electoral thing rather than a media thing?

DIRESTA: I think this was the great Lippmann-Dewey debate, right? This was the whole question: to what extent does the public exist? To what extent does the public really drive democracy? Is there a public, or is the public fragmented? Now, I feel like the answer is kind of unequivocally yes.

The public is very, very fragmented, in fact. But there was this idea that media’s role, in fact, was to serve as that consensus maker. And that’s what Chomsky highlights throughout all of Manufacturing Consent. He argues that the incentives of media are crappy, right?

That the incentives of media lead to bad outputs, but he doesn’t argue against media. That’s the thing where I think popular perception of the book diverges from the structures he actually lays out in there. And I’m writing a book myself, so I’ve spent a ton of time just on these random interviews other people have done with him.

And he does regularly articulate that he didn’t intend it to be a critique suggesting that there should not be journalism or should not be media. He was articulating that the incentives were bad, and I think in some ways the incentives actually got worse.

SHEFFIELD: Yeah. They definitely seem to have gotten worse.

And I think you’re right that it may be hard to see, for a lot of people who aren’t into this stuff on a daily basis like you and I are, that the move is toward some sort of information decentralization. But I think that’s right, that we’re headed that way.

But there are a lot of different ways of doing that. Recommendation algorithms, for instance, are a form of centralization. So in one sense, it is the case that, as currently construed, the algorithms on Facebook, TikTok, Twitter, everywhere, are literally constructed to keep you on the platform more.

But then on the other hand, if you removed the idea of recommending content to you and just made it some sort of, I don’t know, purely tag-based and not audience-based feed, or a time-based, reverse-chronological feed, you’re still going to have different challenges from that algorithm.

So it’s almost like you’re damned if you do and damned if you don’t, right?

DIRESTA: I think that’s really it. There are always going to be pros and cons. The thing that kind of captivated me about Manufacturing Consent, and I wrote another essay about it just last week or something like that,

is that what he keeps going back to is: whatever your incentives are, that’s going to shape your outputs. And that’s not a crazy idea, right? That’s true of basically every single major business and industry, all of it. It’s just that thinking about media in those terms, I think, is not an innate thing.

And a lot of my work over the years has been highlighting how some of the incentives work. I was ranting about recommender systems back when, as I was describing, they were telling me ‘you might like this flat earth group.’ I was like, come on. That one was funny, right? Flat earth is funny.

It was pushing me backyard chickens for a while when I had a baby, because obviously you have a baby and then you get your chickens, right? But the thing that was so funny about it to me was that it’s very revealing of other people’s preferences. The recommender system is showing it to you because it’s saying you are statistically similar to these people.

So I am in some way similar to the backyard chickens people. But then when you accept the nudge, it gives you more and then more and then more. And with the recommender system, as funny as it was in 2013, by 2015 it was Pizzagate. By 2017 it was QAnon. And so this was where you did start to see the extent to which the incentive of the platform literally reshaped human society from a network standpoint.

So instead of me going on Facebook to see my friends and talk to my friends, I went on and it gave me new interests, if that makes sense. Things I had never thought of; it never would’ve occurred to me to type in backyard chickens. That’s entirely a function of a nudge. And I do think helping people understand the kind of impetus behind those nudges, and to maybe be a little bit more attuned to why you might be seeing something or how you react when you do, is the kind of media literacy that we need in this particular environment.
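What DiResta describes here, “you are statistically similar to these people,” is roughly how collaborative filtering behaves. Below is a minimal sketch of the idea; the users, group names, and scoring are all invented for illustration and are not any platform’s actual system:

```python
# Toy sketch of similarity-based group recommendation (invented data).
from collections import defaultdict

# Which groups each hypothetical user has joined.
memberships = {
    "user1": {"crunchy parenting", "backyard chickens"},
    "user2": {"crunchy parenting", "anti-vaccine parenting"},
    "user3": {"crunchy parenting", "anti-vaccine parenting", "flat earth"},
    "user4": {"anti-vaccine parenting", "flat earth"},
}

def recommend(user: str) -> list:
    """Suggest groups joined by users whose memberships overlap with yours."""
    mine = memberships[user]
    scores = defaultdict(int)
    for other, groups in memberships.items():
        if other == user:
            continue
        overlap = len(mine & groups)   # shared groups stand in for "similarity"
        for group in groups - mine:    # groups you haven't joined yet
            scores[group] += overlap   # weighted by how similar the other user is
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("user1"))
# ['anti-vaccine parenting', 'flat earth']
# Each accepted nudge updates your memberships, which makes the next,
# more extreme group score even higher on the following pass.
```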

SHEFFIELD: Yeah. Well, and actually it might be nice to have some public disclosure on these nudges, something that says: of the people who looked at this content, 45% were also interested in this.

And I think that might make a lot of people horrified to some degree when they see this stuff. Because one of the other, maybe less perceived aspects of this nudge and recommendation system is that because everybody gets different recommendations, if you are somebody who might be vulnerable to online radicalization of one kind or another, your friends or family members are not seeing the same internet as you.

And so as a result, you also don’t know that what you’re seeing is not the same thing as what everyone else sees. But the point being, though, that people can’t really help you because they don’t know what you’re seeing.

DIRESTA: I think this was very interesting for me during the pandemic in particular when I think most of us were probably on our phones constantly.

I had a new baby, so I was always on my phone, always on Clubhouse, just trying to get through the first couple months of being up at odd hours and things. And I would be in some of the small chat groups where there were a bunch of us with very diverse political opinions, and we were all just in these very small groups, not public at all, but the things that they would share versus the things that I would share were just foundationally different.

We just had completely different experiences of the kinds of content, the kinds of conversations that were taking place about lockdowns, about masks, about vaccines. And what I did start to notice, what I really did begin to feel, was that there was this rift that just continued to grow.

And I remember one day somebody throwing a post into the group, and I was kind of like the only liberal in that group, and saying, aren’t you mad about this? And I was like, I don’t even see it. I don’t think you understand. This outrage issue that you are seeing, that you are absolutely immersed in, that you are constantly just stewing over, is literally never in my feed.

I never see it, and no, I’m not outraged by it. But also, I never see it, and that’s actually the more important thing. I think unless you have those conversations with people, unless you have people in your life, very close friends or family, where you get some little glimpses into it, people don’t realize just how divergent the kind of bespoke realities are at this point.

SHEFFIELD: Yeah. Yeah. And it’s a problem, because I think for a lot of people, when you try to broach the idea of misinformation on the internet to them, it just isn’t real. They don’t see it in their own lives. And so they have trouble believing that it could be this big problem.

I mean, for instance, I think there were a lot of people who were shocked when there was a Harvard Berkman Klein Center study after the 2016 election which showed that the far-right blog Gateway Pundit was much more popular on Facebook than even Fox News was.

And that came as a huge shock to a lot of people. But the reality is almost no one is aware of that. In fact, even now to this day, they don’t know that, and so it does make it a little bit more of a challenge to try to educate people about this, because we all do live in a different internet from each other.

DIRESTA: And that I think is that consensus problem, right? If you see foundationally different things, if the things that are pushed to you are things you are supposed to be outraged about, because a lot of the incentive structure for the new media is to just keep people perpetually outraged.

Everybody blames it on the tech algorithms, but it’s not only the tech algorithms; it’s the content creators too, right? Because that’s how they produce things that the algorithm will boost. So this is media theory 101, right? The structure of the system influences the substance that moves across it.

And you create things, particularly if you’re a small, kind of like media of one, or an influencer or something like that, you create things because your job is to grow an audience. That’s actually your job, because you need to grow the audience in order to monetize it. It’s not that you’re a talking head on a broadcast channel that already has an audience.

Your job is that audience growth. And so you are incentivized to produce content that an algorithm will boost. And my favorite example of this that is completely nonpartisan is actually those horrific Facebook cooking videos which are now like a whole class of content on TikTok, but you might remember this.

It’s like two white ladies and a can of SpaghettiOs and a pie crust on a counter. It is totally fucking gross. Or like rainbow sorbet in a toilet. And they’re all tied to this magician named Rick Lax, who’s out in LA with a massive network of creators. But one day I started seeing these things on the Facebook Watch tab, and it was constantly pushed to me, and I was captivated by the comments, right?

Because everybody was commenting, which tells the algorithm to give you more of it. And so Rick Lax really just nailed this kind of provocation. And then, when people got kind of sick of the cooking videos, he would put out a video and the title would be ‘when she saw it’ or ‘when he opened the door,’ right?

And it was just the one short snippet. And you can actually download a whole data set and see him evolve this over time. And you realize that the creator is creating content not for the audience, but for the combination of the audience and the algorithm, right? They’re creating content for the feedback loop.

And that’s because very few people followed these pages, the creators’ actual pages or whatever. But because the Facebook Watch tab was putting it out, these videos had tens of millions of views. And you can monetize on social media. So if you can make the content for the algorithm, that feedback loop is really the incentive, that relationship.

You’re making content for the algorithm and the crowd, and these two things also kind of feed each other.

SHEFFIELD: Yeah. Well, and to that point, there’s been so much buzz and hype about ChatGPT and generative AI as well, how they’re going to be set loose and take over the internet.

But I think the reality is that we’ve already been on an AI-dominated internet for quite some time. It’s just that most of it was on the backend so people didn’t see it. And then it was always there on the fringe, like you were talking about with some of these strangely generated videos.

I mean, it was a thing on YouTube for a number of years that people were making AI-generated videos for babies. Those were...

DIRESTA: I remember those.

SHEFFIELD: Yeah, that’s right. And so in that sense, when people talk about the impact of AI at this point, it’s like, guys, we’ve already been there.

You just weren’t watching baby videos.

DIRESTA: Remember like the Mickey Mouse snuff films? I mean, you had to really watch your kid. I remember finding Blippi because I thought, oh my God, this man makes so much content. I can just turn on a Blippi stream and I can go take a shower, and I trust that there will be no weird Mickey Mouse snuff film that will pop into this if I just play this whole channel.

So anyway, yeah, it was very different between 2013 and now. That was in part, actually, though you might recall, because these moments that people who were extremely online would see would occasionally leak into the normie sphere when something truly gruesome would get surfaced. And then somebody would write an article about it, like Slate would write an article about it, or it would be on Good Morning America: ‘Mom reports child sees Mickey Mouse get beheaded.’

And that would be the headline. And so the platforms did begin to try to create these more sanitized spaces, again in response to kind of public outcry about some of the real bad stuff. But I think the other thing is that generative AI, so social media democratized dissemination, right?

Democratized the access to an audience. But the new tools that are out now, and I’ve had GPT-3 access for a couple years because we had a research account, so I’ve been messing with it for a while. But ChatGPT I think really made it obvious to complete normies, so to speak. Like, oh my God, look at what this can do. And then the same thing with Midjourney, right? And so all of a sudden it democratized creation, because those weird AI videos did still require some kind of technological capacity, right? I wouldn’t have known how to go make a creepy Mickey Mouse video.

My assumption for a long time was that there was some weird shop, some weird content farm in China or something like that, that was doing it. But what the current crop of AI tools does is democratize creation. So dissemination was the last 10 years. And creation, that’s the inflection point now.

SHEFFIELD: Hmm. Well, and the other thing also, people talk about moderation on social media a lot, but the reality is that 99% of it is non-human. And it has been that way for a long time. I don’t think people are aware of that. Do you think people are aware of that?

DIRESTA: I think, again, there are these stories that break through every now and then. I still get asked a lot about bots, which is sort of funny, because I think bots are actually going to have kind of a resurgence; I think bots version 2 is kind of on its way. But in 2015 it began to become obvious that bots were what made things trend on Twitter, and any activist in 2015 knew that, right?

That automation was what people used to drive trends. Twitter cracked down on that, right? Again, because all of these things were trending all the time. And it wasn’t that they were suppressing the trend. What they started to do was crack down on low-quality accounts in the trend, and ‘low quality’ at the time, in 2015, referred very specifically to these automated accounts.

So I think a lot of it, and you might recall this also, this was around the time when it began to be turned into some kind of partisan thing: ‘my trend was suppressed,’ when what Twitter was trying to do was move to this more behavioral model of trying to prevent trends from being a thing that was gameable if you had the best automation.

But after learning that automation was a thing, and after hearing about things like the Russian bots, the idea really began to captivate the public in a pretty remarkable way, to the point where what I started to see was people accusing other people they didn’t like on the internet of being bots.

Right? If somebody had an opposing political point of view and was kind of a troll, they would accuse each other of being bots, as opposed to recognizing that two factions had just encountered each other in a contentious hashtag. So you would instead see people accusing the other of being some sort of automaton.

And now ironically, we are actually at an inflection point where bots can be phenomenally effective because of things like textual generative AI.

SHEFFIELD: It does remain to be seen how much of an impact this will have, because it is clear that they can impersonate a human pretty well. And the only way to really stop it, actually, is through automation. So bot versus bot is actually how it’s going to be.

DIRESTA: I did kind of wonder about that. You have things that are trained on the open web, right? Or trained on corpuses of Twitter data, corpuses of ostensibly human-created text. What happens with the next generation, and the generation after that? As the prevalence of generated text gets higher and higher, I am kind of curious to see.

You have bots trained on the content from prior bots, so it’ll be, you know, a little weird shift.

SHEFFIELD: Yeah. Yeah. Well, and I think at that point, it will become a thing to say on your website, we do not have any bot generated content on this website.

DIRESTA: Right.

SHEFFIELD: And of course I don’t know how anybody would verify that, but maybe people could do that. Who knows. But yeah, I think it will in some ways actually make things more human and more authentic, because we will have come full circle in a sense. Or I hope so, anyway.

Alright, well, the thing is, though, AI is necessary for moderation. But the reality is, of course, the way that you do it, the way you train it, does create all sorts of problems. And some of them are political and epistemic, but others of them are technological.

So, for instance, Gmail. Google has put in a thing where if their users do not read messages from somebody or an organization, Google treats it as if it’s spam. Even though a lot of people will go and subscribe to something, or they’ll have a server send them a notice or something.

Like, they want the notice from their server. They don’t always have to look at it. They want it to be there every day, though, because they know there’s going to be a problem if it’s not there. But Google doesn’t have any sort of metric to account for that. And I personally have been hit by that, because I had somebody subscribe to a feed of Twitter posts that she was looking at at first.

Then she stopped looking at it, but it was being forwarded from my email server. So then Google started thinking that my email server was spam because she never read her notices.
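Google does not publish how this signal works, so what follows is purely a toy illustration of the mechanic Sheffield describes, a sender-reputation score driven only by open rates; the function, weights, and thresholds are all invented and are not Gmail’s actual algorithm:

```python
# Invented illustration of engagement-based sender reputation. NOT Gmail's algorithm.

def sender_spam_score(delivered: int, opened: int) -> float:
    """Return a 0..1 spam likelihood for a sender, based only on open rate."""
    if delivered == 0:
        return 0.5          # unknown sender: neutral
    open_rate = opened / delivered
    return 1.0 - open_rate  # never opened -> looks maximally spammy

# A subscriber who wants the server notices but never opens them:
print(sender_spam_score(delivered=200, opened=0))  # 1.0
# The heuristic cannot distinguish "unwanted" from "deliberately
# subscribed to but rarely read," which is the failure mode described above.
```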

DIRESTA: It’s funny you mention it, because politically there was a whole battle over this. Do you remember? I think it was the Republican party, if I’m not mistaken, that went first, complaining that candidate fundraising emails were going to spam. And I was like, heck, they kind of belong there.

I mark the Democratic ones spam when I get them, because I didn’t subscribe. If somebody’s selling your email address and hitting you up for money, spam is probably the right place; that’s a system working as it should. But you do see these unintended consequences. Again, it’s what is in your feed, what do you see, what email have you opted into, right? So I think what they’re largely trying to do is keep out things that they think someone has sent you more involuntarily. Maybe this is the kind of thing where you click to indicate that you’re serious, or something like that, to activate, or something along those lines.

Yeah, but it is... but I mean...

SHEFFIELD: sorry, but it, I mean, just like, imagine somebody is trying to, they’re college student, they’re about to graduate. They didn’t get an internship for whatever reason, and they’re out there thinking, oh shit, I got to get a job. Let me send resumes to companies.

And those people don’t know who they are. Basically, the way that email is now, more than likely none of those messages is going to reach anyone. And that’s a problem. And I don’t think that’s something I have ever really seen discussed anywhere. Have you?

DIRESTA: Not much, no. I think, again, as AI improves, you’ll start to see those kinds of conversations become more possible.

I mean, there are certain things where, every now and then in my work, I’m like, wow, I really thought that would be low-hanging fruit. And yet here it is. We published a study on child safety this week, and recommender systems were connecting accounts for child exploitation.

This is the sort of thing where you’re like, all right, you kind of think that certain things—

SHEFFIELD: Like they wouldn’t be doing that.

DIRESTA: Yeah. Right. And then you’re like, no, here it is. All right. So I think AI can be incredibly useful, or I think will be incredibly useful, in very, very interesting ways.

So, content moderation has a lot of problems with things like false positives, where the system is not contextually sophisticated. So if I used the word ‘bitch,’ for example, I might be using it to insult somebody, or I might be using it because I refer to my friend that way as a term of endearment.

And then there are certain words where certain communities will use them very positively, kind of like in-group language, whereas other communities will use them as epithets or insults. And so I do think with things like large language models, you can have a much more sophisticated, nuanced capacity for thinking about how certain communities use language.

So with NLP, there are certain ways in which we can do a lot more to avoid some of the cruder problems, like keyword-based lists to try to stop harassment. So I think there are ways in which it’ll make things a lot more sophisticated. But then, of course, anytime you change something, there’s a cascading series of events and unintended consequences.
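As a concrete illustration of the false-positive problem she describes, here is a toy keyword filter; the blocklist and example messages are invented, and a production system would be far more elaborate:

```python
# Toy keyword-based moderation filter (invented blocklist and messages).
BLOCKLIST = {"bitch"}

def keyword_flag(message: str) -> bool:
    """Flag a message if it contains any blocklisted word, ignoring all context."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

insult = "You are a bitch and everyone hates you."
endearment = "Happy birthday bitch, love you so much!"

print(keyword_flag(insult))      # True  -- correctly flagged
print(keyword_flag(endearment))  # True  -- false positive: same token,
# friendly in-group usage. Distinguishing the two requires classifying
# the whole utterance in context, which is what LLM-based approaches offer.
```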

That’s kind of why SIO [the Stanford Internet Observatory] exists, because we try to think about that. I think that’s the most interesting part of my job: you make one small change to a system, and what are the shocks several degrees away? So this is where I am very interested in seeing how the AI stuff continues to shape up.

SHEFFIELD: Yeah. Well, and away from AI, though, a lot of these problems, well, let’s say a lot of these sorts of disagreements, are ones that we’ve had as humans literally for thousands of years. Like, for instance, in ancient Greece and Rome, there were entire schools of thought that were devoted to saying that nothing is actually real. How do we deal with that? And one of them, ironically, was a medical school of thought: the Pyrrhonians [pronounced pi-rone-ians]. Do you want to talk about that a little bit, as far as it relates to what you’ve been thinking about?

DIRESTA: I feel like you and I talked about this, maybe it was on Mastodon or something. I actually kind of discovered it, which I thought was pronounced Pyrrhonism [pronounced peer-on-ism], but I realize now that I don’t think I’ve ever heard it said out loud. But I was watching some of these circling-the-drain conversations about: ‘But who can possibly know what is true? How can we know this is true? But who decides? But the media lies, but this person lies, that person lies.’

And I was like, oh, Jesus Christ. These conversations are so tedious. Like, what even is a fact? Jon Askonas just wrote a thing, I forget the title, but it was really great. It was in The New Atlantis, and it was about RIP to the fact, basically.

But with Pyrrhonism, it was the idea that it’s just too difficult to know what’s true, and in order to achieve happiness, you should simply suspend judgment about the truth of all beliefs, because you’ll be a much happier person in life if you stop trying to figure out what is true or false. And so they would go through this process where they basically considered the opposite of every claim, this kind of infinite regress, basically.

And then they would hit a point where you had to suspend judgment, because ‘who is to know’ and ‘who is to say.’ And I just thought, my God, this basically describes half the conversations on the internet today.

SHEFFIELD: Yeah. Well, and it’s interesting also that to a certain extent that actually is true that—of course what is truth, right? (laughter)

But you know, that really is the difference: getting people to understand that there are some things that it’s okay to disagree on, to dispute, and that there are other things that maybe it’s not okay to.

DIRESTA: Well this, this was the tornado reality thing, right?

SHEFFIELD: That’s right. That’s right.

DIRESTA: And the tornado doesn’t give a shit if you believe in it, right? You can question the tornado, but it’s still going to take down your house. So there are in fact facts, there is objective reality, and where that line is, I think, is one of the interesting questions.

I was never much for social constructionism. I always just kind of felt like it gives you a headache after a while, but I think you did philosophy. Maybe you feel differently about this.

SHEFFIELD: Well, I mean, I think it’s tricky basically because, and this is kind of the elephant in the room of content moderation, epistemology is actually the real challenge here in the space.

DIRESTA: Yeah, you’re right.

SHEFFIELD: A lot of our systems, like most modern-day university systems, governmental systems, media systems, corporate systems, all of them were created under an economic framework: these are our facts, these are our numbers, our balance sheet, or these are the bad guys’ balance sheets and here’s how we respond to that.

But now we’re in a place where that’s no longer the imperative, especially because we have managed to achieve some level of economic plenty in the industrialized world. Because poverty is much less grinding than it used to be, and obviously it still exists, I’m not going to say it doesn’t, we have moved up the levels of the Maslow hierarchy of needs as a society. And the higher you go, the more epistemology matters. And it’s tricky because, well, I think you’ve had some people who don’t like that you have brushed up against some of these things. Would that be fair to say, Renée?

DIRESTA: In which area specifically? Like the partisan stuff, or the content moderation work?

SHEFFIELD: Well, I mean, they’re linked. Because, yeah, it is absolutely the case, no matter how upset this makes people, that Donald Trump lies a lot more than literally any other politician at the national level in either party. And accepting that means that if you lie more, you will get fact-checked more. And in my case, I came out of right-wing media, and going against ‘liberal media bias’ in my writing was kind of the basis of how I got into all this stuff professionally. And I used to hate fact-checkers, and I thought they were biased against Republicans. But then I had the thought: well, what if Republicans are just more wrong? And it was not a good feeling at first.

DIRESTA: I’ve been on the receiving end of people being mad at me about issues from both the far left and the right, actually. One of the things that I do find interesting, so first of all, it is very easy to discredit the work if you can discredit the person doing the work.

And so I absolutely understand why the attacks begin to focus on me personally, or whoever is writing the thing, right? This is as old as it gets: if you want to discredit an inconvenient finding, you discredit the person who found it. And in a highly polarized political environment, one way to do that is to insinuate that they are in some way a partisan.

So when I did the work for the Senate on the Russia investigation in 2018, well, first of all, it was really funny, because I did not make any claim that said that the election had been swung. In fact, I laid out over and over and over again: one, this has nothing to do with the collusion investigation.

Two, I don’t think this swung the election, but three, there is no data in here with which I could make a judgment either way. And this is, it’s in the report, you can go read it, but nobody goes and reads the report. Instead, they read the article that someone puts out, or they say, Clinton operative Renée DiResta, who donated to Hillary Clinton in the primary, right?

And they go and they scour your political donations, and then they can turn you into a partisan actor. And naturally, if your report says the Russians supported Donald Trump’s candidacy and actively denigrated Secretary Clinton’s candidacy, which are two things that we support over and over again, like 60 different types of analysis can make that same point.

But if people don’t want to believe it and they hear Renée DiResta donated to Hillary Clinton in 2015, then they can write it off. Because I have been “discredited” as a biased partisan, ergo, the inconvenient finding is something that you can ignore. And then this just happens at a more systemic level.

And so with content moderation, what you see is: the liberals of Silicon Valley are censoring conservative speech. And I want to hold aside one way that we could analyze this, which is the transparency point that I’ll come back to. But a lot of what happens is they become very, very good at taking an isolated incident, particularly incidents where they legitimately made a bad call.

Right? And when you legitimately make a bad call, again, you can point back to that call forever to discredit the entire enterprise. Hunter Biden’s laptop: terrible call. Does Hunter Biden’s laptop mean that all other calls are biased? No, it does not. But if you are a person who believes that there was a concerted effort to make that bad call for partisan reasons, and then those same people are making all the other calls, then it is plausible to think, if you don’t know the ways that content moderation works or the various systems in place to try to prevent that kind of bias from creeping in, that you might just wholly disregard the enterprise of content moderation as biased.

And one of the things that happens is the right wing realizes that this kind of ref-working is very, very effective for them. And as early as 2017, 2018, you start to see Donald Trump begin to put out things like: have you been censored by big tech? Tell us your story here. Right? And they put up a web form.

And so over the years that perception is reinforced, even as academic researchers, not me, other people, using the limited data available, go and find that, in fact, a lot of these conservative outlets have extraordinary reach on social media, because they are so good at connecting with audiences in ways that mainstream and liberal media often just are not.

So they are getting, in fact, more lift. They’re showing up at the top of Facebook’s best-performance-of-the-week kind of charts. They are getting disproportionate amplification relative to the size of their network on Twitter, and study after study after study reinforces this. But if you have a personal experience of having some tweet get moderated, or your friend does, then you begin to believe that all of those much more quantitative analyses are just liberal lefty academic bullshit, whereas your lived experience tells you this other thing.

SHEFFIELD: Yeah. Well, and I mean to the point about Hunter Biden’s laptop, so I actually was there when that story broke.

DIRESTA:  Oh, where were you?

SHEFFIELD: Well, not in the shop, but I mean when it was percolating. So I knew about Hunter Biden’s laptop before it was a public story. And I can tell you that there were a ton of people who I know who were also there looking at it. And we all wanted that story; we were all interested in it. But none of us ended up writing it, because it was Rudy Giuliani who was pushing it.

And at that point in time, Rudy Giuliani explicitly refused to let anyone look at the actual data, to look at the raw data. In my own journalism, for instance, I was the first reporter to use DKIM, that’s email verification, for the audience; basically every email server has kind of like a fingerprint, and you can use it to verify whether a message that was claimed to be sent by that server was actually sent by that server. So I used DKIM to verify that some of the WikiLeaks-posted emails showed Donna Brazile had, in fact, secretly sent debate questions to Hillary Clinton.
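For readers curious what a DKIM check looks like in practice, here is a minimal sketch using the third-party Python dkimpy package (`pip install dkimpy`); the filename is a placeholder, and real verification also depends on a live DNS lookup of the signing domain’s public key:

```python
# Minimal DKIM verification sketch using the dkimpy package.
import dkim

# Load the full raw message, headers included, exactly as received.
# "message.eml" is a hypothetical filename for a saved raw email.
with open("message.eml", "rb") as f:
    raw_message = f.read()

# dkim.verify() fetches the signing domain's public key from DNS and
# checks that the signed headers and body hash still match the signature.
if dkim.verify(raw_message):
    print("Valid: the message really was signed by the sending server.")
else:
    print("Failed: the message was altered, or the signature is bogus.")
```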

DIRESTA: Oh, I remember that, yeah.

SHEFFIELD: Yeah. And so, because I had done that earlier, I was like, well, all right, hey, let’s look at this Hunter Biden stuff. Let me see the data. And I told somebody who had a direct connection to Giuliani: give me the data. Let me see the metadata. Where is it? No, you can’t have it. And I said, well, why won’t you give it to me? Well, because he thinks the story is true on its face and you should just trust him. And I was like, Rudy Giuliani is an alcoholic moron. I’m not going to trust anything this guy is telling me. I’m not going to trust him as far as I can throw him.

And I’ll tell you, that calculation was the exact same one that any news organization was going to make, any social media organization was going to make. We made the right call in the sense of not running un-validated information. And it was actually Rudy’s fault that story didn’t get out.

Now, I think that platforms like Twitter and Facebook went too far in terms of cracking down on what was there. But the reality is there was no evidence presented in the moment. And Rudy actually admitted that later when somebody asked him about email headers and text message headers, which he had, because they were real as it turned out. He actually dismissed it as “pettifogging nonsense” to ask him for that information.

So anyway, that’s a little sidetrack, but I wanted to add that in case some of your friends had never heard of it before.

DIRESTA: No, that’s such an interesting insight. I remember that Fox News passed on the story, but again, I had nothing; I was not in any way involved in any laptop conversations, so I just watched it. And they passed for that reason. That makes sense in light of what you’re saying.

I was curious about how platforms moderated, because I also did work for the Senate on the GRU, which was Russian military intelligence, which was the other piece of the 2016 interference, the hack-and-leak operation. And one of the things that I looked at in that other work was: how do you launder stolen documents into the mainstream?

And the Russians didn’t do this once. It wasn’t just the Clinton emails. The anti-doping agency got hacked. A bunch of athlete-related emails got hacked.

SHEFFIELD: Macron was another.

DIRESTA: Yeah. So there were a number of these instances. And with the athletes one in particular, when I was writing up the whole analysis of that, I went and I read all of the materials, all the open-source reporting and things that had been done on it, in addition to the data set that I had.

And you do see that they actually go and they change things, right? So they’re editing the individual files. And so there are interesting implications there as well, which is: when you get hacked material, or leaked material for that matter, as a journalist, what is the incentive of your source?

What is the incentive of this person putting it out? And what was really interesting in looking at how the Russians tried to put it out on Facebook was that prior to reaching out to reporters individually, they had tried just doing some dumps, right? And they got no pickup, because they had no distribution. This is different than the Internet Research Agency: the GRU hackers didn’t have established, trusted personas where people would retweet and stuff like that. So it was very interesting to see how you get information into the mainstream. So as I watched this story play out, I thought Facebook kind of throttled it in the short term to try to figure out what happened.

And I imagine somebody behind the scenes was calling media or calling government, trying to make sense of what happened. Whereas Twitter blocked the link to the story, just blocked it: you couldn’t share it, you couldn’t DM it. And so it’s a different degree of moderation. Facebook said, okay, we’re not going to let this trend, we’re not going to give this the full volume of amplification while we try to figure out what’s happened.

But Twitter just kind of summarily banned it. So my feeling was that Facebook had actually made the better call. Again, content moderation is people making choices with the best information in the moment. So I don’t know what you would’ve done there in that situation, but it did feel to me like Facebook tried to thread the needle.

And I don’t think that they made the wrong call, whereas the Twitter ban was extremely heavy handed.

SHEFFIELD: Yeah. Yeah, Twitter definitely went too far in my opinion. But the reality is that not allowing that to trend for one day had no real impact on the election. And it’s funny, because somebody did a poll asking: would you have voted differently if you had known about Hunter Biden’s laptop? And of course, whatever people say four years after the fact...

That’s not relevant at all. And the reality is that stuff’s still out there, and people don’t really care about it. That’s the thing. This information is now out there. You can actually find the files. And some of them appear to have been tampered with, but it’s still there.

If you know where to look and you look hard enough, you can find it. It’s not really dispositive of anything. And that’s why it hasn’t had that much of an impact, contrary to what some people may imagine.

DIRESTA: I was going to say, it did strike me as one of these things that was going to be salacious and fascinating for people who cared, and then for most people on the center-left, it wasn’t going to make a difference at all. And so I was like, who is the intended audience for this, of all the oppo dumps? Like, I get it. It’s definitely very, very, very salacious. But they emphasized the dick pics, right? That was what was on Twitter. Also, there were no interesting documents that tied the candidate to it in the way that the story was put forth.

So the space that the entire thing occupies in the conversation now is very interesting, given what we’ve just talked about. There was definitely a moderation fuck-up. I don’t know how many people would dispute that. I think even Twitter, in the hearings that have happened since, has not disputed it. I think Jack Dorsey apologized within a few hours of that call being made. But it really does continue to serve this role as the egregiously bad call by which aspersions can now be cast on all other calls. And so in the content moderation conversation, that is the impact that it’s had.

SHEFFIELD: Yeah. I think that’s right. But it’s another example of how the people who criticize content moderation seem to be acting completely in bad faith. Because you’ve dealt with that yourself: Michael Shellenberger, who was one of the Twitter Files writers at Elon Musk’s behest, you had a number of private conversations over email and phone with him, and you asked him, okay, well, what kind of standard do you want? And you never got a reply, did you?

DIRESTA: No, there was no answer to that. We had this conversation publicly, in fact, on Sam Harris’s podcast. So even if people don’t want to read my emails: Sam Harris, Bari Weiss, Michael Shellenberger and I were all in this conversation, because I think that is the question, right? There are some people who have a very well-thought-out set of principled arguments about content moderation, derived from a set of values that they hold, that I think are absolutely worth debating and arguing over. And I really think that this is a conversation. I mean, I met Mike Masnick, God, back in maybe the 2013-2014 timeframe, because we had a bunch of fights over CDA 230, and at the time I was–

SHEFFIELD: Sorry, hold on. Oh, I’m sorry. You’ve got to define that.

DIRESTA: Super, super jargony. Sorry about that, yeah.

SHEFFIELD: No, you can mention it, just say what it stands for.

DIRESTA: So, Communications Decency Act Section 230, which is the section of the Communications Decency Act that says that platforms have the right, but not the obligation, to moderate content. So sometimes it’s referred to as the sword and the shield: you can moderate, you cannot moderate, but you can’t be held responsible for the user-generated content on the platform.

So it falls under this type of law known as intermediary liability. And this is a huge topic in content moderation, because both on the left and on the right, there are constantly arguments that CDA 230 protections, which are the things that protect the platforms from liability should they host egregious user-generated content, should be revoked.

There was just a Supreme Court case on this where the content in question concerned ISIS, I believe it was recruitment, if I’m not mistaken, but ISIS content, right? The plaintiffs, I think their child had been killed in a terrorist attack, and there were questions about whether the platforms were liable for the ISIS content that was posted. And the Supreme Court, I don’t want to butcher the decision, but I believe they actually kind of sidestepped the CDA 230 question entirely.

But the thing that was very interesting was this debate about what liability platforms should bear. Right now, just to finish the thought on the current conversation: the Democrats want more things taken down, the Republicans want more things left up. Roughly speaking, different types of things fall into each category, but each thinks that you should lose your CDA 230 protection unless certain conditions or criteria are met, criteria favored by the base of their party.

So this is one reason why no regulation has actually occurred: they can’t agree on the specifics. But where I was going with this was that back in maybe 2013, 2014, CDA 230 just seemed like an egregious gift to platforms that weren’t moderating a damn thing. Because this was during the ISIS timeframe, actually, that I was paying attention to this.

And I was relatively new to the space, and I did feel like, in conversation with people where we did kind of fight about it on Twitter and had just foundationally different views, there were opportunities to hear different perspectives and learn, and to come to, for me, I think, more of an appreciation of the value of certain types of technology or technology policies that I had sort of an innate dislike or distrust for, before I spent eight years in this field, nine years now.

So that was, I think, my big gripe with the Twitter Files people and those reports. Even before I became the main character, I wrote multiple articles arguing that it was really too bad that people who actually understood content moderation hadn’t been given the files, because then you’d get a much more nuanced view of what was happening in there, a much less sensationalized, cherry-picked view, and also the kind of–

SHEFFIELD: And less partisan.

DIRESTA: Far less partisan. But there also could have been more of a question of what should be done instead. Okay, so these people were shadow banned. That's an interesting finding. How should we think about shadow banning? Is it innately terrible? There's a guy I talk to on Twitter periodically who passionately believes that shadow banning is absolutely terrible, and he is frequently in my replies telling me that. I don't think it is a hundred percent terrible, but we did come to some agreement: you could shadow ban somebody, but make it transparent, so that they can see they've been shadow banned and can appeal, provided that meets certain transparency thresholds.

But shadow banning is not inherently a terrible tool in content moderation. So these are the sorts of things where you—

SHEFFIELD: Can you define shadow banning though?

DIRESTA: Oh, yeah, yeah.

SHEFFIELD: Sorry. It's not just that it's a jargon term, Renée, it's also that I feel like a lot of people have very different definitions of what that term means. So I want to hear yours.

DIRESTA: It evolved over time. There are definitely tweets of mine from the past where I said, that's not what shadow banning is, they're not shadow banning, under the old definition. I have a slight migraine, so my recall is crappy today, but the way we talked about it in the olden days was that if you were shadow banned, nobody would see anything, right? It wasn't just that somebody who went to your page wouldn't see your posts; it was that you would type and nobody would see it anywhere. It just wasn't making it out to the public in any way. What shadow banning came to refer to instead was content that wasn't curated into other people's feeds. That's the difference between the old message-board version, where something simply never showed up, and a curated feed, where the post still lives on your page and people who go to your page can see it.

So if I wanted to see your content and you were shadow banned, and I went to Theory of Change or Matthew's Twitter or Mastodon handle or whatever, I would be able to see it there. And for me, this was very interesting, because I felt like all of a sudden the argument was that you had a right to curation. You had a right to be amplified, to–

SHEFFIELD: Promotion.

DIRESTA: Yeah, right, to promotion, to algorithmic promotion. And that was where I thought, this is a really interesting right that people feel they have. Around that time I would ask people, why do you think you're shadow banned? And sometimes these were completely ordinary people who would have absolutely no reason to be shadow banned. They just weren't big accounts; they didn't have a lot of followers; they didn't have many very good tweets. And they felt that because they weren't getting engagement, it must be because they were shadow banned. So again, this is one of those things we were talking about: once people know a thing exists, like bots or shadow banning, they begin to feel that it must be happening to them.

And this is where I do think the transparency element around shadow banning matters. If your account has been de-boosted or deprioritized, I think it actually is reasonable to make that discoverable on your profile. Because then it prevents this whole conspiracy of, I must be shadow banned because my tweet didn't get the engagement I feel it was owed.
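
To make those visibility states concrete, here is a minimal sketch in Python of the distinctions DiResta draws: the old hard shadow ban, the newer feed-level deprioritization, and a transparent, appealable notice. The class and function names are hypothetical, invented purely for illustration; this is not any platform's actual code.

```python
from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    NORMAL = "normal"          # eligible for curation into other users' feeds
    DEPRIORITIZED = "limited"  # stays on the profile, excluded from curated feeds
    HIDDEN = "hidden"          # old-style shadow ban: visible to no one

@dataclass
class Account:
    handle: str
    visibility: Visibility = Visibility.NORMAL

@dataclass
class Post:
    author: Account
    text: str

def curated_feed(posts: list[Post]) -> list[Post]:
    # Deprioritized and hidden authors never reach other users' curated feeds.
    return [p for p in posts if p.author.visibility is Visibility.NORMAL]

def profile_page(account: Account, posts: list[Post]) -> list[Post]:
    # The newer sense of "shadow banned": posts still live on the profile for
    # anyone who seeks them out. Only the old hard ban hides them everywhere.
    if account.visibility is Visibility.HIDDEN:
        return []
    return [p for p in posts if p.author is account]

def visibility_notice(account: Account) -> str:
    # The transparency step: surface the state so the user can see it and
    # appeal, instead of inferring a ban from low engagement.
    if account.visibility is Visibility.DEPRIORITIZED:
        return "Your posts are currently not recommended into feeds. You can appeal."
    return "No visibility restrictions on this account."
```

The point of `visibility_notice` is the one DiResta lands on: the restriction is the same either way, but making it discoverable replaces guesswork with an appealable fact.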

SHEFFIELD: Yeah, well, and when Elon Musk open-sourced some of the Twitter promotion algorithms, people made some interesting findings with them. One of them was that if you were generally just posting a link to content, it was actually not going to get promoted.

DIRESTA: Yeah.

SHEFFIELD: But what you just said I think is so important, because, and it's not very nice to let people know this, but maybe you just suck at Twitter. Maybe you post boring tweets, and that's why you don't have engagement. It's not because people think you're a bot, or because you're politically oppressed or something.

DIRESTA: Right. But I will also say, and this is where my friend in the replies convinced me: if you do see that you've been shadow banned, that a platform has in fact decided your content is not something it wants to push into the feed, maybe that tells you something about your content.

Maybe you're like, hmm, okay, maybe I've been an asshole lately. Maybe my language isn't great. Maybe I'm unnecessarily hostile. Maybe I can change the way that I behave in this community space. And I'm sure there will be plenty of people who will screenshot the notice and just tweet it out, alleging again that it's some sort of viewpoint-based suppression.

But there might be people who respond differently, and maybe we have to reset norms in some way. You're not going to moderate good norms into existence. So maybe it's like that little popup Twitter gives you when you're writing "fuck off," the one that says most people on Twitter don't tweet this way.

And I’m like, well, now they do.

SHEFFIELD: Well, they should. But I mean, Musk himself has been, I think, one of the foremost practitioners of this sort of nihilistic view about moderation. And you mentioned Mike Masnick, who's also a friend of mine as well.

So, for those who don't know, Mike is the publisher, editor, and impresario of a blog called Techdirt, so you can check him out there. But one of the things that Mike has said repeatedly is that Elon Musk is having to speed run the entire field of content moderation. People have been having these conversations for 10 or 15 years, and he didn't pay attention to them.

And now he's having to learn them from nothing. And in your case, he kind of borrowed one of your phrases, didn't he?

DIRESTA: Yeah, I know. I'm like the source of all things evil and sinister at Twitter, but also, yeah, the stuff I wrote back in 2018 became the moderation policy.

SHEFFIELD: Oh, but you got to say what it is.

DIRESTA: I will. But I don't want to take full credit for it, because Aza Raskin was instrumental. He was the one who put the words on the whiteboard while we were trying to think it through. He's at the Center for Humane Technology with Tristan Harris.

And back in 2018, we were asking questions like, what belongs in a recommender system? What is the difference between backyard chickens and QAnon? And it's a real question, right? What is the ethical framework, if one exists, that would differentiate between these two types of content?

And what would entitle one to be in a recommender system and not another? So we spent a lot of time looking at what came before, because this is what you do if you're trying to develop an actual theory of something. And Google had this thing called "Your Money or Your Life," right?

And this is how search engines thought about it: if you have a new cancer diagnosis and you type in the name of your cancer, and the only things that pop up on the first page of results are juice fasts, vitamin C, "you don't really have cancer," "chemotherapy is a government hoax," that kind of stuff.

Well, maybe that's not great for your actual life, right? And so Google began to come up with this idea of harms: how do we think about what a harm is? Should that be the motivating factor behind recommender systems, or behind algorithmic curation or amplification? What are the principles that should go into making these decisions?
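
As a rough illustration of the "Your Money or Your Life" idea, here is a minimal sketch of harm-aware ranking: high-stakes queries raise the quality bar a result must clear before relevance is even considered. The topic list, the score fields, and the 0.8 threshold are all invented for the example; this is not Google's actual implementation.

```python
# Hypothetical list of high-stakes query topics; a real taxonomy is far richer.
YMYL_TOPICS = {"cancer", "chemotherapy", "vaccine", "medication", "mortgage"}

def is_ymyl(query: str) -> bool:
    # "Your Money or Your Life": queries where a bad answer can do real harm.
    return any(topic in query.lower() for topic in YMYL_TOPICS)

def rank(query: str, results: list[dict]) -> list[dict]:
    # Each result carries a relevance score and a source-quality score in [0, 1].
    # For YMYL queries, raise the quality floor so that "chemotherapy is a hoax"
    # pages can't win the first page on relevance or engagement alone.
    quality_floor = 0.8 if is_ymyl(query) else 0.0
    eligible = [r for r in results if r["quality"] >= quality_floor]
    return sorted(eligible, key=lambda r: r["relevance"], reverse=True)
```

In this toy version, a page with quality 0.2 can still be hosted, and can even rank for ordinary queries, but it never clears the floor for the cancer query itself.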

And as we were trying to think about what that ideal framework looks like, what a good system looks like, the conversation was already beginning to move into the realm of partisanship. And so the "freedom of speech, not freedom of reach" argument was actually made in large part to argue that your right to say something on a platform should be protected.

Anti-vaxxers were very much in the news at the time; the measles outbreaks had started again. If you wanted to go put anti-vaccine content up on Facebook, by all means, go put it up on Facebook. But should the Facebook recommender system boost your group that says MMR causes autism?

Is that a kind of harm, such that the platform should not promote the group? That was where the distinction between speech and reach came in. And I don't work on hate speech or that side of content moderation; it's just never a thing I've gotten professionally involved in.

But your right to post versus your right to be amplified or curated, those are actually different things. And that's because there is no neutral in a curated feed. Even chronological order is privileging time, right? And in a curated feed like the For You-type page that Twitter has, there is no neutral in what gets surfaced; there is no quote-unquote right answer.

So what are the values encoded in what the platform pushes out and in what gets reach? And Elon did actually, in his speed run, come upon that in whatever capacity, and now it is the title of Twitter's moderation policy. It's always weird to see where your stuff winds up five years later.
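
To make the "no neutral feed" point concrete, here is a minimal sketch of four feed policies over the same posts. None of this is Twitter's actual code; the post fields, the weights, and the 0.5 link multiplier are invented for illustration, though the link down-weighting echoes what people reported finding in the open-sourced ranker mentioned earlier.

```python
# Post dicts are assumed to carry "timestamp", "likes", "reshares",
# "has_link", and "flagged_harmful" keys, purely for the example.

def chronological(posts: list[dict]) -> list[dict]:
    # Privileges recency: whoever posted last wins the top slot.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def engagement_ranked(posts: list[dict]) -> list[dict]:
    # Privileges reaction: sensational content tends to rise.
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["reshares"], reverse=True)

def link_penalized(posts: list[dict]) -> list[dict]:
    # One reported finding from the open-sourced ranker was that link posts
    # get down-weighted; the 0.5 multiplier here is invented.
    def score(p: dict) -> float:
        return (p["likes"] + 2 * p["reshares"]) * (0.5 if p["has_link"] else 1.0)
    return sorted(posts, key=score, reverse=True)

def reach_limited(posts: list[dict]) -> list[dict]:
    # "Freedom of speech, not freedom of reach": everything stays hosted,
    # but posts flagged as harmful are excluded from amplification.
    return engagement_ranked([p for p in posts if not p["flagged_harmful"]])
```

None of these functions is "no algorithm": choosing chronological order is as much a decision as choosing engagement.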

SHEFFIELD: Yeah. Well, and to what you're saying, you can't say, I'm not going to make a choice.

DIRESTA: Right.

SHEFFIELD: By doing that, you have made a choice. Whatever you do is a choice.

DIRESTA: Yeah.

SHEFFIELD: And that's connected again to this idea of different types of epistemology. People want to think that there is a marketplace of ideas, that that's what social media should be, that's what the internet should be. But then they never actually stop to think about how marketplaces of goods and services work. They actually have rules and regulations on them.

I can’t go into the farmer’s market and take a megaphone and start saying: ‘Hey, I’ve got penis pills here. Who wants some?’ You can’t do that. And by the same token, somebody couldn’t open up a stall and be like: ‘OK, I’m taking my clothes off now. We’re doing a strip show here.’ Like, you can’t do that.

So there are rules in a marketplace of services and goods, and there will be rules in a marketplace of ideas, otherwise they can’t function.

DIRESTA: I tried writing about this sometime last year, during the whole Elon free speech conversation, on the question of whether it is a public square. This was another thing where I'm pretty sure I have a bunch of pieces calling it a public square back in 2018, and then I decided that I actually didn't really like that metaphor anymore. But the thing with public squares, even if you're going to use the metaphor, is that there are time, place, and manner restrictions, right?

You don't get to literally bring your pitchforks and have 50 people chase one guy around the square screaming obscenities; the police will intervene. So the question used to be how those norms get carried over into the online world. That's how we used to think about it.

Now, instead, the norms of the online world are making their way into things like city council meetings and school board meetings. Some of that more hostile, caustic, mob-like behavior that is so common on social media is actually moving in the other direction.

And I think the normalization of brigading and harassment has been one of the worst aspects of the way that we've jointly used social media.

SHEFFIELD: Well, and I think it has to be said that there has been an abdication of standards on the political right. They do not want to talk about any of this stuff, and some of them are mad at Elon Musk for deciding that reach was not a right. They don't want to have any sort of rules, and they don't understand that if you don't place rules on a social media environment, it's going to turn into 8chan. That's pretty much inevitable.

And then they also don't realize that in their own spaces they have rules. I can't go into Townhall.com, which is a Christian conservative website, go into their comment section, and start posting about how the Bible is all bullshit and how Jesus was imaginary. I'm going to get banned in five seconds for that.

And you know what? That's their right. If that's the kind of space they want to have, where they don't want people debating that stuff, hey, more power to them. But understand that the majority of people don't want to debate whether the earth is flat or whether women should have the right to wear pants.

DIRESTA: I think Reddit does this really well, and in 2015 I would've been shocked to hear myself saying that. But Reddit moved into this system where the community mods have authority within the subreddit, and then there is a top-level layer where Reddit says, okay, certain categories of speech or content are egregious and banned sitewide, right?

And so there's that distinction, and you get these really interesting communities that come out of it, where there are certain norms the community decides on. In our cat community, we don't want pictures of dogs. There's one subreddit that I think is called CatsStandingUp or something like that.

And literally, it is just photos of cats standing up. If you post a photo of a cat that is not standing up, well, that violates the spirit of the community. So there are ways you can think about it, and I think you'd be hard-pressed to say your sitting-cat picture was censored. These are just the norms of the community that you've chosen to come into.
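
The two-tier structure DiResta describes, platform-wide rules on top and community-specific norms underneath, can be sketched in a few lines. Everything here, the rule lists, the lambda, and the return strings, is hypothetical and invented for illustration; it is not Reddit's actual system.

```python
# Tier 1: platform-wide rules. Violations are removed everywhere, by the platform.
PLATFORM_BANNED_TERMS = ("child exploitation", "credible violent threat")

# Tier 2: per-community norms. Violations are removed only from that community,
# by its own mods. The CatsStandingUp rule stands in for real mod judgment.
COMMUNITY_RULES = {
    "CatsStandingUp": lambda post: "cat standing up" in post.lower(),
}

def moderate(subreddit: str, post: str) -> str:
    if any(term in post.lower() for term in PLATFORM_BANNED_TERMS):
        return "removed sitewide"            # egregious content: platform decides
    community_rule = COMMUNITY_RULES.get(subreddit)
    if community_rule and not community_rule(post):
        return "removed by community mods"   # not censorship, just local norms
    return "allowed"
```

In this sketch, a sitting-cat photo posted to CatsStandingUp comes back "removed by community mods": not censored sitewide, just outside the local norms of a space you chose to enter.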

My gripe with the public square metaphor, as it's evolved over the years, is that these are actually platforms where the decisions are made by unaccountable private power. This is not public at all, and the norms of the public aren't there.

The influence of the public in shaping the rules and laws is not there. So it is, in fact, the whims and incentives of the company that set the rules. But by using phrases like public square, free expression becomes not only a value but something tied to partisan fights, and people forget that this is not one single internet.

There is no international First Amendment; different countries have different values on these things. And so you see some of these content moderation fights that started in the US now beginning to spread into other countries, with people saying, well, why is this platform moderating like this when my country says it should be moderated like that?

I think you asked me at one point earlier whether our platforms are just too big, or whether the problem is centralization. In some ways, you are trying to create some kind of universal rules system, and so someone, somewhere, is going to be unhappy with the decision that you've made.

SHEFFIELD: Yeah. Well, and the other problem with calling these privately owned spaces a public square is that you're importing civil rights discourse and thought into something that is not a governmental entity.

DIRESTA: That was what I was trying to say.

SHEFFIELD: Okay. Well, yeah.

And it makes people not understand that these are different things. On the other hand, with the bigness, a platform can become a de facto public square, and I think that's a terrible thing, because you do have city councils and school boards that are only posting their notifications on Facebook.

Well, guess what? A lot of people don't use Facebook. And so you shouldn't pretend that it's a public square. And that, ultimately, is why I have been glad to see the expansion of interest, still much smaller than everywhere else, in the Fediverse, most prominently Mastodon.

But you yourself have kind of pulled away from Twitter. Let's talk about that.

DIRESTA: Yeah. So I loved Twitter, but, excuse me, I don't know if you have kids, but you start to notice when your default is to pick up the thing and go to the app.

There's nothing interesting there, you close it, but then you're literally back in it six seconds later. It's like a habit: I open my phone, I pull up Twitter. I would just catch myself going back into it right after closing it and thinking, ooh, this is bad.

It's not even conscious at that point. And I see it with my kid; I'm constantly fighting with him over things like YouTube or Minecraft: come on, Xander, this is not something you should be doing right now. Go read a book, go outside. Your friends are outside.

Engage with people. And then I'd feel like, God, what a hypocrite I am, because here I am, always on Twitter. So there was that. And then I also started feeling like it just got so fraught. The only thing we were talking about for a while there was Elon Musk and moderation.

And I thought, all right, where are all the other interesting conversations I used to find on here, where are the interesting people? And then the algorithm changes began, and I stopped seeing them. And again, this is a thing where I could make a big deal of it: oh my God, they changed the algorithm.

This is horrible. Or you can just say, it's a private platform, the guy decided he's going to optimize for different things as he tries to make back his 44 billion or whatever. And then around the same time, there was the whole drama with the blue checks.

And then it became clear that you just weren't going to get very much reach unless you were paying for it, and then came the proliferation of trolling. It just started to feel different somehow. So the combination of my past guilt about using it too much and my distaste for what it was becoming actually made it fairly easy to uninstall.

I can still pop in on the website if I want to. But I also joined Mastodon, and I joined Bluesky and T2 too, because one of the nice things about my job is that I see things pretty early, and I actually think it's really fun to watch new platforms emerge or get popular.

I remember joining Bluesky when there were maybe a couple hundred people on it. But they already had their norms, right? This was a place where you posted AI art or selfies, and you talked about your life, not work, not tech, not news: your life. And I thought, man, this is so weird.

I don't know how to do that anymore. I was like, shit, what should I post? "Made this craft with my kids today," which is normally something I would reserve for my friends on Instagram. And I just felt like I had somehow lost the capacity for engaging with social media as a social community, as opposed to a place where we go and talk about news or the main character of the day or whatever the hell else.

On Mastodon, I felt like people were sharing news, but it was a very pleasant space, and people would actually engage with you about the article you shared, with their own esoteric thought from their discipline. And so I started feeling like I just had very different experiences in these places.

And it is largely, again, a function of what network you plug into, and then what network you find. And in both instances, you have to do more work to find that network. Bluesky, now that they've opened up a lot, has become at this point much more of a recreation of what I would call my kind of center-left plus tech plus journalism plus center-right network.

So Bluesky feels like kind of a recreation of Twitter at this point, whereas Mastodon still feels like a very different type of community, where I'm going to have maybe more in-depth academic conversations. The post length also, I think, shapes the content and the style of what you put out.

And that ties back to what we were saying at the beginning, right? You change your content, your attitude, your behavior in response to the structures and the norms that you're given. These things all mutually shape each other.

SHEFFIELD: All right.

Well, last topic here. We've talked about epistemology in a number of different ways, but I think one of the cores of this discussion or debate about content moderation goes to the fact that we have a lot of people in our society who do not actually believe in deductive reasoning.

They believe in inductive reasoning. In other words, reason for them is: I believe X, and I will find evidence that X is true. Whereas with deductive reasoning, the belief is just a starting point: I believe X is true, and now I'm going to test whether it's true. There's that second step. And the thing that worries me is that in the old days, before the proliferation of mass communication for the masses, if you had unsound ideas, such as that the earth is flat or that vaccines cause autism, your mental processes kind of limited your ability to find an audience and to disseminate your ideas. But that's not the case anymore.

DIRESTA: I think about it maybe as, like, "nichification," if you will, which I know is not a real word, but this is the incentive of the platform system, right? That really created that dynamic, I think, in a lot of ways. For most of the new media that came about, the social-media-first new media, the influencers too, the fastest way to do your job, which was, again, to grow an audience,

was to find some targetable group of people. And so entrenching identities, continuing to feed a community content that you knew it was predisposed to, and then doubling down, doubling down, doubling down. I think that audience capture phenomenon also goes both ways: you do see influencers who maybe luck into a controversy, or fall into one, get a whole new following from that, and then you can see their content change to serve it.

SHEFFIELD: I think that happened with Donald Trump.

DIRESTA: Oh interesting.

SHEFFIELD: Oh, yeah, yeah.

DIRESTA: So yeah, you're probably right. I've never thought about it from the standpoint of someone quite so massive as that, but you mean his transition from New York rich guy to populist figurehead, right?

SHEFFIELD: Yeah.

DIRESTA: Yeah. I think it's that capacity to not only find the audience but activate it. One thing that I do think about a lot is that this is how the new propaganda works. And that was the thing for me: the most interesting thing about the Russia data set was that they barely bothered trying to persuade anyone of anything, right?

The one thing they did try to persuade people of, early on, was to support Rand Paul. That was the first thing: in the early primaries, the Russian pages wanted the right wing to support Rand Paul. And then reality set in, his poor performance, his skill or lack thereof as a politician, and I think they quickly adapted, right?

So you see them begin to promote Donald Trump next. But the most persuasive style of propaganda content in there is them trying to convince Ted Cruz and Marco Rubio supporters: I know we've been fans of Ted for a while, guys, but I really feel like this new Donald Trump guy might be the way we need to go.

Almost everything else in the data set is just entrenching people in an identity they already hold and then explaining why all the other identities are bad, right? So it just turns into this sort of factionalized, identitarian model where the best way to gain engagement, grow content, maintain an audience, and then actually shape society is to constantly be doubling down on those very niche identities.

And realizing that the sort of "media of one" entrepreneurs do it, and that actual propagandists are doing it also, I just thought was really interesting. Propaganda doesn't even bother to persuade anymore. It's all just about activating what's there, and that's the system that we've created for ourselves.

SHEFFIELD: Yeah. So it isn't about reason or persuasion; it is just simply about belief. And Stephen Colbert had a way of saying it, and I'm trying to remember if you quoted it in one of your pieces or if I just saw it somewhere, back when he was doing his Bill O'Reilly character.

He said that it's not that I think this is true, it's that I feel that it's true.

DIRESTA: Yes, indeed. We put out this child safety report that I mentioned, and one of the most interesting reactions to it came when the Wall Street Journal covered it. They mentioned that some of these buyers of child exploitation material, the pedophiles, actually used the pizza emoji, right?

And the most remarkable reaction I saw was the early Pizzagate influencers falling all over themselves to declare that Pizzagate had just been vindicated. It was true. They knew it all along. And here was this report from Stanford, covered in the Wall Street Journal, that verified it happened.

And I thought, oh my gosh, this is absolutely fascinating, because it was this motte-and-bailey, right, where they reduced the entire conspiracy theory of Pizzagate to being about whether pedophiles use the pizza emoji, as opposed to what the actual theory was: Hillary Clinton and a ring of people drinking the blood of babies in a pizza shop in Washington, DC.

But all of a sudden, the entire thing had been retreated back to the defensible claim, and it was the influencers themselves who were doing it. And I look at these moments sometimes and I just think: do they know, or do they really believe this? And oftentimes, I feel like that's one of my greatest failings in this space, that I really can't answer that question a lot of the time at this point.

So.

SHEFFIELD: Yeah. Well, and that is why you say there's not really a point in distinguishing between misinformation and disinformation. Because we are in a moment now where the line between malice and delusion is an illusion.

DIRESTA: There's a word for it: we used to have rumors, right? Rumors. I started feeling maybe three or four years ago that "mis/dis" was off, that misinformation in particular was a bad characterization.

Because it misses what's happening, right? People are spreading information because they believe it, not because they've checked whether the fact is true or false. Misinformation implies that you say the earth is flat, you hear that it's round, and you say, oh, whoops, I got that wrong.

The earth is round. As opposed to: no, NASA has been lying to you, and the government is colluding and covering it up. That is motivated reasoning; that is an entrenched belief. And we've moved so much into "my identity requires that I believe this," which I think is particularly terrible. On social media, so many of the incentives reinforce it, not only networks and sensationalism, but the community around you reinforces it also.

And that's where, I think, if you deviate from that niche norm, if you will, or that niche belief, you're going to experience some ostracization from your community. So it is constantly reinforced, at all times, that your very homogeneous group thinks this way and so should you.

Yeah.

SHEFFIELD: Well, the only answer, I feel like, is what you were saying: that kind of decentralization is important, but also realizing that we can't have technology solve problems that are human problems, that are social problems.

And you shouldn't want to. All right, well, we could go on forever, but let's not. Let me put up on the screen your Mastodon handle; you are over there as "noupside," and I think everybody can guess the reason you use that as your handle by this point, hopefully.

DIRESTA: There's a finance joke there. I used to be a trader at Jane Street, and they really didn't like Twitter and didn't want us having Twitter accounts. So I made this account called No Upside, because they said there's no upside to you having a Twitter account. And it was at a point when I had kind of decided I wanted to leave, and I needed to meet people. So it was sort of: well, if you find it and fire me, there's no upside.

SHEFFIELD: Yeah. But it’s actually very apropos for content moderation as well.

DIRESTA: Anyway, I keep that handle. I feel like I was of the generation when everybody had handles, and now I do sometimes wish I'd just gone with my name, but that ship has sailed.

SHEFFIELD: Okay, all right. So it's @noupside@saturation.social, and you are on Twitter under the NoUpside handle, but not using it much.

DIRESTA: Not really, but I'm also on Bluesky under the same handle, so if anybody's on Bluesky, hopefully I'll see you there.

SHEFFIELD: Okay, cool. Well, great having you here today and we’ll wish you the best of luck in finding an upside.

DIRESTA: Have an awesome rest of the night.

SHEFFIELD: All right, so that is the program for today. I appreciate everybody for joining the show. And remember, you can go to theoryofchange.show to get more episodes. And if you are a paid subscribing member, you can get every single episode: the video, audio, and transcript. So go to theoryofchange.show, and you can subscribe on Substack or on Patreon, whichever your preference is.