Podcast Episode: Who Should Control Online Speech?


Episode 103 of EFF’s How to Fix the Internet

The bots that try to moderate speech online are doing a terrible job, and the humans in charge of the biggest tech companies aren’t doing any better. The internet’s promise was as a space where everyone could have their say. But today, just a few platforms decide what billions of people see and say online. 

Join EFF’s Cindy Cohn and Danny O’Brien as they talk to Stanford’s Daphne Keller about why the current approach to content moderation is failing, and how a better online conversation is possible. 

Click below to listen to the episode now, or choose your podcast player:


Listen on Google Podcasts | Listen on Apple Podcasts | Listen on Spotify | Subscribe via RSS

More than ever before, societies and governments are requiring a small handful of companies, including Google, Facebook, and Twitter, to control the speech that they host online. But that comes with a great cost in both directions — marginalized communities are too often silenced and powerful voices pushing misinformation are too often amplified.

Keller talks with us about some ideas on how to get us out of this trap and back to a more distributed internet, where communities and people decide what kind of content moderation we should see—rather than tech billionaires who track us for profit or top-down dictates from governments. 

When the same image appears in a terrorist recruitment context, but also appears in counter speech, the machines can’t tell the difference.

You can also find the MP3 of this episode on the Internet Archive.

In this episode you’ll learn about: 

  • Why giant platforms do a poor job of moderating content and likely always will
  • What competitive compatibility (ComCom) is, and why it’s a vital part of the solution to our content moderation puzzle, even though it raises some issues of its own
  • Why machine learning algorithms won’t be able to figure out who or what a “terrorist” is, and who it’s likely to catch instead
  • What the debate over “amplification” of speech is about, and whether it’s really any different from the debate over speech itself
  • Why international voices need to be included in discussion about content moderation—and the problems that occur when they’re not
  • How we could shift towards “bottom-up” content moderation rather than a concentration of power 

Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. She’s a former Associate General Counsel at Google, where she worked on groundbreaking litigation and legislation around internet platform liability. You can find her on Twitter @daphnehk. Keller’s most recent paper is “Amplification and its Discontents,” which examines the consequences of governments getting into the business of regulating online speech and the algorithms that spread it. 

If you have any feedback on this episode, please email [email protected].

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

Resources 

Content Moderation:

AI/Algorithms:

Takedown and Must-Carry Laws:

Adversarial Interoperability:

Transcript of Episode 103: Putting People in Control of Online Speech

Daphne: Even if you try to deploy automated systems to figure out which speech is allowed and disallowed under that law, bots and automation and AI and other robot magic, they fail in big ways consistently.

Cindy: That’s Daphne Keller, and she’s our guest today. Daphne works out of the Stanford Center for Internet and Society and is one of the best thinkers about the complexities of today’s social media landscape and the consequences of these corporate content moderation decisions.

Danny: Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now: problems whose source and solution are often buried in the obscure twists of technological development, societal change, and the subtle details of internet law. 

Cindy: Hi everyone I’m Cindy Cohn and I’m the Executive Director of the Electronic Frontier Foundation. 

Danny: And I’m Danny O’Brien, special advisor to the Electronic Frontier Foundation.

Cindy: I’m so excited to talk to Daphne Keller because she’s worked for many years as a lawyer defending online speech. She knows all about how platforms like Facebook, TikTok, and Twitter crack down on controversial discussions and how they so often get it wrong. 

Hi Daphne, thank you for coming. 

Daphne: First, thank you so much for having me here. I am super excited. 

Cindy: So tell me: how did the internet become a place where just a few platforms get to decide what billions of people get to see and not see, and why do they do it so badly?  

Daphne: If you rewind twenty, twenty-five years, you have an internet of widely distributed nodes of speech. There wasn’t a point of centralized control, and many people saw that as a very good thing. At the same time the internet was used by a relatively privileged slice of society, and so what we’ve seen change since then, first, is that more and more of society has moved online. So that’s one big shift: the world moved online—the world and all its problems. The other big shift is really consolidation of power and control on the internet. Even 15 years ago much more of what was happening online was on individual blogs distributed across webpages, and now so much of our communication, where we go to learn things, is controlled by a pretty small handful of companies, including my former employer Google, and Facebook and Twitter. And that’s a huge shift, particularly since we as a society are asking those companies to control speech more and more, and maybe not grappling with what the consequences will be of our asking them to do that. 

Danny: Our model of how content moderation should work, where you have people looking at the comments that somebody has made and then picking and choosing, was really developed in an era where you assumed that the person making the decision was a little bit closer to you—that it was the person running your neighborhood discussion forum or editing comments on their own blog. 

Daphne: The sheer scale of moderation on a Facebook for example means that they have to adopt the most reductive, non-nuanced rules they can in order to communicate them to a distributed global workforce. And that distributed global workforce inevitably is going to interpret things differently and have inconsistent outcomes. And then having the central decision-maker sitting in Palo Alto or Mountain View in the US subject to a lot of pressure from say, whoever sits in the White House, or from advertisers, means that there’s both a huge room for error in content moderation, and inevitably policies will be adopted that 50% of the population thinks are the wrong policies. 

Danny: So when we see platform heads like Mark Zuckerberg go before the American Congress and answer questions from senators, one of the things that I hear them say again and again is that we have algorithms that sort through our feeds, we’re developing AI that can identify nuances in human communication. Why does it appear that they’ve failed so badly to create a bot that reads every post, picks and chooses which are the bad ones, and then throws them off?

Daphne: Of course the starting point is that we don’t agree on what the good ones are and what the bad ones are. But even if we could agree, even if you’re talking about a bot that’s supposed to enforce a speech law, a speech law which is something democratically enacted, and presumably has the most consensus behind it and the crispest definition, they fail in big ways consistently. You know, they set out to take down ISIS and instead they take down the Syrian Archive, which exists to document war crimes for a future prosecution. The machines make mistakes a lot, and those mistakes are not evenly distributed. We have an increasing body of research showing disparate impact, for example, on speakers of African-American English, and so there are a number of errors that hit not just on free expression values but also on equality values. There’s a whole bunch of societal concerns that are impacted when we try to have private companies deploy machines to police our speech. 

Danny: What kind of errors do we see machine learning making particularly in the example of like tackling terrorist content? 

Daphne: So I think the answers are slightly different depending which technologies we’re talking about. A lot of the technologies that get deployed to detect things like terrorist content are really about duplicate detection. And the problems with those systems are that they can’t take context into account. So when the same image appears in a terrorist recruitment context but also appears in counter speech the machines can’t tell the difference.

Danny: And when you say counter-speech, you are referring to the many ways that people speak out against hate speech.

Daphne: They’re not good at understanding things like hate speech because the ways in which humans are terrible to each other using language evolve so rapidly, and so do the ways that people try to respond to that, and undermine it, and reclaim terminology. I would also add that most of the companies that we’re talking about are in the business of selling things like targeted advertisements, and so they very much want to promote a narrative that they have technology that can understand content, that can understand what you want, that can understand what this video is and how it matches with this advertisement, and so forth. 

Cindy: I think you’re getting at one of the underlying problems we have, which is that the lack of transparency by these companies and the lack of due process when they do take-downs seem to me to be pretty major pieces of why the companies not only get it wrong but then double down on getting it wrong. There have also been proposals to put in strict rules in places like Europe so that if a platform takes something down, they have to be transparent and offer the user an opportunity to appeal. Let’s talk about that piece. 

Daphne: So those are all great developments, but I’m a contrarian, so now that I’ve got what I’ve been asking for for years, I have problems with it. My biggest problem, really, has to do with competition. Because I think the kinds of more cumbersome processes that we absolutely should ask for from the biggest platforms can themselves become a huge competitive advantage for the incumbents, if they are things that the incumbents can afford to do and smaller platforms can’t. And so the question of who should get what obligations is a really hard one, and I don’t think I have the answer. I think you need some economists thinking about it, talking to content moderation experts. But I think if we invest too hard in saying every platform has to have the maximum possible due process and the best possible transparency, we actually run into a conflict with competition goals, and we need to think harder about how to navigate those two things.

Cindy: Oh I think that’s a tremendously important point. It’s always a balancing thing especially around regulation of online activities, because we want to protect the open source folks and the people who are just getting started or somebody who has a new idea. At the same time, with great power comes great responsibility, and we want to make sure that the big guys are really doing the right thing, and we also really do want the little guys to do the right thing too. I don’t want to let them entirely off the hook but finding that scale is going to be tremendously important.  

Danny: One of the concerns that is expressed is less about the particular content of speech and more about how false speech or hateful speech tends to spread more quickly than truthful or calming speech. So you see a bunch of laws or a bunch of technical proposals around the world trying to mess around with that aspect. To give something specific, there’s been pressure on group chats like WhatsApp in India and Brazil and other countries to limit how easy it is to forward messages, or to have some way for the government to see messages that are being forwarded a great deal. Is that the kind of regulatory tweak that you’re happy with, or is that going too far? 

Daphne: Well I think there may be two things to distinguish here: one is when WhatsApp limits how many people you can share a message with or add to a group. They don’t know what the message is, because it is encrypted, and so they’re imposing this purely quantitative limit on how widely people can share things. What we see more and more in the US discussion is a focus on telling platforms that they should look at what content is and then change what they recommend or what they prioritize in a newsfeed based on what the person is saying. For example, there’s been a lot of discussion in the past couple of years about whether YouTube’s recommendation algorithm is radicalizing. You know, if you search for vegetarian recipes will it push you to vegan recipes, and there are much more sinister versions of that problem. I think it’s extremely productive for platforms themselves to look at that question, to say, hey wait, what is our amplification algorithm doing? Are there things we want to tweak so that we are not constantly rewarding our users’ worst instincts? What I see that troubles me, and that I wrote a paper on recently called Amplification and its Discontents, is this growing idea that this is also a good thing for governments to do. That we can have the law say, hey platforms, amplify this, and don’t amplify that. This is an appealing idea to a lot of people, because they think maybe platforms aren’t responsible for what their users say, but they are responsible for what they themselves choose to amplify with an algorithm.  

All the problems that we see with content moderation are the exact same problems we would see if we applied the same obligations to what they amplify. The point isn’t that you can never regulate any of these things; we do in fact regulate them. US law says if platforms see child sexual abuse material, for example, they have to take it down. We have a notice and takedown system for copyright. It’s not that we live in a world where laws can never have platforms take things down, but those laws run into a very well-known set of problems around over-removal, disparate impact, invasion of privacy, and so forth. And you get those exact same problems with amplification laws.

Danny: We’ve spent some time talking about the problems with moderation, competition, and we know there are legal and regulatory options around what goes on social media that are being applied now and figured out for the future. Daphne, can we move on to how it’s being regulated now? 

Daphne: Right now we’re going from zero government guidelines on how any of this happens to government guidelines so detailed that they take 25 pages to read and understand, plus there will be additional regulatory guidance later. I think we may come to regret that: going from having zero experience with trying to set these rules to making up what sounds right in the abstract based on the little that we know now, with inadequate transparency and inadequate basis to really make these judgment calls. I think we’re likely to make a lot of mistakes, but put them in laws that are really hard to change.

Cindy: Whereas on the other hand, you don’t want to stand for no change, because the current situation isn’t all that great either. This is a place where perhaps we need a balance between the way the Europeans think about things, which is often more highly regulatory, and the American let-the-companies-do-what-they-want strategy. We kind of need to chart a middle path.

Danny: Yeah, and I think this raises another issue, which is of course that every country is struggling with this problem, which means that every country is thinking of passing rules about what should happen to speech. But it’s the nature of the internet, and one of its advantages, or it should be, that everyone can talk to one another. What happens when speech in one country is being listened to in another, with two different sets of jurisdictional rules? Is that a resolvable problem?

Daphne: So there are a couple of versions of that problem. The one that we’ve had for years is: what if I say something that’s legal to say in the United States but illegal to say in Canada or Austria or Brazil? And so we’ve had a trickle of cases, and more recently some more important ones, with courts trying to answer that question and mostly saying, yeah, I do have the power to order global take-downs, but don’t worry, I’ll only do it when it’s really appropriate to do that. And I think we don’t have a good answer. We have some bad answers coming out of those cases, like, hell yeah, I can take down whatever I want around the world. But part of the reason we don’t have a good answer is because this isn’t something courts should be resolving. The newer thing that’s coming is kind of mind-blowing, you guys: we’re going to have situations where one country says you must take this down and the other country says you cannot take that down, you’ll be breaking the law if you do. 

Danny: Oh…and I think it’s kind of counterintuitive sometimes to see who is making those claims. So for instance I remember there being a huge furor in the United States about when Donald Trump was taken off Twitter by Twitter, and in Europe it was fascinating, because most of the politicians there who were quite critical of Donald Trump were all expressing some concern that a big tech company could silence a politician, even though it was a politician that they opposed. And I think the traditional idea of Europe is that they would not want the kind of content that Donald Trump emits on something like Twitter.

Cindy: I think this is one of the areas where it’s not just national; the kind of global split that’s happening in our society plays out in some really funny ways, because there are, as you said, these laws we call must-carry laws. There was one in Florida as well, and EFF participated in at least getting an injunction against that one. Must-carry laws are what we call a set of laws that require social media companies to keep something up and give them penalties if they take something down. This is a direct flip of some of the things that people are talking about around hate speech and other things, which require companies to take things down and penalize them if they don’t.

Daphne: I don’t want to geek out on the law too much here, but it feels to me like a moment when a lot of settled First Amendment doctrine could become shiftable very quickly, given things that we’re hearing, for example, from Clarence Thomas who issued a concurrence in another case saying, Hey, I don’t like the current state of affairs and maybe these platforms should have to carry things they don’t want to.

Cindy: I would be remiss if I didn’t point out that I think this is completely true as a policy matter. It’s also the case, as a First Amendment matter, that this distinction between regulating the speech and regulating the amplification is something the Supreme Court has looked at a lot of times and basically said is the same thing. I think the fact that it’s causing the same problems shows that this isn’t just kind of a First Amendment doctrine hanging out there in the air; the lack of a distinction in the law between whether you can say it and whether it can be amplified comes because they really do cause the same kinds of societal problems that free speech doctrine is trying to make sure don’t happen in our world. 

Danny: I was talking to a couple of Kenyan activists last week. And one of the things that they noted is that while the EU and the United States are fighting over what kind of amplification controls are lawful and would work, they’re facing the situation where any law about amplification in their own country is going to silence the political opposition, because of course politics is all about amplification. Politics, good politics, is about taking the voice of a minority and making sure that everybody knows that something bad is happening to them. So I think that sometimes we get a little bit stuck in debating things from an EU angle or US legal angle and we forget about the rest of the world.

Daphne: I think we systematically make mistakes if we don’t have voices from the rest of the world in the room to say, hey wait, this is how this is going to play out in Egypt, or this is how we’ve seen this work in Colombia. In the same way that, to take it back to content moderation generally, in-house content moderation teams make a bunch of really predictable mistakes if they’re not diverse. If they are a bunch of college-educated white people making a lot of money and living in the Bay Area, there are issues they will not spot, and you need people with more diverse backgrounds and experience to recognize and plan around them. 

Danny: Also, by contrast, if they’re incredibly underpaid people who are doing this in a call center, having to hit ridiculous numbers and being traumatized by the fact that they have to filter through the worst garbage on the internet, I think that’s a problem too.

Cindy: My conclusion from this conversation so far is that having a couple of large platforms try to regulate and control all the speech in the world is basically destined to fail, and it’s destined to fail in a whole bunch of different directions. But the focus of our podcast is not merely to name all the things broken with modern Internet policy, but to draw attention to practical and even idealistic solutions. Let’s turn to that.

Cindy: So you have dived deep into what we at EFF call adversarial interoperability or ComCom. This is the idea that users can have systems that operate across platforms, so for example you could use a social network of your choosing to communicate with your friends on Facebook without you having to join Facebook yourself. How do you think about this possible answer as a way to kind of make Facebook not the decider of everybody’s speech?  

Daphne: I love it and I want it to work, and I see a bunch of problems with it. But part of why I love it is because I’m old and I love the distributed internet, where there weren’t these sort of chokehold points of power over online discourse. And so I love the idea of getting back to something more like that.

Cindy: Yeah. 

Daphne: You know, as a First Amendment lawyer, I see it as a way forward in a neighborhood that is full of constitutional dead ends. You know, we don’t have a bunch of solutions to choose from that involve the government coming in and telling platforms what to do with more speech, especially the kinds of speech that people consider harmful or dangerous but that are definitely protected by the First Amendment. And so the government can’t pass laws about it. So getting away from solutions that involve top-down dictates about speech, towards solutions that involve bottom-up choices by speakers and by listeners and by communities about what kind of content moderation they want to see, seems really promising.

Cindy:  What does that look like from a practical perspective? 

Daphne: There are a bunch of models of this. You can envision it as what they call a federated system, like the Mastodon social network, where each node has its own rules. Or you can say, oh, you know, that goes too far, I do want someone in the middle who is able to honor copyright takedown requests or police child sexual abuse material, to be a point of control for things that society decides should be controlled.

You know, then you do something like what I’ve called magic APIs or what my Stanford colleague Francis Fukuyama has called middleware, where the idea is Facebook is still operating, but you can choose not to have their ranking or their content moderation rules, or maybe even their user interface, and you can opt to have the version from ESPN that prioritizes sports, or from a Black Lives Matter-affiliated group that prioritizes racial justice issues.

So you bring in competition in the content moderation layer, while leaving this underlying, like, treasure trove of everything we’ve ever done on the internet sitting instead with today’s incumbents.

Danny: What are some of your concerns about this approach? 

Daphne: I have four big practical problems. The first is: does the technology really work? Can you really have APIs that make all of this organization of massive amounts of data happen instantaneously, in distributed ways? The second is about money and who gets paid. And the last two are things I do know more about: one is about content moderation costs and one is about privacy. I unpack all of this in a recent short piece in the Journal of Democracy if people want to nerd out on this. But the content moderation costs piece is, you’re never going to have all of these little distributed content moderators all have Chechen speakers and Arabic speakers and Spanish speakers and Japanese speakers. You know, so there’s just a redundancy problem, where if all of them have to have all of the language capabilities to assess all of the content, that becomes inefficient. Or, you know, you’re never going to have somebody who is enough of an expert in, say, American extremist groups to know what a Hawaiian shirt means this month versus what it meant last month.  

Cindy: Yeah.

Daphne: Can I just raise one more problem with competitive compatibility or adversarial interoperability? And I raise this because I’ve just been in a lot of conversations with smart people who I respect who really get stuck on this problem, which is: aren’t you just creating a bunch of echo chambers where people will further self-isolate and listen to the lies or the hate speech? Doesn’t this further undermine our ability to have any kind of shared consensus reality and a functioning democracy? 

Cindy: I think that some of the early predictions about this haven’t really come to pass in the way that we were concerned about. I also think there are a lot of fears that are not really grounded in empirical evidence about where people get their information and how they share it, and that need to be brought into play here before we decide that we’re just stuck with Facebook, and that our only real goal here is to shake our fist at Mark Zuckerberg or write laws that will make sure that he protects the speech I like and takes down the speech I don’t like, because other people are too stupid to know the difference. 

Daphne: If we want to avoid this echo chamber problem, is it worth the trade-off of preserving these incredibly concentrated systems of power over speech? Do we think nothing’s going to go wrong with that? Do we think we have a good future with greatly concentrated power over speech held by companies that are vulnerable to pressure from, say, governments that control access to lucrative markets, like China, which has gotten American companies to take down lawful speech? Companies that are vulnerable to commercial pressures from their advertisers, which are always going to be at best majoritarian. Companies that faced a lot of pressure from the previous administration and will from this and future administrations to do what politicians want. The worst case scenario to me of having continued, extremely concentrated power over speech looks really scary, and so as I weigh the trade-offs, that weighs very heavily. But it kind of goes to questions you almost want to ask a historian or a sociologist or a political scientist or Max Weber.

Danny: When I talk to my friends or my wider circle of friends on the internet it really feels like things are just about to veer into an argument at every point. I see this in Facebook comments where someone will say something fairly innocuous and we’re all friends, but like someone will say something and then it will spiral out of control. And I think about how rare that is when I’m talking to my friends in real life. There are enough cues there that people know if we talk about this then so-and-so is going to go on a big tirade, and I think that’s a combination of coming up with new technologies, new ways of dealing with stuff, on the internet, and also as you say, better research, better understanding about what makes things spiral off in that way. And the best thing we can fix really is to change the incentives, because I think one of the reasons why we’ve hit what we’re hitting right now is that we do have a handful of companies and they all have very similar incentives to do the same kind of thing. 

Daphne: Yeah, I think that is absolutely valid. I start my internet law class at Stanford every year by having people read Larry Lessig. He lays out this premise that what truly shapes people’s behavior is not just laws, as lawyers tend to assume. It’s a mix of four things: what he calls norms, the social norms that you’re talking about; markets, economic pressure; and architecture, by which he means software and the way that systems are designed to make things possible or impossible, or easy or hard. That’s what we might think of as product design on Facebook or Twitter today. And I think those of us who are lawyers and sit in the legal silo tend to hear ideas that only use one of those levers. They use the lever of changing the law, or maybe they add changing technology, but it’s very rare to see more systemic thinking that looks at all four of those levers, and how they have worked in combination to create the problems that we’ve seen, like there are not enough social norms to keep us from being terrible to each other on the internet, but also how those levers might be useful in proposals and ideas to fix things going forward.

Cindy: We need to create the conditions in which people can try a bunch of different ideas, and we as a society can try to figure out which ones are working and which ones aren’t. We have some good examples. We know that Reddit for instance made some great strides in turning that place into something that has a lot more accountability. Part of what is exciting to me about ComCom and this middleware idea is not that they have the answer, but that they may open up the door to a bunch of things, some of which are going to be not good, but a couple of which might help us point the way forward towards a better internet that serves us. We may need to think about the next set of places where we go to speak as maybe not needing to be quite as profitable. I think we’re doing this in the media space right now, where we’re recognizing that maybe we don’t need one or two giant media chains to present all the information to us. Maybe it’s okay to have a local newspaper or a local blog that gives us the local news and that provides a reasonable living for the people who are doing it but isn’t going to attract Wall Street money and investment. I think that one of the keys to this is to move away from this idea that five big platforms make this tremendous amount of money. Let’s spread that money around by giving other people a chance to offer services. 

Daphne: I mean VCs may not like it but as a consumer I love it.

Cindy: And one of the ideas about fixing the internet around content moderation, hate speech, and these must-carry laws is really to try to create more spaces where people can speak that are a little smaller, and shrink the content moderation problem down to a size where we may still have problems but they’re not so pervasive.   

Daphne: And on sites where social norms matter more. You know, that lever, the thing that stops you from saying horrible racist things in a bar or at church or to your girlfriend or at the dinner table: if that norms element of public discourse becomes more important online, by shrinking things down into manageable communities where you know the people around you, that might be an important way forward.

Danny: Yeah, I’m not an ass in social interactions not because there’s a law against being an ass but because there’s this huge social pressure and there’s a way of conveying that social pressure in the real world and I think we can do that. 

Cindy: Thank you so much for all that insight Daphne and for breaking down some of these difficult problems into kind of manageable chunks we can begin to address directly. 

Daphne: Thank you so much for having me.

Danny: So Cindy, having heard all of that from Daphne, are you more or less optimistic about social media companies making good decisions about what we see online? 

Cindy: So I think if we’re talking about today’s social media companies and the giant platforms making good decisions, I’m probably just as pessimistic as I was when we started, if not more so. You know, Daphne really brought home how many of the problems we’re facing in content moderation and speech these days are the result of the consolidation of power and control of the internet in the hands of a few tech giants, and how the business models of these giants play into this in ways that are not good.

Danny: Yeah. And I think that the menu, the palette of potential solutions in this situation is not great either. I think the other thing that came up is, you watch governments all around the world recognize this as a problem and try to come in to fix the companies rather than fix the ecosystem, and then you end up with these very clumsy rules. Like, I thought the must-carry laws, where you go to a handful of companies and say you absolutely have to keep this content up, are such a weird fix when you start thinking about it. 

Cindy: Yeah. And of course it’s just as weird and problematic as “you must take this down, immediately.” Neither of these directions is a good one. The other thing that I really liked was how she talked about the problems with this idea that AI and bots could solve the problem.

Danny: And I think part of the challenge here is that we have this big blob of problems, right? Lots of articles are written about, oh, the terrible world of social media, and we need an instant one-off solution and Mark Zuckerberg is the person to do it. And I think that the very nature of conversation, the very nature of sociality, is that it is small scale, right? It is at the level of a local cafe.

Cindy: And of course, it leads us to the fixing part that we liked a lot, which is this idea that we try to figure out how to redistribute the internet and redistribute these places so that we have a lot more local cafes or even town squares. 

The other insight I really appreciate is kind of taking us back to, you know, the foundational thinking that our friend Larry Lessig did about how we have to think not just about law as a fix, and not just about code, how you build this thing, as a fix, but we have to look at all four things: law, code, social norms, and markets, as levers that we can use to try to make things better online.

Danny: Yeah. And I think it comes back to this idea that we have, like, this big stockpile of all the world’s conversations, and we have to crack it open and redirect it to these smaller experiments. And I think that comes back to this idea of interoperability, right? There’s been such an attempt, a reasonable commercial attempt, by these companies to create what the venture capitalists call a moat, right? Like, this space between you and your potential competition. Well, we have to breach those moats, and breaching them involves either regulation or just people building the right tools: having interoperability between the past of social media giants and the future of millions and millions of individual social media places. 

Cindy: Thank you to Daphne Keller for joining us today. 

Danny: And thank you for joining us. If you have any feedback on this episode please email [email protected]. We read every email. 

Music for the show is by Nat Keefe and Reed Mathis of BeatMower. 

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. 

I’m Danny O’Brien.

 And I’m Cindy Cohn. Thank you for listening, until next time. 


