Credit: Annie Spratt (via Unsplash)

Can we build a better content ecosystem without making big tech the discourse police? [Interview with Jillian York]

The apparent simplicity of our digital lives, governed by a revolving series of platforms on which the same content continually circulates in slightly different contexts, has ironically created a considerable degree of complexity.

These large quasi-public (cyber)spaces, filled with quasi-individuals who are sometimes real, sometimes pseudonymous, and who are sometimes earnest but often poisoned by irony, have turned communication into something of an amusement park. They are ostensibly there for us to express ourselves (what else?!) – but it’s easy to forget that they exist to extract profit from attention.

It’s the profit motive that we need to keep in mind when discussing issues around content moderation and freedom of expression: when you build a platform that effectively monetizes the flow of ideas and information, what that information actually is becomes secondary to everything else.

Is the medium really the message?

In his book What Tech Calls Thinking, Adrian Daub notes the influence of Marshall McLuhan on big tech. McLuhan’s tiresome dictum that “the medium is the message,” Daub argues, sets the template for Silicon Valley thinking: “To create content is to be distracted. To create the platform is to focus on the true nature of reality.” Or, in other words, “the medium is for ones who get it. The content is for idiots, naifs, sheep.”

But in teaching their users that the medium is the message, these platforms have ironically become publishers by stealth. It’s almost as if we all learned that content was trivial and unimportant precisely because that’s what social media taught us – and when it turned out that wasn’t the case, our collective response was to turn to those platforms and ask them to sort things out.

“It’s not that we should do nothing [about content moderation],” Lizzie O’Shea writes in a piece for The Baffler, “but we should be careful about demanding that companies be charged with this duty. Applying automated processes to define the limits and substance of what we see in our digital lives is not a neutral process, and rarely is it benign.”

One conclusion to draw from O’Shea’s piece is simply that we need to give content more respect. We need to stop treating it as little more than a byproduct of these increasingly pervasive technologies that provide us with some vague sense of connection, and treat it instead as the philosophical and political question it is – one that gets right to the heart of what civic society should look like.

These questions are certainly complex, and perhaps one of the reasons we look to big tech platforms is precisely so we don’t have to try to answer them. Like many other technologies, our platform ecosystem has allowed us to outsource our thinking.

Things fall apart

But it would seem that this outsourcing isn’t working. Events at the start of the year – in which Twitter and Facebook chose to ban Donald Trump – have given the debate around content moderation specifically and public speech in general even greater urgency. It was for this reason that I spoke to Jillian York, who is the Director of International Freedom of Expression at the Electronic Frontier Foundation (EFF).

I had a number of questions that I’ve been working through for some time (and, to some extent, still am), but she provided some much-needed insight that helped me better understand how we might move forward.

These issues aren’t new, but they took on a new sense of urgency after Twitter and Facebook banned Trump in the wake of the riots at the Capitol. At the time of writing, Facebook’s Oversight Board is preparing to discuss whether the platform should allow Trump to return.

Plenty has been written about the events at the start of the year. Indeed, I argued at the time that we should focus as much on our political energies as on the social media companies themselves.

Since our conversation, York’s new book – Silicon Values: The Future of Free Speech Under Surveillance Capitalism – has been published. As we move deeper into 2021, it’s a good way to reflect on the issues at hand in depth – hopefully this piece can provide a tiny bit of further thinking for anyone who has already read it, or is about to.

Do we really want Facebook to be the discourse police?

One of the key questions that has been lingering since the start of the year is the extent to which we want social media platforms to police our discourse. In an era when the violence of institutions of law and order has come under such scrutiny, I’ve often felt uneasy about calls for more policing – especially when it’s done by private companies.

York admitted to me that this is “something [she has] been thinking about for quite some time” as well. “There’s no easy answer, but there are a few things that civil society has been pushing for for a long time,” she said.

Fundamentally, York explained, it’s about “transparency within policy making and in policy implementation.” In practice this means things like “being transparent to the public over error rates, and about why certain things are removed, as well as being transparent in providing notice to the user over what rule they violated.”

Facebook might counter that they are transparent. Initiatives like the Facebook Oversight Board are presented as vital innovations that demonstrate Facebook’s willingness to engage with the wider world about issues of content moderation.

Nick Clegg’s intervention

Moreover, a recent blog post by Nick Clegg, Facebook’s Vice-President for Global Affairs and Communications, emphasised how transparency has been a guiding principle for a number of product updates. “You should be able to better understand how the ranking algorithms work and why they make particular decisions, and you should have more control over the content that is shown to you,” Clegg writes. “You should be able to talk back to the algorithm and consciously adjust or ignore the predictions it makes — to alter your personal algorithm in the cold light of day, through breathing spaces built into the design of the platform.”

However, the idea that we might be able to simply discard the algorithm doesn’t appear to have occurred to Clegg. And as for the Facebook Oversight Board, it’s hard to see it as anything more than another locked box: a group of people coming together to deliberate on issues without a clear and consistent framework to guide them.

The Santa Clara Principles

There has, however, been considerable work already done on potential frameworks for content moderation. The best-known example is the Santa Clara Principles, established in May 2018 by a number of organizations (including the Electronic Frontier Foundation) and academics, including Dr. Sarah T. Roberts of UCLA. They propose three things (sketched concretely after the list):

  1. Platforms must be transparent about numbers – from flags to suspensions, through to post and account removal, there needs to be clarity about what’s actually happening.
  2. Platforms should give clear and transparent notice to users about what content is being removed/why their account is being suspended, with violations explicitly stated.
  3. Individuals should be able to appeal decisions.
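
To make that a little more concrete, here is a minimal sketch of the kind of information a moderation notice and a transparency report would need to carry to satisfy the principles. This is purely my own illustration under those assumptions – the class and field names (ModerationNotice, TransparencyReport, and so on) are hypothetical, not anything the principles prescribe or any platform implements.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical illustration only: the Santa Clara Principles describe what
# platforms should disclose, not how they should model it in code.

@dataclass
class ModerationNotice:
    """Notice sent to an affected user (principle 2), with a route to appeal (principle 3)."""
    content_id: str                              # which post or account was actioned
    action: str                                  # e.g. "post_removed", "account_suspended"
    rule_violated: str                           # the specific rule, stated explicitly
    rule_url: str                                # link to the published policy text
    decided_at: datetime                         # when the decision was made
    automated: bool                              # whether an automated system made the call
    appeal_deadline: Optional[datetime] = None   # None would mean no appeal is offered

@dataclass
class TransparencyReport:
    """Aggregate numbers a platform would publish periodically (principle 1)."""
    period: str              # e.g. "2021-Q1"
    flags_received: int
    posts_removed: int
    accounts_suspended: int
    appeals_received: int
    appeals_reversed: int
```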

If you’re reading this for the first time you’re probably thinking that none of this sounds particularly cutting-edge or provocative. And you’re right – in fact, most major tech companies would agree with you. “Most of the major companies have actually endorsed these principles but only one has implemented them,” York tells me (if you’re interested, the one that has is Reddit).

The Santa Clara Principles are in the process of being revised, York says. She highlights that although they “all serve as an excellent example for how we can push companies to be accountable… they’re clearly not everything.” The revised principles will likely be more comprehensive and go into greater detail about what needs to be done in the industry.

Although the Santa Clara Principles are a collaborative project, York offers her own perspective on what else can be done. “Companies need users to regularly consent to the rules and be transparent about the rules up front,” she argues. We can’t, she adds, “have them changing constantly under our feet.”

Section 230

It’s impossible to talk about platforms and the discourse police without discussing Section 230. §230, part of the US Communications Decency Act, protects platforms and websites from being held liable for content their users post. Essentially, it’s a way of recognising the interactive nature of the modern internet – the fact that people comment, post, and share content on sites that they don’t own. It has become a focal point for many critics of big tech from across the political spectrum, as it ostensibly absolves social media platforms of any responsibility for the content people post on them.

It’s easy to see this as a commonsense perspective: why should huge corporations dodge accountability for misleading or hateful content?

Except it isn’t that simple. At a basic level, repealing §230 wouldn’t really tackle hate speech. “If people want to take it up with anyone, they need to take it up with the Constitution and the First Amendment,” York highlights.

Even more pointedly, it’s worth noting that Facebook has actually made supportive noises about repealing §230. “That to me… should give anyone pause – if the biggest, most monopolised platform thinks this law is fine, clearly it would help them.”

York explains Facebook’s position: “the reason is that Facebook can afford to be liable – they can afford to put whatever’s necessary in place – be it lawyers, paralegals, content moderators…”

Repealing §230, then, would only further entrench the power of platforms like Facebook. It would increase the sort of centralization that has arguably been one of the main causes of the problems we’ve seen over the last decade.

“It’s the smaller platforms that will suffer,” York points out. “If we want to see a plurality of platforms thrive – and that is what I want – whether it’s big unmoderated platforms or platforms like Reddit that allow user moderation at the subreddit level – as well as niche platforms like Ravelry – the niche platform that banned political speech last year – if we want to see all of those thrive, then repealing §230 is not the way to do [it].”

Accountability across the stack

Although it can be difficult to untangle the competing interests and democratic pitfalls that come with digital accountability and content moderation, the increased awareness of the issues at stake is to be welcomed.

However, it also raises further questions about the logical end point of accountability across technical infrastructure. In particularly polarized times – and the start of 2021 was perhaps the apotheosis of political polarization – politicizing the decisions of vendors that (literally) support different platforms and services feels inevitable.

This is something I’ve been thinking about a lot since reading Ben Thompson’s piece A Framework for Moderation. It provides some careful thinking on how we should see each layer of digital infrastructure. However, I wanted to get York’s perspective for a more left-leaning view (Thompson is decidedly more liberal in his outlook).

York added a caveat before answering, telling me that this wasn’t her area of expertise (she mentioned Joan Donovan – whom I contacted, but didn’t receive a response from – as an example of someone doing good work in this space). However, her thoughts were instructive.

“We need to think harder about where different types of companies lie in the infrastructure stack,” she says. This doesn’t mean, however, that we should politicize every aspect of infrastructure: “I don’t think that politicizing services at every level should happen. Do I think it’s inevitable? I’m an optimist so no, but I do worry that it’s happening.”

However, there is one misconception that she is eager to put right. “There’s a lot of hullabaloo (for lack of a better term) around Amazon kicking off Parler, but Parler’s not the best example,” she says. Instead, “we should be thinking about things like WikiLeaks: AWS kicked off WikiLeaks a long time ago… after… there was [apparently] some pressure from the US government… Whether or not that pressure existed, it was certainly in line with what state power at the time wanted.”

To York, the WikiLeaks story is “more troubling than booting Parler off” precisely because WikiLeaks was a direct threat to the power of the state. So, yes, although condoning AWS’s removal of Parler tacitly condones the action taken against WikiLeaks, it’s important to recognise the fallacy of the slippery slope argument that people often make.

“The slippery slope actually runs in the opposite direction – WikiLeaks was kicked off first. The more vulnerable user that was more vulnerable to state power was kicked off before [Parler].”

The profit motive

It’s clear that the profit motive plays a part in entrenching these systems that allow hate speech and misinformation to spread. Moreover, it’s also the profit motive that marginalizes content moderation as an activity. If your business model is built on attention, why invest in things that could bring additional friction to the sprawling attention machine you’ve built?

York mentions the idea that freedom of speech is not freedom of reach (“I don’t know to who it’s attributable exactly,” she says), and goes on to say that one of the central problems with our existing information ecosystem is that we’ve essentially lost the ability to participate in building those systems of reach.

This isn’t to say traditional media is without its issues, but the editorial work that takes place, whether in broadcasting or publishing, means some degree of accountability (even if it’s just nominal) is there. However, with the work of editorial teams automated away by algorithms, that accountability practically disappears. And sure, Facebook will try to save face with new initiatives like the Oversight Board, but the algorithms remain central. “The reason that those are put there in the first place is because it is profitable,” York says.

Content regulation in Europe

The content moderation conversation unfortunately centres on the U.S. This is partly to be expected, but it also means that perspectives are limited, framed by the norms and politics of that country.

However, like me, York is based in Europe (Berlin to be precise). I asked her how she sees the attitude to regulation differ across the Atlantic.

There is, she says, “a divide in regulatory thinking between the US and Europe.” While the U.S. is “focused on antitrust as well as 230 and liability,” in Europe “there’s definitely a focus on those things as well… There’s also, in the upcoming Digital Services Act (DSA), the other really great stuff that the US is missing.” This includes things “like interoperability, user controls, putting control back in the hands of users, transparency and accountability,” York explains.

Interestingly, York contrasts the approach of national governments with that of the EU. “At the EU level some of the best stuff is happening, like the DSA,” York says. However, she continues: “at the national level I’m a little bit more concerned.” She cites the German Network Enforcement Act (sometimes called the Facebook Act) which, she argues, “has gone back to that liability model but in a really troubling way, where it places the onus on companies to remove content within a very short period of time.”

Moreover, she worries that the law is being copied by “much less democratic nations, including most recently Turkey… I think we’ve got reason to be wary there.”


Thinking about content moderation beyond the USA and Europe

Although it’s hard to take the conversation wider than the US, York also cautions against limiting our perspective to just the U.S. and Europe. “Part of the problem is seeing this as a Europe/U.S. divide in the first place. The rest of the world has a role to play in this and I think we need to look to and include the rest of the world… I don’t think we’re going to get the right answers if we rely on the United States and Europe to solve these problems.”

It’s a point that’s hard to disagree with. If we see centralization and entrenched power as one of the reasons for the challenges in content moderation, we can’t replicate that centralization in our thinking. The impact of social media platforms like Facebook has been particularly acute in other parts of the world – such as Myanmar – and it’s impossible to really reckon with these issues without recognising the colonialism that’s implicit both in the way the platforms operate and in many of the ways they are critiqued.

The message? The medium sucks

Talking about content moderation is complex because it takes in such a wide range of problems and issues. One of the most useful things about talking to York was that it helped me begin to compartmentalize a number of them – while it’s always important to see how issues connect, in the context of content moderation, misinformation, and hate speech it’s useful to define our terms and delineate different problems from one another.

However, the one thing that remains clear to me is that while many of the issues require us to think carefully about infrastructure, automation, and product design, we ignore content at our peril. If social media has taught us anything, it’s that content has an allure that makes it hard to wrestle out of its newfound prominence in our everyday lives. And indeed, maybe we shouldn’t try to – perhaps we just need to find a more democratic and participatory way to create, curate, and consume content, one that’s both safe and, maybe, just maybe, more meaningful than what we have now.

Follow Jillian York on Twitter: @jilliancyork
Buy Silicon Values from Verso Books