This may not be the Internet safety look-back on 2018 you’d expect. With all the news about data breaches, “fake news,” “tech addiction,” algorithmic bias, election manipulation, hate speech and so on, it’s not a pretty picture. But it’s also not the whole picture. By definition, the news reports airline crashes, not safe landings. Even if 2018 really was unique, with bad news the rule rather than the exception, that only makes the positive developments more newsworthy, right? So here are some digital safety developments worth noting:
An important book on cyberbullying vs. dignity: In “Protecting Children Online?” (MIT Press, 2018), author and researcher Tijana Milosevic for the first time places the subject of cyberbullying where it belongs: in the framework (and slowly growing public discussion) of dignity. Why there and not in “Internet safety”? Because “dignity is what is violated when bullying and cyberbullying take place—when a child or a teen is ridiculed because of who or what they are,” Dr. Milosevic writes. “Dignity is sometimes conceptualized as the absence of humiliation,” and, though it can be private or one-to-one like bullying, cyberbullying, because it is media-based, takes the form of public humiliation almost by definition. Dignity is particularly effective as an antidote to social aggression because it removes the differentiations that fuel it, such as social comparison, social positioning and power imbalances.
“Dignity is an inalienable right, which, unlike respect, does not have to be deserved or earned,” according to Milosevic, citing the work of scholars and practitioners from the fields of political science, education, conflict resolution and clinical psychology. This cross-disciplinary thinking is a major step forward for Internet safety precisely because what happens online can’t be separated from bullying, harassment and hate speech offline, and because it is primarily about our humanity and sociality rather than our technology.
Real “screen time” clarity, finally: Screen time is not a thing. It’s many things, researchers tell us, which contrasts pretty significantly with lots of scary headlines and many parents’ harsh inner (parenting) critic. Here’s a headline actually drawn from academic research: “We’ve got the screen time debate all wrong. Let’s fix it.” As Wired reported, “time spent playing Fortnite ≠ time spent socializing on Snapchat ≠ time spent responding to your colleague’s Slack messages.” See also “Why the very idea of screen time is muddled and misguided” and “The trouble with ‘screen time rules’” from researchers in the Parenting for a Digital Future blog.
Safety innovation in social norms: A powerful tool for social-emotional safety and civility that humans have shaped for thousands of years, social norms are just beginning to be associated with safety, from schools (see this from Prof. Sameer Hinduja) to online communities. And now this tool is being deployed by some social platforms for their users’ safety (examples here). It’s about platforms giving users more control, not ceding responsibility. Some platforms, such as giant Facebook and startup Yubo, are deleting more harmful content than ever proactively rather than only in response to users’ requests. We can contribute to that trend’s momentum by encouraging our students to report content that disturbs or hurts them, showing them they’re part of the solution. We know they are not passive consumers online; they have agency and intelligence, and one way they can exercise their rights of participation is by protecting their own and their peers’ safety in the apps they use. Equipping them for this is part of social-emotional learning (SEL), another “tool” that has made real headway in adoption by schools in many states this past year and is being discussed more and more in other countries as well. SEL teaches skills that support children’s empathy development, good social decision-making and recognition of their own and their peers’ dignity and perspectives.
Unprecedented multi-perspective discussion – even in policymakers’ hearings: At the first-ever formal House of Commons committee hearing held outside the UK, there was grandstanding, sure, but also truly substantive testimony from a rich range of views and expertise: those of scholars, news executives and reporters, as well as platform executives. We will not move the needle in making this new media environment truly work for us until we get all stakeholders at the table talking rationally and respectfully. Old-school shaming, fear-mongering and adversarial approaches will not serve us.
An important new book on content moderation: The ability to get harmful online content deleted has long been the main focus of “online safety.” This was the year it became clear that content moderation is both less and more than our source of online safety – that we need it but certainly shouldn’t rely on it completely. One person’s “free speech” is another person’s harm; it’s highly contextual. “It is essential, constitutional, definitional,” writes Tarleton Gillespie in his important new book Custodians of the Internet. “Moderation is in many ways the commodity that platforms offer.” It defines a platform, our experience of it and even the nature of our media environment. And it defines even more: “We have handed over the power to set and enforce the boundaries of appropriate public speech to private companies,” writes Dr. Gillespie, a principal researcher at Microsoft Research New England, in the Georgetown Law Technology Review. And we’re talking about “appropriate public speech” in every society on the planet. These are not just platforms or Internet companies; they’re social institutions, a point made by scholar Claire Wardle and journalist Anna Wiener in The New Yorker. That fact calls for new, not just more, forms of risk mitigation and regulation.
Platforms discussing content moderation themselves – publicly: Another first was the rich, cross-sector discussion of this on both coasts this year. At two conferences called “CoMo at Scale,” one at Santa Clara University in California, the other in Washington, social media platform executives gathered with scholars, user advocates and the news media and discussed their content moderation tools and operations publicly for the first time. “One of the great things about attending these events is that it demonstrated how each internet platform is experimenting in very different ways on how to tackle these problems,” TechDirt reported. “Some are much more proactive, others are reactive. And out of all that experimentation, even if mistakes are being made, we’re finally starting to get some ideas on things that work for this community or that community.”
Platforms’ improved transparency: There’s a long way to go, but they’re investing in it. This year they put out increasingly granular numbers on what content is coming down. That’s partly due to laws like Germany’s just-enacted anti-online-hate law NetzDG (though that too is not all good news, according to The Atlantic). What’s different is that Facebook now includes numbers on proactive deletions vs. reactive ones, and Twitter includes deletions in response to users’ requests, not just governments’. Also for the first time this year, Facebook included data on bullying and harassment violations, saying that in the third quarter (the first time it provided numbers for this category) it took down 2.1 million pieces of such content, 85.1% of it reported by users, demonstrating the importance of users making use of abuse reporting tools (here are Facebook’s and Twitter’s transparency reports). This greater transparency is so important. But it’s not the ultimate goal, right? It’s a diagnostic tool that gets us to a better treatment plan, where the treatment demands a range of skills and actions, both human and technological, behind the platforms and in society. Safety in this user-driven media environment is a distributed responsibility. When platforms say this, it’s seen as self-serving, but it’s simply a fact of our new media environment. The platforms have their responsibility, on both the prevention and intervention sides of the equation. But there’s a limit to what they can do, and transparency allows users and policymakers to find and fill the gaps and figure out solutions that work for the conditions of a media environment we’re only just beginning to get used to.
So that’s it – not for this series, just for 2018. These bright spots are by no means comprehensive; they’re just the developments that stood out the most. What’s exciting is that they come with some really interesting ideas for developing solutions to the problems that got so much scrutiny and news coverage this year. That’s what’s coming up next, first thing in 2019: some creative ideas that have surfaced for a safer Internet going forward.
Happy New Year!!