
Social Media Helpline

Lessons from Piloting a Social Media Helpline


Digital safety, wellbeing: 2018 highlights

January 10, 2019 By ICanHelpline

This may not be the Internet safety look-back on 2018 you’d expect, given all the news about data breaches, “fake news,” “tech addiction,” algorithmic bias, election manipulation, hate speech and so on.

It’s not a pretty picture. But it’s also not the whole picture. By definition, the news reports airline crashes, not safe landings. And even if 2018 really was a year when bad news was the rule rather than the exception, that only makes the positive developments more newsworthy. So here are some digital safety developments worth noting:

An important book on cyberbullying vs. dignity: In “Protecting Children Online?” (MIT Press, 2018), author and researcher Tijana Milosevic for the first time places the subject of cyberbullying where it belongs: in the framework (and slowly growing public discussion) of dignity. Why there and not in “Internet safety”? Because “dignity is what is violated when bullying and cyberbullying take place—when a child or a teen is ridiculed because of who or what they are,” Dr. Milosevic writes. “Dignity is sometimes conceptualized as the absence of humiliation,” and – though bullying can be private or one-on-one – cyberbullying, because it is media-based, takes the form of public humiliation almost by definition. Dignity is particularly effective as an antidote to social aggression because it removes the differentiations that fuel it: social comparison, social positioning and power imbalances.

“Dignity is an inalienable right, which, unlike respect, does not have to be deserved or earned,” according to Milosevic, citing the work of scholars and practitioners in political science, education, conflict resolution and clinical psychology. This cross-disciplinary thinking is a major step forward for Internet safety, precisely because what happens online can’t be separated from bullying, harassment and hate speech offline, and because it is primarily about our humanity and sociality rather than our technology.

Real “screen time” clarity, finally: Screen time is not a thing. It’s many things, researchers tell us, which contrasts pretty significantly with lots of scary headlines and many parents’ harsh inner (parenting) critic. Here’s a headline actually drawn from academic research: “We’ve got the screen time debate all wrong. Let’s fix it.” As Wired reported, “time spent playing Fortnite ≠ time spent socializing on Snapchat ≠ time spent responding to your colleague’s Slack messages.” See also “Why the very idea of screen time is muddled and misguided” and “The trouble with ‘screen time rules’” from researchers in the Parenting for a Digital Future blog.

Safety innovation in social norms: A powerful tool for social-emotional safety and civility that humans have shaped for thousands of years, social norms are just beginning to be associated with safety in communities, from schools (see this from Prof. Sameer Hinduja) to online communities. And now this tool is being deployed by some social platforms for their users’ safety (examples here). It’s about platforms giving users more control, not ceding responsibility. Some platforms, such as giant Facebook and startup Yubo, are deleting more harmful content than ever proactively rather than only in response to users’ requests. We can contribute to that trend’s momentum by encouraging our students to report content that disturbs or hurts them – showing them they’re part of the solution. We know they are not passive consumers online; they have agency and intelligence, and one way they can exercise their rights of participation is in protecting their own and their peers’ safety in the apps they use. Equipping them for this is part of social-emotional learning (SEL), another “tool” that has made real headway in adoption by schools in many states this past year and is being discussed more and more in other countries as well. SEL teaches skills that support children’s empathy development, good social decision-making and recognition of their own and their peers’ dignity and perspectives.

Unprecedented multi-perspective discussion – even in policymakers’ hearings. At the first-ever formal House of Commons committee hearing held outside the UK, there was grandstanding, sure, but also truly substantive testimony from a rich range of views and expertise: those of scholars, news executives and reporters, as well as platform executives. We will not move the needle in making this new media environment truly work for us until we get all stakeholders at the table talking rationally and respectfully. Old-school shaming, fear-mongering and adversarial approaches will not serve us.

An important new book on content moderation. The ability to get harmful online content deleted has long been the main focus of “online safety.” This was the year it became clear that content moderation is both less and more than our source of online safety – that we need it but certainly shouldn’t rely on it completely. One person’s “free speech” is another’s harm; it’s highly contextual. “It is essential, constitutional, definitional,” writes Tarleton Gillespie of moderation in his important new book Custodians of the Internet. “Moderation is in many ways the commodity that platforms offer.” It defines a platform, our experience of it and even the nature of our media environment. And it defines even more: “We have handed over the power to set and enforce the boundaries of appropriate public speech to private companies,” writes Dr. Gillespie, a principal researcher at Microsoft Research New England, in the Georgetown Law Technology Review. And we’re talking about “appropriate public speech” in every society on the planet. These are not just platforms or Internet companies; they’re social institutions, a point made by scholar Claire Wardle and journalist Anna Wiener in The New Yorker. That fact calls for new, not more – new forms of risk mitigation and regulation.

Platforms discussing content moderation themselves – publicly. Another first was the rich, cross-sector discussion about this on both coasts this year. At two conferences called “CoMo at Scale,” one at Santa Clara University in California, the other in Washington, social media platform executives gathered with scholars, user advocates and the news media and discussed their content moderation tools and operations publicly for the first time. “One of the great things about attending these events is that it demonstrated how each internet platform is experimenting in very different ways on how to tackle these problems,” TechDirt reported. “Some are much more proactive, others are reactive. And out of all that experimentation, even if mistakes are being made, we’re finally starting to get some ideas on things that work for this community or that community.”

Platforms’ improved transparency. There’s a long way to go, but they’re investing in it. This year they put out increasingly granular numbers on what content is coming down. That’s partly due to laws like Germany’s just-enacted anti-online-hate law NetzDG (though that too is not all good news, according to The Atlantic). What’s different is that Facebook now includes numbers on proactive deletions vs. reactive ones, and Twitter includes deletions in response to users’ requests, not just governments’. Also for the first time this year, Facebook included data on bullying and harassment violations, saying that in the third quarter (the first time it provided numbers for this category) it took down 2.1 million pieces of such content, 85.1% of it reported by users, demonstrating the importance of users making use of abuse reporting tools (here are Facebook’s and Twitter’s transparency reports).

This greater transparency is so important. But it’s not the ultimate goal, right? It’s a diagnostic tool that gets us to a better treatment plan – where the treatment demands a range of skills and actions, both human and technological, behind the platforms and in society. Safety in this user-driven media environment is a distributed responsibility. When platforms say this, it’s seen as self-serving, but it’s simply a fact of our new media environment. The platforms have their responsibility, on both the prevention and intervention sides of the equation. But there’s a limit to what they can do, and transparency allows users and policymakers to find and fill the gaps and figure out solutions that work for the media environmental conditions we’re only just beginning to get used to.

So that’s it – not for this series, just for 2018. These bright spots are by no means comprehensive; they’re just the developments that stood out the most this year. What’s exciting is, they come with some really interesting ideas for developing solutions to the problems that got so much scrutiny and news coverage this year. That’s what’s coming up next, first thing in 2019: some creative ideas that have surfaced for a safer Internet going forward.

Happy New Year!!


What’s a social media helpline?

February 14, 2017 By ICanHelpline

The U.S. has many fine, well-established hotlines and helplines designed to help with specific social problems (dating abuse, depression, domestic violence, etc.) or support vulnerable populations (such as LGBTQ youth). This helpline is about the online expression of those social problems and types of victimization: usually called “abusive content,” the kind of content that typically violates social media apps’ and services’ Terms of Service. The most common kind young people face is harassment or cyberbullying.

One way to think of the difference is, traditional helplines help with what’s happening (or being experienced) offline; Internet helplines like ours help with what’s going on online. If people call us about offline issues, we refer them to the specialized help they’re seeking at traditional helplines (here, in our Resources section, is a list of the U.S.’s top hotlines and helplines for all kinds of offline issues).

What schools report
Having said that, it’s important to add that research shows a great deal of overlap between what we see in social media and what’s happening in everyday life. That’s true for everybody, and it’s especially true for young people. The problems schools report to iCanHelpline are typically relational problems in the school community that are expressed online in the form of texts, tweets, comments, images and videos. Sometimes they’re expressed verbally or physically on campus during school hours; sometimes they’re expressed online on campus; and sometimes they’re expressed online off campus after school hours. (The days of relational issues having clear lines between on campus and off campus, or between online and offline, are over.)

So there’s the relational issue itself and the visible expression of it online. The latter is what an Internet, or social media, helpline is designed to help with. We can actually be a big support to school administrators dealing with the relational part by helping to remove the hurtful visible expression of it. This content – which can range from mean to extremely embarrassing to demoralizing or even criminal – can lead to emotional harm, physical fights, threats of violence, lawsuits and worse. A social media helpline can’t resolve the relational issues, but it can help get the visible evidence of them deleted, so that school staff, students and parents seeking relief from the drama or harm can restore calm and safety and the relational issues can be resolved.

‘The real-time, real-life reality TV show’
As one educator put it, “Once the content is down, there’s nothing to copy, paste and share, fight over or gossip about. The real-time, real-life reality TV show’s over.” Defusing and disarming gets everybody closer to restorative solutions, which means people can focus more on teaching, learning and constructive interaction. That can have a tremendous positive impact on school climate and culture.

iCanHelpline is one of only a few Internet-native help services in the U.S. and the only one specifically serving youth (through their schools). It joins the many youth-serving Internet helplines already operating in Europe, Australia and New Zealand (more on that here). Ours is modeled after the UK’s Professionals Online Safety Helpline because we serve school personnel. Along with all Internet helplines around the world, iCanHelpline is part of the new middle layer of help demanded by today’s very social, user-driven media environment – middle in the sense that we help in two directions: we provide help and perspective to users, and much-needed context to social media user support teams, making abuse reports actionable. As parents, teachers and school administrators know, it’s very hard to understand young people’s online interactions (even offline ones) without any context, and it’s even harder for people far away who know neither the young people nor their school context. So we help both sides help students better.

