An Internet helpline, or online safety helpline, is a service independent of Internet companies that helps Net users – so far mainly minors and the adults who work with them – address harmful online content. The Social Media Helpline, piloted in the US from 2015 to 2018 and modeled on the UK’s Professionals Online Safety Helpline, served K-12 schools and school districts seeking help with harmful online content on behalf of their students.
There are Internet helplines all over Europe, as well as in Australia and New Zealand. They take many forms. Some, such as those in Belgium and Denmark, are part of longstanding child helplines; others, such as Australia’s, in the Office of the eSafety Commissioner, are part of national governments; and still others, such as New Zealand’s, were established by national laws and evolved from NGOs. Seventy percent of Europe’s helplines “are general service helplines [that include Internet help], with 30% dedicated specifically to Internet,” according to this report from Europe. People contact them using various communication tools: phone, email, text messaging and sometimes all of the above.
How they help
Ideally, helplines provide help in two directions: 1) context to the “cloud,” the platforms, and 2) help, referrals and information to callers on the ground. They make up the new and increasingly worldwide middle layer of help that fills the gap in user care created by the arrival of social media. To users, they provide direct help (including escalation of abusive content to platform moderation teams where appropriate), advice and information about apps and services, as well as referrals to specialized traditional hotlines and emergency responders where appropriate. To content moderation teams in the cloud, helplines provide context for the harmful content they escalate – context that distant platform moderators rarely have.
Before the European Commission helped establish them across Europe in the 2000s, helplines were the missing piece on the intervention side of the online risk prevention-intervention equation. Prevention education was plentiful, but all that users had on the intervention side was either platforms’ largely unresponsive abuse reporting systems or law enforcement (the latter has outstanding emergency response protocols but is not an institution designed to navigate child and adolescent development or to address non-emergency, non-criminal, socio-emotionally harmful content produced by kids and teens). And we learned in the same decade that online harm is largely psychosocial (see this national task force report).
To the school professionals the Social Media Helpline served, we provided social media know-how and perspective, escalation of abusive content when appropriate (based on platforms’ Terms of Service), and referrals to emergency or specialized help when needed. We were an information clearinghouse as well as a youth online harm helpdesk.
Issues they address
“Among the most persistent risks identified by helplines are bullying and cyberbullying (92%), followed by hate speech (88%) and sexual content (75%),” the European researchers found. The breakdown for the US helpline was 34% cyberbullying, 33% inappropriate content, 16% “reputation issue,” 8% “sexting,” 6% inappropriate contact and 4% “other,” according to the report from our independent evaluation service. Ninety-three percent of those who contacted the US helpline were either “extremely satisfied” (63%) or “very satisfied” (30%) (a copy of the evaluation report can be requested via info[at]socialmediahelpline.com). The US helpline benefited from having direct working relationships with all the major social media services.
Funding
Funding for Internet helplines comes from a variety of sources. Except in the US, all the Internet helplines are at least partially government-funded, with some funding from the European Commission for European ones. The US pilot was primarily funded by the Digital Trust Foundation (which has since closed its doors), with some additional funding from the social media platforms. Ideally, the services are provided to the public gratis, unless their work serves institutions, such as schools, rather than individuals. It is our recommendation that a US helpline – if national and not folded into mental healthcare, youth helplines or other existing help services – remain independent from government and therefore apolitical.
Based on what we learned from helplines outside the US, we also recommend that the Internet industry help support helplines that are struggling financially, because helplines benefit the industry (by educating users and growing “customer satisfaction” as well as screening out false positives, providing context to cases and escalating only actionable abuse reports). We also recommend that the industry help build out this middle layer of help and support the development of an association of the world’s helplines. This would benefit the companies as well as the public because the association could work with the industry to establish and maintain helpline best practices globally. It could foster innovation, coordinate the work of helplines worldwide, train new helplines, be a source of user-care research and be a single point of contact for the industry.
The part about serving youth
Internet helplines help provide the context for harmful youth-posted content that the platforms don’t have. What happens between peers and peer groups in school is highly contextual and ever-changing. And since most of any child’s waking hours revolve around school life, school – not the Internet – is the context for content and behavior that shows up online. You can feed a machine-learning algorithm reams of students’ speech, images and other data, but by the time you do, the speech, memes, trends, etc. will have changed.
It’s hard enough for an adult assistant principal in the same school to get context on problems at their school that turn up online. That’s why school staff (working with students) and a helpline can get to the bottom of an online incident better than any algorithm and make a report to a platform that is actionable – while the vast majority of user abuse reports are not. Helplines also calm fears, provide perspective and increase intergenerational trust, making offline caregivers’ jobs easier. That in turn increases children’s online safety, reduces blame, and grows “trusted environments” (see the Aspen Task Force report) and positive school climates.
On a case-by-case basis, Internet helplines also offer valuable insight into youth practices in social media in real time. Pooling that knowledge (while abiding by all data privacy laws), through a global network of helplines working with independent researchers, can only expand insight in terms of depth, culture and geography, as well as improve safety and support for youth.
So to summarize, the work of helplines can defuse fears, foster communication and thus trust between youth and adults, provide tangible results and model collaborative problem-solving. As has been the case with our pilot, they can promote restorative and collaborative, rather than punitive, responses to media-related problems among students, especially when student leaders are involved in solution development. They can be the much-needed trusted third party that serves both Internet users and Internet providers.
Helpline challenges
- Sustained funding. Perhaps the Internet industry could encourage and fund the establishment of regional Internet helplines, if not one in every country. A network of regional helplines would eliminate the need for an international association of national ones and would be easier to coordinate. Independence from the industry would be essential, as is the case with the independent Oversight Board, which Facebook set up and for which it provided seed funding but which is now its own independent entity.
- Public awareness. A helpline can only be helpful to the extent that the public is aware of it and uses it. So marketing is key. Both government and industry need to help promote Internet helplines.
- Keeping up to date. This involves awareness of new apps and services attracting youth and the research on young people’s use of them. It also involves establishing strong ties with the apps and companies so they’ll provide a contact or liaison who has access to content moderation decision makers.
- Keeping good records. This is the challenge of categorizing what callers are calling about, because often one incident can involve a number of issues: hate speech, sexual harassment, cyberbullying, threats of physical violence, etc. We learned that – although it’s helpful to categorize and cross-categorize correctly for future reference – ultimately the focus needs to be on the child, not the problem.
For further reading
- “Insafe Helplines: Operations, Effectiveness and Emerging Issues for Internet Safety Helplines”
- “Internet Safety Helplines: Exploratory Study First Findings”
- A timeline of how Europe’s Internet safety strategy unfolded, starting in 2004, including the 3-pronged approach to Internet safety for children and youth in each country: Safer Internet Centres (education, awareness-raising and coordination), helplines (both SICs and helplines coordinated by the Insafe network) and hotlines (for addressing illegal child abuse images online, coordinated by the INHOPE network).
- From Pew Internet Research: “Code-Dependent: Pros and Cons of the Algorithm Age” – and then there’s this CNET story on how teens game the algorithms behind Instagram.