
Social Media Helpline

Lessons from Piloting a Social Media Helpline


Blog

The Internet is changing, so Internet safety needs to, too

March 19, 2019 By ICanHelpline

In its coverage of Facebook’s announced shift in the direction of privacy and private communication, the New York Times reported that the platform’s move will “redefine how people use social media.” Then it contradicted itself, saying that “consumers were already moving en masse toward more private methods of digital communications,” citing Snapchat, Nextdoor, Signal and Telegram. The second part – that people are moving away from the public and performative – is the more plausible claim. Teens flocking to Snapchat five years ago out of sheer self-presentation fatigue was an early sign. Sure, plenty of us, including teens, still like to put ourselves “out there.” Sure, transparency and TMI are growing overall. But not everybody wants to live life in a fishbowl – not all the time – and Facebook’s announcement is an indicator that the pendulum has started to swing back.

So things are changing, and Internet safety needs to keep up with the change and with our children. How? Here are some suggestions:

  • Embrace a new(-old) paradigm. Since its earliest days, Internet safety has been positioned in the law enforcement and, to a degree, public health paradigms. Quite understandably, since research came later, Net safety education has for too long been reactive, often scary, and about control (as in controlling the spread of a disease) and prohibition (as with drug addiction — now “tech addiction”). In the earliest days — and we’re now seeing this again with “digital wellbeing” — self-proclaimed experts and advocates literally made stuff up. As late as 2013 in the US, when researchers analyzed the country’s most widely used Internet safety curricula and programs, none of them was evidence-based or employed even the most basic criteria of risk prevention education. We now have a solid and growing body of research on youth online risk and social media, so it’s time we embrace another paradigm: education! Certain aspects of the old model continue to make sense — for example, public health’s “levels of prevention” (primary prevention instruction for all youth, e.g., digital literacy, media literacy and social literacy/SEL; secondary, more situational and targeted instruction when incidents arise; and tertiary prevention and intervention for vulnerable youth) — but not the laser focus on threat reduction. Internet safety also needs to fit the education model…
  • Teach rather than control. Instead of fear, control and prohibition, can we embrace evidence, agency and efficacy, as in the education paradigm? Can we demand evidence-based instruction, particularly for schools? Though some scholars have criticized “Internet safety” as too much of a catch-all subject, if it is a single subject, it needs to teach skills and afford agency — enable students to help themselves and each other. Certainly by now, we no longer see control and prohibition supporting any but possibly the most vulnerable young people, right? And even for them, protection must include empowerment and self-actualization. For years, researchers throughout Europe and North America have been pointing us in this direction, and I remember hearing a researcher in Australia say back in 2013 that Internet safety education has “reached the saturation point” for youth in that country. How much more so in 2019?! What would help, I think, is just to teach “The Internet,” not just “Internet safety.” The future makers and beneficiaries of Internet policy need to know things like how the Internet works, how it started and has evolved, how it’s governed, what algorithms and A.I. are, and what their human, legal and digital rights are. [See a curriculum called the Living Online Lab for teaching that includes this kind of history and context.]
  • Double down on student empowerment — because peer mentoring is powerful, because we know from the research that 70% of bystanders in bullying incidents try to help the target, and because students will help us help their peers. Part of this is 1) digital literacy education – encouraging and teaching young users to report abuse on the platforms they use (or having older students do so); 2) teaching students their constitutional and human rights – including the rights in the UN Convention on the Rights of the Child (even though the U.S. hasn’t ratified it); see this about the UNCRC’s three categories of rights: provision and participation, as well as protection; and 3) making the connection for students between what they learn in bullying prevention and social-emotional learning lessons and their digital interactions. We put great stress on their responsibilities, but what of their rights and the literacies that enable their citizenship and civic engagement wherever they are, online or offline: social literacy (SEL) as well as media literacy and digital literacy? For their effective navigation of today’s very social digital media environments, not one of these literacies can be left out.

The other part of early Internet safety we got wrong was the idea that all youth were equally at risk online. They’re not. We learned from a thorough review of the research literature in the last decade that the young people most vulnerable online are those most vulnerable offline. Many of us have absorbed that. We know that students with special needs and LGBTQ youth are more vulnerable online. But let’s think about that for a moment longer. This is a finding about all youth. All kids have varying levels of resilience – that ability to bounce back when bad things happen – at different points in their lives and even in different situations and contexts. So we can stop with the generalizations and start helping all children develop their literacies and other internal safeguards that enable effective Internet use and so much more.

The original extended version of this post can be found here at NetFamilyNews.org.

Filed Under: iCanHelpline Blog Tagged With: bullying, cyberbullying, Facebook, Internet safety, Snapchat

Digital safety 2019: The new ‘middle layer’ of user care (Part 2)

January 23, 2019 By ICanHelpline

Part 1 of this series was 2018 highlights. Now it’s time to shine light on some interesting ideas and developments that people have surfaced for a better Internet in 2019 and beyond.

So what is this middle layer thing? It’s a way of thinking about Internet safety which people have actually been discussing and building in various ways for most of this decade. We just haven’t thought of it as a whole. It’s like the proverbial elephant we know is there, but we’re all blindfolded and busy dealing with only our part of it – usually in the areas of prevention education, intervention (including law enforcement’s), content moderation or regulation – so we’re not seeing it as a whole. I suggest that we think together about the whole animal.

Why a “middle layer”? Because we’ve been working the problems on only two levels: in the cloud and on the ground. In the “cloud,” it’s content moderation, machine-learning algorithms aimed at detecting harm before it happens, transparency reports and other self-regulatory tools. The “ground” is a whole array of traditional solutions, such as regulation, 911 and law enforcement, lawsuits, school discipline, hotlines providing care for specific risks and groups (e.g., domestic violence, sexual assault, depression, suicidal crisis) and, of course, parenting.

All of that is needed (maybe – see this), but it is not enough, because our new, fast-changing, global but also very personal media environment calls for new approaches to regulation and user care. We now need to be working consciously on three levels, and it’s on the middle level that some really interesting thinking has been going on, especially in the areas of regulation and moderation.

THE ‘MIDDLE LAYER’ & REGULATION
Regulation as we know it is not enough. “What we don’t hear nearly enough,” wrote University of Toronto law professor Gillian Hadfield in Quartz last summer, “is the call to invent the future of regulation. And that’s a problem.” Interestingly, even the platforms are on board with that, Wired reports. Facebook CEO Mark Zuckerberg announced an independent court of appeals for content decisions, according to Quartz.

Whatever shape that takes, it’s the independent part that defines the middle layer – not part of what platforms do and not part of what government does – though it certainly works with both.

“Our existing regulatory tools…are not up to the task. They are not fast enough, smart enough, or accountable enough to achieve what we want them to achieve,” Dr. Hadfield added.

What we need now 1) folds in tech expertise that keeps up with the pace of tech change, 2) allows laws to be reviewed and adapted to changing needs – maybe even given an expiration date – and 3) draws on multiple perspectives: not just those of the companies being regulated but also those of the age and demographic groups the regulators aim to protect – and those of researchers!

‘SUPER-REGULATION’
For that first criterion, Dr. Hadfield calls for “super-regulation” – “’super’ because it elevates governments out of the business of legislating the ground-level details and into the business of making sure that a new competitive market of [licensed] private regulators operate in the public interest.”

These private regulators fit the description of “middle layer” because they’d have to keep both governments and “regulatory clients” happy “in order to keep their license to regulate.” Keeping their clients happy means developing “easier, less costly and more flexible ways of implementing regulatory controls.”

This layer of competitive independent regulators has actually been developing for some time, Hadfield says. She gives examples such as Microsoft “leading efforts to build global standards for privacy and cybersecurity and Google submitting to AI safety principles.” Other, slightly different, parts of the new regulatory layer have been in development too, as described by researcher Tijana Milosevic in her new book, Protecting Children Online?

Some ideas offered by researcher and author Tarleton Gillespie – a “public ombudsman” or “social media council” – could fall into either the regulation or moderation category, or both. “Each platform,” he writes in Wired, “could be required to have a public ombudsman who both responds to public concerns and translates those concerns to policy managers internally; or a single ‘social media council’ could field public complaints and demand accountability from the platforms,” he adds, citing a concept fielded by the international NGO Article19. “By focusing on procedures, such oversight could avoid the appearance of imposing a political viewpoint.” That would be imperative because, to be effective, the middle layer has to be credible to all whom it serves. Independence from the platforms and, in some countries, government, is key.

THE ‘MIDDLE LAYER’ & MODERATION
Think of content moderation as user care. It both protects users and defines “the boundaries of appropriate public speech,” writes Dr. Gillespie in his 2018 book Custodians of the Internet. The thing is, most of that protection and definition is internal to the platforms – to the cloud. It’s being done by private companies, not by governments and traditional care providers such as crisis hotlines or even 911 (on the ground).

There are several problems with that. The platforms have neither the context nor the expertise to provide real care. All they can do is delete content, which can help a lot in some cases, but – as Gillespie spells out in detail in his book – a lot of content doesn’t get deleted. Not necessarily intentionally on the platforms’ part and not only because of sheer volume, but because deletion decisions are sometimes really complicated. One person’s free speech is another person’s harm. And images that are common in one country can be extremely incendiary and dangerous in another. Potential impacts on the ground often can’t be imagined by platform moderators who’ve never been to the place where the incendiary content was posted.

Another problem is that what we’re talking about, here, is not mainly about technology – even though so many (especially those of us not born with the Internet) think that it is. It’s actually about our humanity. What happens on the Internet is rooted in people’s everyday lives and relationships, so moderating content is often, not always, more like taking a painkiller than really getting at the pain. Which is why Internet help needs to work closely with on-the-ground help such as parents, school administrators, risk prevention experts and mental healthcare specialists. They’re the ones qualified to get at the real issue and help alleviate the pain.

FILLING THE CONTEXT GAP
Because what’s happening on the ground, in offline life, is the real context of what we see online. In his hearing on Capitol Hill last spring, Facebook CEO Mark Zuckerberg suggested algorithms were getting better and better and would eventually solve the context problem. Yes, maybe for some kinds of content, but not cyberbullying, one of the most prevalent online risks for kids. Nothing is more contextual or constantly changing – within a single peer group at a single school, let alone among hundreds of millions of youth in every country on the planet. Even Facebook says in its latest Transparency Report that harassment is content of a “personal nature” that’s hard to detect and proactively remove without context. I agree, and suspect school administrators do too. It’s hard to understand what hurts whom—and what is or isn’t intended to hurt—without talking with the kids involved in even a single peer group, much less a whole school.
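To make the context gap concrete, here is a minimal, purely hypothetical sketch – the word list, message and context flags are all invented for illustration, not drawn from any platform’s actual systems – of why text-only detection gives the same answer whether a message is an inside joke or part of a pattern of cruelty:

```python
# Hypothetical illustration: the same words can be banter or bullying.
# A keyword filter sees only the text; the context lives outside the message.

FLAGGED_WORDS = {"loser", "idiot"}  # toy word list, not any platform's real one

def keyword_filter(message: str) -> bool:
    """Return True if the message contains a flagged word."""
    return any(word in message.lower() for word in FLAGGED_WORDS)

# One message...
msg = "nice one, loser :)"

# ...seen in two very different real-world situations (which the filter never sees).
contexts = [
    {"sender_is_close_friend": True,  "repeated_at_target": False},  # inside joke
    {"sender_is_close_friend": False, "repeated_at_target": True},   # part of a campaign
]

for ctx in contexts:
    verdict = "flagged" if keyword_filter(msg) else "ignored"
    print(f"{msg!r} -> {verdict} | context: {ctx}")
# The filter gives the same verdict both times; only the context differs.
```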

So a middle layer of moderation has been developing in the form of “Internet helplines” throughout Europe, in Brazil, and in Australia and New Zealand. Some have folded Internet content deletion into longstanding mental healthcare helplines serving children. Others became part of other longstanding charities, such as Save the Children Denmark and Child Focus in Belgium. Some, like SaferNet in Brazil, were nonprofit startups created just for the Internet, and still others, such as Australia’s eSafety Commissioner’s Office and New Zealand’s Netsafe, are part of national governments. But the government-based ones are not regulators, and so far they seem to meet that crucial trust criterion of staying apart from national politics.

HELP IN 2 DIRECTIONS
These helplines provide help in two directions: up to the platforms and down to users. To the platforms, the greatest service is context (because how can algorithms and the people who write them tell social cruelty from an inside joke that only looks cruel to an outsider?). Context makes abuse reports actionable so harmful content can come down. The great majority of abuse reports the platforms get are what they call “false positives”: not actionable. There are all kinds of reasons for that, from users not knowing how to report, to users reporting content that (without context) doesn’t seem to violate Terms of Service, to users abusing the system. And then there’s the content that hurts but doesn’t violate Terms. There is so much the platforms can’t possibly know, which is why they need help. They need to acknowledge this.
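As a rough illustration of what “making a report actionable” could involve, here is a hypothetical sketch of a context-enriched report; the field names and the toy escalation rule are assumptions for illustration only, not any helpline’s or platform’s actual schema or process:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a context-enriched abuse report a helpline might assemble
# before escalating to a platform. All field names are illustrative assumptions.
@dataclass
class HelplineReport:
    platform: str               # e.g. "ExampleApp" (placeholder name)
    content_urls: list          # links/screenshots the user gathered as evidence
    reporter_relationship: str  # e.g. "target", "bystander", "parent", "educator"
    offline_context: str        # what's known about the situation on the ground
    repeated: bool              # part of a pattern, or a one-off?
    suspected_policy_area: str  # e.g. "harassment" -- a suggestion, not a ruling
    notes: list = field(default_factory=list)

    def is_escalatable(self) -> bool:
        """Toy rule: escalate only when there is evidence plus some context."""
        return bool(self.content_urls) and bool(self.offline_context)

report = HelplineReport(
    platform="ExampleApp",
    content_urls=["https://example.com/evidence-1"],
    reporter_relationship="educator",
    offline_context="Ongoing conflict between two students at one school.",
    repeated=True,
    suspected_policy_area="harassment",
)
print("Escalate to platform:", report.is_escalatable())
```

The point of the sketch is simply that the contextual fields are what turn a bare URL into something a platform can act on – and, per the above, only the platform itself can decide whether its Terms were violated.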

To users, Internet helplines help with things neither the platforms nor services on the ground can help with: understanding the issue in the context of its occurrence as well as where to go for the best on-the-ground help in their country, how to gather the evidence of the online harm the app/platform needs in order to take proper action, and when content should be sent to the platform for suggested deletion. I say “suggested” because obviously only the platforms can decide what violates their own Terms of Service, but that independent, trusted 3rd party can cut through a lot of guesswork.

TRUST IS ESSENTIAL
I’m not saying these services are perfect; they’re only a very important start. There’s much work to do, including developing uniform best practices for helplines worldwide, and I hope it stops being ad hoc and piecemeal. But the work has begun. It’s independent of the platforms and in many cases even of government. Trust is essential. To be effective, the operations in this new layer need the trust of users, platforms and governments.

As it’s built out, the middle layer will provide more and more support for people, platforms and policymakers, enabling each to serve the others better. Closing the circle of care, you might say. Now it just needs to be built out more proactively and strategically – not in reaction to tragedies and laws (sometimes badly written ones) – and drawing on the expertise of all stakeholders, including our children.

This post was written by Anne Collier, founder of SocialMediaHelpline.com and a safety adviser to Facebook, Twitter, Snapchat, Yubo and other social media services.

Filed Under: iCanHelpline Blog Tagged With: 2019, digital safety

Digital safety, wellbeing: 2018 highlights

January 10, 2019 By ICanHelpline

This may not be the Internet safety look-back on 2018 you’d expect. With all the news about data breaches, “fake news,” “tech addiction,” algorithmic bias, election manipulation, hate speech, etc., etc…

It’s not a pretty picture. But it’s also not the whole picture. By definition, the news reports airline crashes, not safe landings. And even if 2018 really was unique, with bad news the rule rather than the exception, then positive developments really are news, right? So here are some digital safety developments worth noting:

An important book on cyberbullying vs. dignity: In “Protecting Children Online?” (MIT Press, 2018), author and researcher Tijana Milosevic for the first time places the subject of cyberbullying where it belongs: in the framework (and slowly growing public discussion) of dignity. Why there and not in “Internet safety”? Because “dignity is what is violated when bullying and cyberbullying take place—when a child or a teen is ridiculed because of who or what they are,” Dr. Milosevic writes. “Dignity is sometimes conceptualized as the absence of humiliation,” and – though it can be private or 1:1 like bullying – cyberbullying, because it is media-based, almost by definition takes the form of public humiliation. Dignity is particularly effective as an antidote to social aggression because it removes the differentiations and imbalances – social comparison, social positioning, power imbalances – that fuel that aggression.

“Dignity is an inalienable right, which, unlike respect, does not have to be deserved or earned,” according to Milosevic, citing the work of scholars and practitioners from the fields of political science, education, conflict resolution and clinical psychology. This cross-disciplinary thinking is a major step forward for Internet safety for the very reason that what happens online can’t be separated out from bullying, harassment and hate speech offline and is primarily about our humanity and sociality, rather than our technology.

Real “screen time” clarity, finally: Screen time is not a thing. It’s many things, researchers tell us, which contrasts pretty significantly with lots of scary headlines and many parents’ harsh inner (parenting) critic. Here’s a headline actually drawn from academic research: “We’ve got the screen time debate all wrong. Let’s fix it.” As Wired reported, “time spent playing Fortnite ≠ time spent socializing on Snapchat ≠ time spent responding to your colleague’s Slack messages.” See also “Why the very idea of screen time is muddled and misguided” and “The trouble with ‘screen time rules’” from researchers in the Parenting for a Digital Future blog.

Safety innovation in social norms: A powerful tool for social-emotional safety and civility that humans have shaped for thousands of years, social norms are just beginning to be associated with safety in communities, from schools (see this from Prof. Sameer Hinduja) to online communities. And now this tool is being deployed by some social platforms for their users’ safety (examples here). It’s about platforms giving users more control, not ceding responsibility. Some platforms, such as giant Facebook and startup Yubo, are deleting more harmful content than ever proactively rather than only in response to users’ requests. We can contribute to that trend’s momentum by encouraging our students to report content that disturbs or hurts them – showing them they’re part of the solution. We know they are not passive consumers online; they have agency and intelligence, and one way they can exercise their rights of participation is in protecting their own and their peers’ safety in the apps they use. Equipping them for this is part of social-emotional learning. SEL is another “tool” that has made real headway in adoption by schools in many states this past year, and it’s being discussed more and more in other countries as well. SEL teaches skills that support children’s empathy development, good social decision-making and recognition of their own and their peers’ dignity and perspectives.

Unprecedented multi-perspective discussion – even in policymakers’ hearings. At the first-ever formal House of Commons committee hearing held outside the UK, there was grandstanding, sure, but also truly substantive testimony from a rich range of views and expertise: those of scholars, news executives and reporters, as well as platform executives. We will not move the needle in making this new media environment truly work for us until we get all stakeholders at the table talking rationally and respectfully. Old-school shaming, fear-mongering and adversarial approaches will not serve us.

An important new book on content moderation. The ability to get harmful online content deleted has long been the main focus of “online safety.” This was the year it became clear that content moderation is both less and more than our source of online safety – and that we need it but certainly shouldn’t completely rely on it. One person’s “free speech” is another’s harm. It’s highly contextual. “It is essential, constitutional, definitional,” writes Tarleton Gillespie in his important new book Custodians of the Internet. “Moderation is in many ways the commodity that platforms offer.” It defines a platform, our experience of it and even the nature of our media environment. And it defines even more: “We have handed over the power to set and enforce the boundaries of appropriate public speech to private companies,” writes Dr. Gillespie, a principal researcher at Microsoft Research New England, in the Georgetown Law Technology Review. And we’re talking about “appropriate public speech” in every society on the planet. These are not just platforms or Internet companies; they’re social institutions – a point made by scholar Claire Wardle and journalist Anna Wiener in The New Yorker. That fact calls for new, not more – new forms of risk mitigation and regulation.

Platforms discussing content moderation themselves – publicly. Another first this year was the rich, cross-sector discussion of content moderation on both coasts. At two conferences called “CoMo at Scale,” one at Santa Clara University in California, the other in Washington, social media platform executives gathered with scholars, user advocates and the news media and discussed their content moderation tools and operations publicly for the first time. “One of the great things about attending these events is that it demonstrated how each internet platform is experimenting in very different ways on how to tackle these problems,” TechDirt reported. “Some are much more proactive, others are reactive. And out of all that experimentation, even if mistakes are being made, we’re finally starting to get some ideas on things that work for this community or that community.”

Platforms’ improved transparency. There’s a long way to go, but they’re investing in it. This year they put out increasingly granular numbers on what content is coming down. That’s partly due to laws like Germany’s just-enacted anti-online-hate law NetzDG (though that too is not all good news, according to The Atlantic). What’s different now is that Facebook includes numbers on proactive deletions vs. reactive ones, and Twitter includes deletions made in response to users’ requests, not just governments’. Also for the first time this year, Facebook included data on bullying and harassment violations, saying that in the third quarter (the first time it provided numbers for this category) it took down 2.1 million pieces of such content, 85.1% of it reported by users – demonstrating the importance of users making use of abuse-reporting tools (here are Facebook’s and Twitter’s transparency reports). This greater transparency is so important. But it’s not the ultimate goal, right? It’s a diagnostic tool that gets us to a better treatment plan – one where the treatment demands a range of skills and actions, both human and technological, behind the platforms and in society. Safety in this user-driven media environment is a distributed responsibility. When platforms say this, it’s seen as self-serving, but it’s simply a fact of our new media environment. The platforms have their responsibility, on both the prevention and intervention sides of the equation. But there’s a limit to what they can do, and transparency allows users and policymakers to find and fill the gaps and figure out solutions that work for the media-environment conditions we’re only just beginning to get used to.
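For a sense of scale, here is a back-of-the-envelope breakdown using only the two figures cited above (2.1 million pieces removed, 85.1% of them user-reported); the split is simple arithmetic, not additional data from the report:

```python
# Back-of-the-envelope split of the Q3 bullying/harassment takedowns,
# using only the two figures cited above (2.1 million pieces, 85.1% user-reported).
total_removed = 2_100_000
user_reported_share = 0.851

user_reported = total_removed * user_reported_share  # roughly 1,787,000 pieces
proactively_flagged = total_removed - user_reported  # roughly 313,000 pieces

print(f"User-reported:       {user_reported:,.0f}")
print(f"Proactively flagged: {proactively_flagged:,.0f} "
      f"({proactively_flagged / total_removed:.1%})")
```

In other words, by these figures roughly six of every seven removals in that category started with a user pressing the report button.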

So that’s it – not for this series, just for 2018. These bright spots are by no means comprehensive; they’re just the developments that stood out the most this year. What’s exciting is, they come with some really interesting ideas for developing solutions to the problems that got so much scrutiny and news coverage this year. That’s what’s coming up next, first thing in 2019: some creative ideas that have surfaced for a safer Internet going forward.

Happy New Year!!

Filed Under: iCanHelpline Blog Tagged With: icanhelpline, Internet helpline

What cyberbullying is and why that matters

November 8, 2018 By ICanHelpline

The Pew Research Center reported a surprisingly high figure for “cyberbullying”: its researchers found that 59% of US 13-to-17-year-olds had experienced some form of it.

But it’s important to zoom in on the “some form of” part. Pew’s researchers asked their respondents which forms of abusive behavior they had experienced online (the 59% was the number for teens who’d experienced at least one).
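To see why the per-form percentages below don’t simply add up to 59%, here is a tiny illustration with made-up respondents (not Pew’s data) of how an “experienced at least one form” figure is computed:

```python
# Made-up mini-dataset (not Pew's data) showing how an "at least one form"
# figure is computed: per-form rates overlap, so they don't sum to the headline number.
respondents = [
    {"name_calling": True,  "rumors": True,  "threats": False},
    {"name_calling": True,  "rumors": False, "threats": False},
    {"name_calling": False, "rumors": False, "threats": False},
    {"name_calling": False, "rumors": True,  "threats": True},
]

n = len(respondents)
per_form = {form: sum(r[form] for r in respondents) / n
            for form in respondents[0]}
at_least_one = sum(any(r.values()) for r in respondents) / n

print("Per-form rates:", per_form)         # these can overlap...
print("At least one form:", at_least_one)  # ...so this is less than their sum
```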

Three of the forms of behavior—name-calling (the most common, at 42%), rumor-spreading (32%) and physical threats (16%)—don’t require digital media or devices for delivery and aren’t even technically bullying, though they can certainly be used in bullying.

A fourth, a form of stalking (21% said they’d experienced constantly being asked where they are, what they’re doing, etc., by someone other than a parent), has also been going on for eons, but can be even more constant and extreme with mobile phones involved. It can also be a form of dating abuse.

The final two on the list are forms of what popular culture calls “sexting”: receiving unsolicited explicit images (25%) or having explicit images of oneself shared without one’s consent (7%). Both can be forms of bullying, but not necessarily; and both, especially the latter, are often sexual harassment, and—the better to protect themselves—young people need to understand this digital form of sexual harassment as such.

What cyberbullying is and why that matters
So when can any of these actually be cyberbullying? When they’re inflicted on someone repeatedly (which usually means intentionally) using digital tools or media. So the name-calling, rumor-spreading, etc. would need to be repeated and aimed, usually aggressively, at one person. Traditional definitions of bullying usually also refer to “a power imbalance,” whether physical, emotional or social, but that’s pretty well implied by the repeated aggression, right? If one person isn’t being victimized in a one-sided way, we’re usually talking about plain-old conflict, not bullying. Here’s the latest information on that from the Cyberbullying Research Center and the National Research Council.

Why does any of this matter? Well, because 59% of US teens is a lot, and this is a highly credible research organization with solid methodology. So it’s good to know what we’re talking about here – and to know that cyberbullying has not in fact gotten much worse, even though people who see that figure in the same headline as “cyberbullying” could easily conclude that it has. We don’t need to believe the worst about a teen’s experiences or behavior. And of course, it’s good to remember that people under 18 aren’t the only ones who experience or engage in any of these behaviors!

The latest figure for how many US teens have ever experienced cyberbullying is 33.8%, and that’s from a huge representative sample of 5,700 US teens surveyed by the Cyberbullying Research Center.

Zooming in on name-calling
It’s important to note that offensive name-calling is the most common form of online harassment that Pew’s respondents experienced. In her work, prominent bullying researcher Dorothy Espelage at the University of Florida has found that addressing content or behavior in social media doesn’t reduce cyberbullying nearly as much as addressing bullying, homophobic name-calling and gender-based harassment. She has also said that homophobic name-calling in upper-elementary and middle school grades predicts sexual harassment in high school, and dating violence at colleges and universities.

Who’s helping & how’re they doing?
Check out the study to see what Pew heard teens say about how well parents, politicians, police, teachers and bystanders are doing in alleviating cyberbullying. Of all those groups, parents came out best, with 59% of teens saying they are doing an “excellent” or “good” job.

Filed Under: iCanHelpline Blog Tagged With: bully, cyberbullying research center, cyberbully, Pew Research Center, school safety

‘Momo’ & media literacy

August 27, 2018 By ICanHelpline

At KnowYourMeme.com, photo of the bird-girl sculpture at Link Factory in Japan

Educators may be wondering about the latest viral media scare called “Momo,” so we’re here to help. Though it has been likened to the “Blue Whale game” of 2015-‘17, it doesn’t seem quite as viral—yet, anyway. But like Blue Whale, the very reason why it has affected a number of countries is that it’s really creepy, strikes fear in the hearts of adults who care about kids, and spreads widely through social media (with a whole lot of help from mainstream news media, which tends to cover things that go viral).

Because multiplying news reports refer to Momo as “a suicide challenge,” police rightfully feel obligated to look for any connection when they investigate cases. Then they tell reporters who ask about a connection that they’re checking into it. That’s what happened in a case reported by the Buenos Aires Times, but we haven’t been able to find a single report of police in any country confirming that a minor’s suicide was linked to Momo.

What it is

What “Momo” actually is, basically, is one or more WhatsApp accounts that reportedly send a message saying recipients will be cursed if they don’t reply. If they do, they get other threats, frightening photos, and/or challenges to complete harmful tasks, according to many news reports. So people who reply to or contact a Momo account are basically giving someone permission to troll them—and possibly send malware to their phones, some reports say. Momo is probably more than one account because copycats often join the “fun” as coverage grows, and more than one phone number associated with it has been found in WhatsApp.

Momo doesn’t appear to be as widespread as Blue Whale was, but this could be early days. As with other viral scares, the creators have no intention to be found. So it’s really hard to know how this one got started and who started it where on the Internet—which is why media outlets typically just cite each other as sources (even sketchy supermarket-tabloid-type publications in other countries). So we’re looking at a sort of (non-)vicious circle of quasi-news reporting.

Where it’s showing up

Some reporters are more responsible in their reporting than others. Heavy.com, which is rated “high” for factual reporting by MediaBiasFactCheck.com, found that Momo so far has had the most media coverage—and thus raised the most concern—in Spanish-speaking countries. Heavy also reported that its creepy profile photo depicts a sculpture created by a special effects company in Japan called Link Factory, not Midori Hayashi, the Japanese artist mentioned in too many fear-fomenting news reports citing other scary news reports.

It might help parents and kids on the alert for Momo-like contacts to know that Heavy and other news outlets have reported that three Momo-associated phone numbers have been found so far: one that appears to be from Japan (country code 81), one from Colombia (57) and one from Mexico (52). If someone with a ridiculously creepy profile photo pops up in your app, click through to their profile to see whether their phone number has one of those country codes (though the photo will probably tip you off right away!).

Talking points for homes, classrooms

Checking in: Honest curiosity works better than fear or anxiety when talking with kids about what they’re seeing in social media. There’s nothing wrong with parental concern, but—if you’re feeling it—tell your kids you are and why, then ask them if they’ve come across anything about Momo and where. Most likely it was from a peer, another parent or a teacher rather than from some sinister profile itself. But if they have WhatsApp, ask them if they’ve seen anything about it there. It’s super unlikely that they replied, but if they did, just advise them to block and report that contact—and make sure they know you have their back whenever they’re creeped out by something like this online.

Most kids smart, some vulnerable: The thing is, most kids don’t want to give trolls (or anybody) permission to torment them—unless, of course, peer pressure’s involved. If peers are involved, they’re likely to be messing with “Momo” together, as a kind of game, in effect “protecting” each other from emotional harm. They’re not likely to invite trouble unless they’re at risk or vulnerable in other ways, in which case caring adults in their lives are probably aware of their vulnerability. If not, those adults could talk with their child along the lines suggested just above and use “Momo” as an opportunity to let their kids know, again, that they love them, have their backs, and will do their best to provide whatever appropriate care is needed.

Zooming in: You may see reports about law enforcement linking Momo to suicide cases. Look at the words reporters and their sources use. In most cases (when reported responsibly), investigators say “may be linked to the Momo challenge” (emphasis ours) or they’re looking into any such association. Responsible reporters use words like “reportedly” and “allegedly” when referring to information that can’t be confirmed.

Exactly what’s viral?: Think about what it is that’s going viral and why: Is it the threat itself or news of the threat? Whether it’s called a “game,” “challenge” or “scare,” very often what we’re seeing is a viral response more than the thing itself – we’re reacting to the exposure of that creepy thing, not the thing itself. After all, the photo of the bird-girl sculpture in Japan is definitely creepy, especially when the person who posts it zooms in on just the face. But the more it’s seen, the less of a problem it is (see “Stealth intelligence” below).

For further info

  • A different one in Europe: Not as viral (yet, maybe), but another “suicide game” scare was reported by British tabloid The Sun (like the tabloids sold at supermarket checkout stands in the U.S.). Called “Deleted,” it’s more like the global Blue Whale scare of 2015-’17 in that it’s associated with online “death groups” that young Russians allegedly join. But it’s unlike Blue Whale in that, as of this writing, it seems to be more a media story than an actual threat in the UK or outside of Eastern Europe; also as of this writing, we could find only one story about it in the UK with a search in Google News. If we start to see more media scares running simultaneously around the world, news consumers may actually start seeing them for the clickbait that they mainly are. Here is vital perspective from the Safer Internet Centre in Bulgaria, a leading source on the international Blue Whale scare.
  • The “clickbait” part: Sometimes it’s super scary stories, sometimes it’s offers of free stuff. What makes stories go viral is “kind of a complex thing as a mixture of misinformation, phishing, scamming, bad advertising and monetization,” Peerapon Anutarasoat, the lead fact-checker for the Thai News Agency’s anti-fake news project, told the U.S.’s Poynter Institute, which focuses on journalistic ethics. Poynter was looking into how misinformation spreads on Line, the messenger service like WeChat and WhatsApp that started in Japan. In Poynter’s story, the “free stuff” is digital stickers, which are extremely popular in Taiwan, Thailand and Indonesia. The clickbait accounts lure new users in with offers of free stickers, then start pushing “bogus health products” at them, for example.
  • The “juvenoia” part: About the Momo scare, Larry Magid, tech journalist and my co-founder at ConnectSafely.org, wrote that “dire warnings about children dying because of apps and games is a form of ‘juvenoia’,” alluding to a term coined back in 2011 by Prof. David Finkelhor, director of the Crimes Against Children Research Center. His definition was “the exaggerated fear of the influence of social change [including technology] on youth.” Here’s more on juvenoia.
  • Stealth intelligence: BBC Brazil cited a report by ReignBot – a YouTuber who intelligently “explores creepy internet weirdness” like Momo – that, as of this writing, has gotten more than 2 million views (YouTube responsibly put up an interstitial saying the video “may be inappropriate or offensive to some audiences,” but there’s nothing offensive about the audio, so if you’re interested in ReignBot’s reporting, just listen to it). The good news in all this: “once this [ReignBot’s massively viewed report] goes up alongside other videos covering the same topic, Momo is most likely to be so riddled with fakes and copycats that it’ll completely lose its appeal.”
  • As for Minecraft: The Momo phenomenon showed up in the world of Minecraft in the form of a mod, reports gaming and pop culture news site ComicBook.com. The mod creates an avatar or character that sort of looks like the creepy “Momo” bird-girl and “chases down other Minecraft players.” ComicBook.com adds that, in a statement made to Fox News, “the team at Microsoft called the latest mod ‘sick’,” said that it’s “taking action to restrict access to the mod,” and wrote, “This content, which was independently developed by a third party, does not align with our values and is not part of the official Minecraft game.”
  • Viral not only negative, of course: Remember the “ice bucket challenge” of 2014? Also truly viral (though probably more U.S.-based), it was a campaign, not a scare. It raised more than $100 million for medical research and probably went viral because it was infectiously fun and for a good cause – and because famous people like Bill Gates and Barack Obama did it, helping to increase the campaign’s momentum.
  • Media literacy tools: “A Parent’s Guide to Media Literacy” from the National Association for Media Literacy Education (NAMLE); for kids who want to learn how to check the credibility of “news” stories they encounter, Checkology.org provides free instruction from the Washington, D.C.-based News Literacy Project to students and educators all over the world; Poynter Institute’s MediaWise is for teens to work with the Institute’s journalists to “sort out fact and fiction on the internet and social media”; and here’s KnowYourMeme.com on the Momo image.

Filed Under: iCanHelpline Blog Tagged With: media literacy, media scare, momo, viral

