
Stopping sextortion

Social media companies have the resources—but not the will—to protect children from blackmail scams



If you’re like most Americans, chances are you’ve received a text in the past month from a random number striking up a conversation out of the blue. After apologizing for texting the wrong number, the sender might, if you’re foolish enough to engage, start up a flirtatious chat and eventually share a picture of themselves (invariably an attractive member of the opposite sex). Their end goal? Sextortion: a rising crime wave in which perpetrators coax unsuspecting users into sexually explicit chats or into sharing explicit images, then reveal their true intentions and threaten to broadcast the photos or screenshots far and wide unless paid handsomely.

Most sextortion, however, starts on social media platforms, not text messages, and targets naïve teens, not savvy adults. Indeed, one target demographic is 14- to 17-year-old boys, easily flattered into thinking that a cute girl wants to chat with them, and easily terrified into paying off the scammers lest their parents, teachers, and friends find out what they’ve done. In some cases, unable to come up with the cash, the victims resort to suicide. Data from the National Center for Missing and Exploited Children (NCMEC) showed an 18,000% increase in sextortion scams from 2021 to 2023. A crime wave like that doesn’t just happen, and sure enough, it has recently emerged that a Nigeria-based cybercrime collective known as the Yahoo Boys is behind most cases and is increasingly using AI bots to carry out its scams.

In a recent briefing, the National Center on Sexual Exploitation drew attention to the scale of this crisis and the abysmal job most tech companies have done in combating it. Although determined criminals will always find a way to reach their victims, it is shocking just how easy many of these platforms have made the job. The Yahoo Boys use popular social media platforms not only to target victims but also to publicly share their exploits and train others in their methods. As one analyst, Paul Raffile, has chronicled, “They are circulating their scripts and how-to guides, literally publishing how to sextort minors openly on YouTube, Facebook groups, Instagram and TikTok.” Meanwhile, Snapchat has allowed over 10,000 sextortion reports per month to go unaddressed, and 40% to 75% of accounts on Cash App (the preferred payment vehicle for sextortionists) are reportedly fraudulent.


In many cases, these apps are dangerous by design. Most apps, even those that claim to be for adults only, fail to invest in age-verification technologies to keep teens off them. Some, like Instagram, until recently made a user’s entire friends list public by default, so that blackmailers could tell their victims exactly whom their explicit photos would be shared with. Snapchat, meanwhile, boasted as its main design feature that photos disappeared within 10 seconds of being opened. But bad actors could easily save and store the photos, and Snapchat became the platform of choice for sextortion schemes.

Beyond design flaws, however, perhaps the biggest issue is simply that these companies invest so little effort in policing what happens on their platforms. They claim to be powerless, but that claim is preposterous in an age of sophisticated AI algorithms that can detect patterns of shady behavior and immediately flag bad actors. The reality is that these companies have, to date, sought to get by with spending as little as possible on product safety. Meta, for example, boasts of spending $5 billion a year on safety and security measures. That sounds like a lot, until you realize that Meta’s 2024 revenues were $165 billion and its profits were $62 billion. In other words, Meta could have increased its safety spending tenfold last year (an extra $45 billion on top of the current $5 billion) and still have cleared roughly $17 billion in profit.

The reason they don’t is simple: they don’t need to. It’s not that corporations are uniquely heartless; it is human nature to try to maximize our profits and minimize our costs, preferably by passing them on to others. This is all the easier when the victims of our laziness are out of sight and out of mind, as is generally the case in the modern economy. The reality is that whatever good intentions such companies may profess or really possess, they will rarely be motivated to take decisive action to protect their users without regulation.

Such regulation need not take the form of heavy-handed government censorship. Thankfully, one of the easiest solutions is also among the most market-friendly: simply expose digital platforms to the same litigation risks that most other companies face, and their own cost-benefit calculations will spur them to invest in the technical tools to crack down on bad actors. Until now, internet companies have been allowed to post astounding profit margins thanks to the liability protections of Section 230, a 1996 law now grown obsolete. Those protections may have been plausible in the fledgling days of the internet, but today they have become an incentive for irresponsibility. It’s time to enact public policies that will hold these companies accountable and protect our children.


Brad Littlejohn

Brad (Ph.D., University of Edinburgh) is a fellow in the Evangelicals and Civic Life program at the Ethics and Public Policy Center. He founded and served for 10 years as president of The Davenant Institute and currently serves as a professor of Christian history at Davenant Hall and an adjunct professor of government at Regent University. He has published and lectured extensively in the fields of Reformation history, Christian ethics, and political theology. You can find more of his writing on Substack. He lives in Northern Virginia with his wife, Rachel, and four children.




Comments

Jason Maas

This was an interesting column until the last paragraph. Section 230 has been overall a huge win for Internet content from the "peasants" in the USA. There would be drastic side effects from removing it. I'm aware that it's not perfect, but anybody talking about removing it without addressing the negative consequences of that move isn't telling the whole story.

TYOU3119 (replying to Jason Maas)

Jason Maas, could you please elaborate? What exactly is Section 230, how has it been a win for (us?) “peasants,” what do we stand to lose, and what would be a better solution to increase internet safety?

Jason Maas (replying to TYOU3119)

Sure! Yes, I meant us regular folks by "peasants", somewhat tongue in cheek, meaning that we're not rich people in charge of Big Tech.

Section 230 removes liability for companies that host user-generated content.

So it makes this comment system at WORLD and any other website feasible. Without it, WORLD would be liable for anything that anyone posts. That's a lawyer's worst nightmare, so they'd just not have any comment system.

Same idea for social media websites, and also website publishing systems like Wix, Squarespace, etc.

Basically us commoners wouldn't be able to post anything on the Internet in the USA without section 230.

Here's a decent article that explains some of why powerful people on both sides of the aisle are chafing at Section 230 for different reasons. And if both parties don't like something, that probably means it's doing its job!

https://appleinsider.com/articles/25/03/21/the-future-of-internet-liability-is-uncertain-as-congress-targets-section-230