
Big Tech can’t win – But we can. (Part 1)

In the war against disinformation, social media platforms face a million dilemmas. The most sustainable solutions, however, are in our hands.

This is the first of a three-part series. Stay tuned for the second installment coming soon.

In 2019 I encountered Carole Cadwalladr’s gripping TED talk about Facebook’s role in Brexit. Working as a reporter, she traveled to Ebbw Vale, a town in South Wales that had one of the highest “leave” votes in the country. She found that even though the town had received more than 450 million pounds in aid from the European Union to fund a college, a sports center, road improvements, and other regeneration projects — and despite a large sign at each site announcing where the funding came from — many residents claimed the European Union had done nothing for them. They complained about immigrants and refugees, yet their town had one of the lowest immigration rates in the country.

After Cadwalladr released her article, she was contacted by a woman from Ebbw Vale who told her about ads and information she had encountered on Facebook. “She said it was all this quite scary stuff about immigration, and especially about Turkey. So I tried to find it. But there was nothing there.”

There appeared to be no way to trace what information had been pushed onto this woman’s news feed. Cadwalladr continues:

We have no idea who saw what ads or what impact they had, or what data was used to target these people. Or even who placed the ads, or how much money was spent, or even what nationality they were. …But we do know that in the last days before the Brexit vote, the official “Vote Leave” campaign laundered nearly three-quarters of a million pounds through another campaign entity that our electoral commission has ruled was illegal…and with this illegal cash, “Vote Leave” unleashed a fire hose of disinformation.

And most of us, we never saw these ads because we were not the target of them. “Vote Leave” identified a tiny sliver of people who it identified as persuadable, and they saw them.

This was the biggest electoral fraud in Britain for 100 years — in a once-in-a-generation vote that hinged upon just one percent of the electorate.

The disinformation ads targeted at persuadable voters spread outright lies about the European Union, and Facebook, it seems, simply turned a blind eye and let it happen on its platform.

I was disturbed — as Carole Cadwalladr pointed out, this kind of targeted electoral fraud is an actual threat to democracy. As a Facebook user, would I one day be the unsuspecting target of someone’s criminal efforts to change the way I see the world and thus my vote?

Fast-forward to early 2021, when I found a YouTube video by Destin Sandlin, creator of the Smarter Every Day channel, showing that Facebook had created a publicly accessible archive of the ads run on their site, including “who funded the ad, a range of how much they spent and the reach of the ad across multiple demographics.”

This was a straightforward solution to the problem Cadwalladr encountered in 2016. And I had never heard about it. Suddenly an issue that seemed black-and-white had a lot more grey. What I have discovered as I researched further has made me more inclined to give even the leaders of “Big Tech” a little more grace — grace I hadn’t thought they deserved. I would not argue that their actions are always correct or free of political or economic motivation, but there is far more complexity to the issue than is apparent to regular social media users. Understanding that complexity is a crucial part of finding sustainable solutions.
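The archive Sandlin described is what Facebook now calls the Ad Library, and it can be browsed on the web or queried programmatically. Below is a minimal sketch, assuming you have generated an Ad Library API access token; it targets the Graph API’s ads_archive endpoint, and the exact parameter names, field names, and API version are from memory, so verify them against Facebook’s current documentation before relying on them.

```python
# Minimal sketch: querying Facebook's Ad Library API for political ads.
# Assumes a valid Ad Library access token; parameter and field names follow the
# Graph API "ads_archive" endpoint and may differ between API versions.
import requests

ACCESS_TOKEN = "YOUR_AD_LIBRARY_ACCESS_TOKEN"  # placeholder, not a real token

params = {
    "search_terms": "immigration",             # topic to search for
    "ad_type": "POLITICAL_AND_ISSUE_ADS",      # restrict results to political/issue ads
    "ad_reached_countries": "['GB']",          # ads delivered in the United Kingdom
    "fields": "page_name,funding_entity,spend,impressions,ad_delivery_start_time",
    "access_token": ACCESS_TOKEN,
}

# The version segment ("v19.0") is an assumption; use whichever version is current.
response = requests.get("https://graph.facebook.com/v19.0/ads_archive", params=params)
response.raise_for_status()

for ad in response.json().get("data", []):
    # Each record includes who paid for the ad plus spend and impressions ranges.
    print(ad.get("page_name"), ad.get("funding_entity"), ad.get("spend"), ad.get("impressions"))
```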

Misinformation, disinformation, & fake accounts

To lay the foundation for understanding the dilemmas social media platforms face, we need to make a crucial distinction between misinformation and disinformation. As the media literacy group MediaWise explains, misinformation is false or misleading content that is shared without the intent to deceive; disinformation is content that is deliberately created and spread in order to mislead.

Misinformation is a hot topic in today’s world, and the debate about what is or is not “misinformation” is an important one. However, that discussion is beyond the scope of this article. Our focus is the more malicious material, disinformation.

Renée DiResta is an expert on misinformation and disinformation, studying how various actors use our information ecosystem to sway opinion and manipulate the real world. In her article “The Digital Maginot Line,” she argues that we, the regular users of social media, have greatly underestimated and misrepresented the problem of disinformation. We tend to “view these platforms as ordinary extensions of physical public and social spaces — the new public square, with a bit of a pollution problem.” The pollution, as we see it, is the rumors or misinformation our friends unwittingly share. But this isn’t just a pollution problem, she argues. “We are immersed in an evolving, ongoing conflict: an Information World War in which state actors, terrorists, and ideological extremists leverage the social infrastructure underpinning everyday life to sow discord and erode shared reality.”

It sounds extreme. But could it be true?

Destin Sandlin, the creator of the video I referenced earlier, actually made a whole series of videos about social media manipulation, and I highly encourage you to set aside some time to watch them (the information he presents forms the basis for this article series). In his video series, he traveled to the headquarters of several social media companies to ask about their challenges with disinformation. His interviews were enlightening and, at times, shocking.

His interview with Tessa Lyons at Facebook revealed that they remove over 1,000,000 fake accounts every day. “We use a combination of artificial intelligence and human review to identify fake accounts and to remove them from our platform because fake accounts violate our community standards.”

According to Yoel Roth, Head of Site Integrity at Twitter, their service challenges 8,000,000 to 10,000,000 accounts every week. “We obviously are focused on removing malicious activity from the service. We have taken really substantial steps, especially in the last few years, to address some of the most common forms of platform manipulation, like bots and automation.”

About three-quarters of the accounts they challenge weekly are automatically removed. “I think it’s very real that there’s an arms race around platform manipulation.”

Much of the work of removing accounts has to be automated because it would take a truly enormous number of employees to sift through that many accounts manually. Even so, Rob Leathern from Ads Integrity at Facebook said:

The amount of effort that we’re spending to stop this [is huge]…I think we said that we have something like 30,000 people working on this problem … but in some ways, that’s still a small number of people compared to the scale of the content.

Despite the incredible resources devoted to defending these platforms, some fake accounts inevitably make it through.
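To picture the division of labor these interviews describe (automated scoring backed by human review), here is a toy sketch. It is purely illustrative and is not Facebook’s or Twitter’s actual system: an invented heuristic score auto-removes only the clearest fake accounts and queues uncertain ones for a human reviewer.

```python
# Toy illustration only: NOT Facebook's or Twitter's real system.
# It sketches the general pattern described above: an automated scorer removes
# high-confidence fake accounts and routes uncertain cases to human reviewers.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    days_old: int
    posts_per_hour: float
    followers: int

def fake_score(acct: Account) -> float:
    """Crude, made-up heuristic score in [0, 1]; real systems use learned models."""
    score = 0.0
    if acct.days_old < 2:
        score += 0.4            # brand-new accounts are more suspicious
    if acct.posts_per_hour > 20:
        score += 0.4            # inhuman posting rates suggest automation
    if acct.followers == 0:
        score += 0.2            # no social graph at all
    return min(score, 1.0)

def triage(accounts, auto_remove_threshold=0.8, review_threshold=0.5):
    removed, for_human_review = [], []
    for acct in accounts:
        s = fake_score(acct)
        if s >= auto_remove_threshold:
            removed.append(acct)             # confident enough to act automatically
        elif s >= review_threshold:
            for_human_review.append(acct)    # uncertain: queue for a person
    return removed, for_human_review

removed, queued = triage([
    Account("bargain_bot_8841", days_old=0, posts_per_hour=60.0, followers=0),
    Account("maybe_new_user", days_old=1, posts_per_hour=3.0, followers=0),
])
print(len(removed), "auto-removed;", len(queued), "queued for human review")
```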

So, who is behind these fake accounts, and what are they after?

In his interview with Destin Sandlin, Sebastian Bay of the NATO Strategic Communications Centre of Excellence stated:

At the root of all this is fake accounts. People are creating and running and maintaining fake accounts. It’s a whole industry out there … because there is a lot of money involved … For a long time, we haven’t really understood how widespread this is, and as outsiders, we still don’t understand that.

People tend to trust information that comes from sources that they can identify with — sources that look and act like themselves. The vendors who sell fake accounts know this and offer accounts that match the demographic their purchasers want to influence. If you want to buy an account that appears to be the profile of a white 25-year-old Canadian man who has been online for several years, already has a friend group, and is interested in soccer, vegetarianism, and a particular political party, you can make that request and purchase that custom-made account.

If a buyer is not looking for a fake account but rather for engagement to inflate an account they already have, they can purchase that too. Destin Sandlin, in his video about manipulation on YouTube, said that the way to trick the algorithms into promoting polarizing videos or disinformation “is with artificial engagement. Artificial engagement is done with fake logins or compromised accounts. They sell them like wine on the black market. A new one’s gonna set you back about a quarter. A 2014 [account] is going to set you back about seven bucks.”

Click farms around the world offer fake engagement services, and they work because we (often subconsciously) use metrics such as likes, comments, and followers to gauge whether a source is likely to be legitimate. False engagement is a cheap way to appear trustworthy.

In an attempt to understand the industry of fake accounts and engagement, the NATO Strategic Communications Centre of Excellence put together an extensive report, and one of their most interesting findings was the extent to which these services are accessible. While I do not endorse it, you could conduct a simple internet search and purchase likes, follows, and engagement to promote your own social media presence. Sebastian Bay continued:

When we talked to these people, because we’ve interviewed people that sell these things, they’d say that when social media platforms put new regulations into account, into effect, it takes them about two or three weeks to counter those new speed bumps that they put in, if you want to use that word. And they do that by being inventive, simply by trial and error. If [the platforms] put in a CAPTCHA service, well, they do ways to break that. If they put geofencing on IP addresses, they use proxies, etc., etc… So it’s become more difficult … so far, we don’t see any limitation on the supply side. So you can buy fake activity on Twitter, for example, within 15 minutes if you have a PayPal account. Many of the people we have spoken to that do these engineering, they are people that live in developing countries but also in developed countries.

The incentive to deploy fake accounts or engagement usually falls into one of two categories: economic motivation or political/ideological manipulation.

When Sandlin interviewed Renée DiResta, she stated:

I think it’s mostly financially motivated. I think Facebook has said the same thing about propaganda. A lot of the stuff on their site coordinated inauthentic activity, mostly economically motivated. Even during 2016, now the notion of fake news was tied to Russia, but fake news wasn’t actually about Russia. If you remember, back in 2016, during the campaign, it was about people just creating these hyper-partisan sites that were literally fake news, demonstratively fake: “Pope endorses Donald Trump” and this kind of stuff. And it was just pushing people to the sites to try to make money on the ads. And so that’s what I think a lot of the challenge here is.

Economic and political motivations can be hard to distinguish because political misinformation or polarizing political content is often the vehicle for financial gain. As the Wall Street Journal reported in 2021, “The company’s data scientists had warned Facebook executives in August [2020] that what they called blatant misinformation and calls to violence were filling the majority of the platform’s top ‘civic’ Groups, according to documents The Wall Street Journal reviewed. …[A top] group claimed it was set up by fans of Donald Trump, but it was actually run by ‘financially motivated Albanians’ directing a million views daily to fake news stories and other provocative content.”

It is sobering to realize that millions of actual Americans who thought they were doing their part to be politically involved were, in reality, being preyed on by people who understood how to use their own emotions against them. These Americans’ human weaknesses, including a tendency towards violence, were leveraged to make money for people who wouldn’t have to suffer the consequences of a divided society. I can’t help but wonder if fears fomented and violent speech provoked by these Facebook groups played at least a part in laying the foundation for the January 6th riots.

This group was by no means an isolated case, either.

From the same article: “Americans didn’t run some of the most popular Groups, the August presentation noted. It deemed a Group called ‘Trump Train 2020, Red Wave’ as having ‘possible Macedonian ties’ and hosting the most hate speech taken down by Facebook of any U.S. Group.”

This kind of behavior can be found on any social media platform where content can be monetized (such as YouTube) or where people can be directed to another site whose ads can be monetized (such as using Facebook to link to a fake news article on the operator’s own website). After analyzing polarizing, monetized videos created to bypass YouTube’s fraud-detection algorithms, Destin Sandlin remarked, “This is a coordinated attack against the YouTube algorithm, complete with countermeasures. This is a serious, well-funded activity done by people meant to do harm.”

While many disinformation campaigns are run for economic gain, some are organized to shift public opinion, as we saw with the Brexit vote. As Renée DiResta wrote:

This particular manifestation of ongoing conflict was something the social networks didn’t expect. Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets or taking over critical systems.

…But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion.

The impact of these influence operations is real. In the case of the Rohingya genocide in Myanmar, disinformation campaigns helped fuel violence that killed at least 25,000 people and drove over 1,000,000 to flee the country.

Closer to home, other disinformation campaigns have had large-scale influence as well.

American elections & protests

In the 2016 United States election, hundreds of fake bot accounts posted thousands of tweets about the race. According to one analysis, “The troll accounts posed as news outlets, activists, and politically engaged Americans…contacting prominent individuals through mentions, organizing political events and abusive behavior and harassment.”

Many of those bots were run by the Internet Research Agency, a Russian organization, backed by state actors, that coordinated the attacks. Twitter’s investigation showed that 1.4 million Twitter users had engaged with those fraudulent accounts.

It’s easy to assume that organizations like the Internet Research Agency, which seek to influence public opinion, mainly post the most inflammatory political material. In reality, their messages are often far more subtle, looking less like a battle cry and more like a nudge towards disunity.

In his video about Facebook, Destin Sandlin shared a screenshot of an actual post from a fake account created by the IRA.

He commented,

Every Christian I know would look at this and not only roll their eyes, but that kind of thing breaks our hearts.

[The fake post] doesn’t have to be the legitimate viewpoint of anyone for this to be effective. It only has to vaguely resemble some semblance of truth, and that will plant the seed, and then our fears, biases, and insecurities will take over from there.

These kinds of posts are powerful at sowing division, at creating an image of what a certain “side” is. With enough of these half-second nudges, people may find themselves adopting new opinions, new biases against their neighbors, new political stances.

The disinformation campaign wasn’t limited to online posts. As Renée DiResta explained:

The Internet Research Agency accounts during our own election were creating events. So they were holding rallies in the real world. So there was one event in 2016 where they set up two protests by two opposing groups — and you can actually go find the footage for this on YouTube — there are people with White Lives Matter banners standing on one side of the street, screaming at members of the Islamic Cultural Center who showed up to counter-protest on the other side of the street. The goal is not just to keep people mad online; the goal is to carry that with you as a person operating both online and offline.

Election interference via social media manipulation does not come only from foreign governments. A now-defunct agency called the Psy Group marketed its services to American politicians, including a pitch to assist Donald Trump’s 2016 election campaign. Their sales document was titled “Reality is a Matter of Perception.” While it is unclear whether their services were used, the Wall Street Journal accessed a presentation from the agency that analyzed how social media manipulation had been used to influence the election.

Since the 2016 election and the subsequent investigations into the Internet Research Agency, social media platforms have closed many of the loopholes that allowed the bots to make it through. In more recent elections, disinformation was spread in new ways, as we saw with the violence-spurring Facebook groups in 2020.

Unfortunately, the frequency of these attacks can sow distrust of foreign nations. In an interview with Destin Sandlin, Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, offered this warning:

We know that the sophisticated actors, particularly the state actors in this space, part of their goal is to make themselves bigger than they are, to make people think they’re everywhere. People see anything they distrust or dislike; they think it’s Russia. They think it’s a foreign government. And we try very hard not to play into that and not to help with that. So, on the one hand, it’s important that people know the techniques, but on the other hand, you don’t want to be hyperbolic about what you’re seeing. You want to be as factual and specific as possible so that people don’t jump to conclusions, and you don’t feed into that.

“Factual and specific” — it takes work to think that way, and it takes courage to admit when we don’t know enough to make an informed opinion. But perhaps that is a key to understanding these complex problems without losing trust in the election process, losing faith in foreign nations’ motivations as a whole, or feeling anxious and powerless.

Donations to Ukraine (well, actually to scammers)

The YouTube scam-fighting expert who goes by the name Kitboga released a video showing how scammers use TikTok and other platforms to trick viewers into believing they are watching war victims. The manipulation is elementary — scammers take video game footage, movie scenes, or sometimes totally random live-streamed videos and simply add trending hashtags so the clips appear in users’ recommended videos. If the fake videos are emotional enough, viewers send the scammers real money.

In this video, Kitboga criticizes TikTok for letting its algorithms promote these videos to millions of viewers. I have not yet found adequate information about TikTok’s efforts to curb manipulation, so I cannot say whether TikTok has countermeasures in place that are failing or whether it is aware of this content and has consciously chosen to permit it. The obvious challenge for platforms in this situation is how to filter the content on their services efficiently enough — even while being flooded with malicious activity — without keeping genuine voices from being heard in a timely manner.

The social media platforms we frequent are literally under attack every day, and they can’t let their guard down for a second. There’s no way to win, but we expect them to manage it nearly perfectly, as seen by the outrage whenever it is discovered that criminals succeeded in leveraging the platforms to sway political opinion or prey on innocent people. In many cases, the same protections that would give these platforms an advantage in their arms race are fraught with ethical and political dilemmas.

Stay tuned for part two, which dives deeper into those dilemmas, the benefits and problems of government regulation, and what social media platforms have had to do during the crises of the past couple of years.

Sanneke Taylor is a young writer and graduate of Southern Virginia University. She loves to explore and write on a wide variety of topics from current events to health to local histories. Follow her on Instagram (@sanneke.taylor.writing) to see more of her work.