By Tony Romm and Kurt Wagner, for Recode
Facebook, Google and Twitter plan to tell congressional investigators this week that the scope of Russia’s campaign to spread disinformation on their sites — and to potentially disrupt the 2016 U.S. presidential race — is much broader than the companies initially reported.
At Facebook, roughly 126 million users in the United States may have seen posts, stories or other content created by Russian government-backed trolls around Election Day, according to a source familiar with the company’s forthcoming testimony to Congress. Previously, Facebook had only shared information on ads purchased by Kremlin-tied accounts, revealing that they reached more than 10 million U.S. users.
Google, which previously had not commented on its internal investigation, will break its silence: In a forthcoming blog post, the search giant confirmed that it discovered about $4,700 worth of search-and-display ads with dubious Russian ties. It also reported 18 YouTube channels associated with the Kremlin’s disinformation efforts, as well as a number of Gmail addresses that “were used to open accounts on other platforms.”
And Twitter will tell Congress that it found more than 2,700 accounts tied to a known Russian-sponsored organization called the Internet Research Agency, according to sources familiar with its testimony. Twitter initially informed lawmakers about just 200 known accounts. The company will also release a new study showing the extent to which Russia-based automated accounts, or bots, of all sorts tweet on its platform.
In sharing these findings with congressional investigators, the three tech giants plan to emphasize that Russian-fostered disinformation — while troubling — amounted to only a small portion of the ads and other content published regularly on their platforms. Facebook, for example, hopes to highlight that its U.S. users are served more than 200 stories in their News Feeds each day, according to a source familiar with its thinking.
Still, the companies’ explanations may not satisfy an ever-expanding chorus of critics on Capitol Hill. Lawmakers are increasingly demanding that Facebook, Google and Twitter step up their efforts to counter the Kremlin’s attempts to sow political and social discord — or else face more regulation by the U.S. government.
For the tech industry, the first test comes on Tuesday: A crime- and terrorism-focused subcommittee led by Republican Sen. Lindsey Graham will grill Colin Stretch, the general counsel of Facebook; Richard Salgado, the director of law enforcement and information security at Google; and Sean Edgett, the acting general counsel of Twitter.
On Wednesday, Facebook’s Stretch and Twitter’s Edgett will return to the Capitol and submit to two back-to-back sessions before the House and Senate Intelligence Committees. There, they’ll be joined by Kent Walker, the general counsel of Google.
Heading into the hearings, the tech giants have each pledged to improve their handling of political advertising — seemingly in a bid to stave off congressional scrutiny. Facebook and Twitter, for example, promised in October to conduct more manual review of those ads, along with greater disclosure as to who is paying for them in the first place.
Google followed suit on Monday, announcing plans to create a new database of election ads purchased on AdWords and YouTube, along with stronger disclosure rules and a new ad transparency report due in 2018. Google said it would also put in place new procedures to verify that advertisers running political ads are based in the U.S.
But lawmakers’ concerns aren’t limited to ads. Members of Congress are likely to press some tech executives on their handling of organic posts — the stories, status updates or other content published and shared on social media sites without cost. In many ways, this content is harder to identify, and at times it is impossible to regulate in a way that doesn’t trigger free-speech concerns.
At Facebook, for example, Russian trolls created 80,000 pieces of organic content between January 2015 and August 2017, the company plans to tell lawmakers at the hearings. About 29 million Americans saw those posts directly in their News Feeds over that period. And because those users liked, shared and followed the posts and pages, exposing them to their friends, a total of 126 million U.S. users might have seen at least some Russian-generated content, according to a source familiar with the findings.
On Instagram, meanwhile, Facebook deleted roughly 170 accounts tied to Russian trolls that posted about 120,000 pieces of content, the company plans to reveal in its testimony.
Taken together, those organic posts had a much greater reach than the 3,000 ads purchased by Russian agents on Facebook around Election Day. In October, the company provided key congressional committees with copies of the ads, which sought to sow social and political unrest around contentious topics, including immigration and Black Lives Matter.
Much like its peers, though, Facebook plans to stress to U.S. lawmakers that this activity represents only a fraction of what happens daily on its site. Russian-generated disinformation during the election amounted to about four-thousandths of one percent (0.004 percent) of content in the News Feed, according to a source familiar with the company’s findings.
Google, meanwhile, plans to tell Congress that it “found only limited activity on our services,” Walker and Salgado wrote in a blog post published ahead of the hearings.
Sources had initially flagged $4,700 in ad spending by Russia’s Internet Research Agency, and Google confirmed that figure on Monday. It also said the search and display ads were not targeted based on users’ geography or political preferences.
Its audit of YouTube, meanwhile, turned up 18 channels tied to Russian trolls, which had uploaded 1,108 videos. In total, those videos were viewed roughly 309,000 times in the U.S. between June 2015 and November 2016, and only about 3 percent of them had more than 5,000 views. The channels have been suspended.
Yet one of Google’s biggest challenges — much like Facebook’s and Twitter’s — is its handling of organic content, including videos uploaded by RT, a Russian government-funded news network that critics have called a propaganda arm of the Kremlin. RT’s videos have drawn millions of views on YouTube. In its investigation, however, Google said it “found no evidence of manipulation of our platform or policy violations.” As a result, Google said that RT and other state-sponsored media outlets remain “subject to our standard rules.”
Twitter, for its part, recently banned RT from advertising on its platform, though the publication is still allowed to tweet there. Facebook has announced no change.
Twitter plans to unveil two key findings of its own during its testimony to Congress, sources told Recode on Monday. Chief among them: Edgett, the company’s acting general counsel, will note that Twitter discovered and suspended 2,752 accounts tied to known Kremlin trolls.
Initially, Twitter pegged this number at about 200 accounts. And while the company at the time described it as an early estimate, it still faced sharp criticism from lawmakers like Sen. Mark Warner, who charged that the company hadn’t done an exhaustive investigation.
Twitter also studied election-related tweets sent between Sept. 1 and Nov. 15, 2016. Among a pool of 189 million such tweets, the company identified about 1.4 million sent by automated Russian-affiliated accounts.
By Twitter’s estimation, that’s less than three-quarters of a percent (about 0.74 percent) of all the election-related tweets sent on its service during that roughly two-and-a-half-month window, and Edgett will stress that those tweets “underperformed” in generating impressions compared with the average tweet. By contrast, Twitter noted that tweets from accounts such as WikiLeaks tended to draw significantly more engagement from Russian bots.