How Forcing TikTok To Completely Separate Its US Operations Could Actually Undermine National Security

from the when-government-demands-backfire dept

Back in August 2020, the Trump White House issued an executive order purporting to ban TikTok, citing national security concerns. The ban ultimately went nowhere — but not before TikTok and Oracle cobbled together “Project Texas” as an attempt to appease regulators’ privacy worries and keep TikTok available in the United States.

The basic gist of Project Texas, Lawfare reported earlier this year, is that TikTok will stand up a new US-based subsidiary named TikTok US Data Security (USDS) to house business functions that touch US user data, or which could be sensitive from a national security perspective (like content moderation functions impacting Americans). Along with giving the government the right to conduct background checks on potential USDS hires (and block those hires from happening!), TikTok committed as part of Project Texas to host all US-based traffic on Oracle-managed servers, with strict and audited limits on how US data could travel to non-US-based parts of the company’s infrastructure. Needless to say, Oracle stands to make a considerable amount of money from the whole arrangement.

Yesterday’s appearance by TikTok CEO Shou Zi Chew before the House Energy and Commerce Committee shows that even those steps, and the $1.5 billion TikTok are reported to have spent standing up USDS, may prove to be inadequate to stave off the pitchfork mob calling for TikTok’s expulsion from the US. The chair of the committee, Representative Cathy McMorris Rodgers of Washington, didn’t mince words in her opening statement, telling Chew, “Your platform should be banned.”

Even as I believe at least some of the single-minded focus on TikTok is a moral panic driven by xenophobia, not hard evidence, I share many of the national security concerns raised about the app. 

Chief among these concerns is the risk of exfiltration of user data to China — which definitely happened with TikTok, and is definitely a strategy the Chinese government has employed before with other American social networking apps, like Grindr. Espionage is by no means a risk unique to TikTok; but the trove of data underlying the app’s uncannily prescient recommendation algorithm, coupled with persistent ambiguities about ByteDance’s relationship with Chinese government officials, poses a legitimate set of questions about how TikTok user data might be used to surveil or extort Americans.

But there’s also the more subtle question of how an app’s owners can influence what people do or don’t see, and which narratives on which issues are or aren’t permitted to bubble to the top of the For You page. Earlier this year, Forbes reported the existence of a “heating” function available to TikTok staff to boost the visibility of content; what’s to stop this feature from being used to put a thumb on the scale of content ranking to favor Chinese government viewpoints on, say, Taiwanese sovereignty? Chew was relatively unambiguous on this point during the hearing, asserting that the platform does not promote content at the request of the Chinese government, but the opacity of the For You page makes it hard to know with certainty why featured content lands (or doesn’t land) in front of viewers.

Whether you take Chew’s word for it that TikTok hasn’t done any of the nefarious things members of Congress think it has — and it’s safe to say that members of the Energy and Commerce Committee did not take him at his word — the security concerns stemming from the possibility of TikTok’s deployment as a tool of Chinese foreign policy are at least somewhat grounded in reality. The problem is that solutions like Project Texas, and a single-minded focus on China, may end up having the counterproductive result of making the app less resilient to malign influence campaigns targeting the service’s 1.5 billion users around the world.

A key part of how companies, TikTok included, expose and disrupt coordinated manipulation is by aggregating an enormous amount of data about users and their behavior, and looking for anomalies. In infosec jargon, we call this “centralized telemetry” — a single line of sight into complex technical systems that enables analysts to find a needle (for instance, a Russian troll farm) in the haystack of social media activity. Centralized telemetry is incredibly important when you’re dealing with adversarial issues, because the threat actors you’re trying to find usually aren’t stupid enough to leave a wide trail of evidence pointing back to them.
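To make the idea concrete, here’s a toy sketch of what signal-based clustering over centralized telemetry might look like. Everything here — the account records, the signal names, the overlap threshold — is invented for illustration; real detection systems weight signals by rarity and use far richer data.

```python
# Toy account records. In a real system these signals come from
# registration, login, and client telemetry; these values are invented.
accounts = [
    {"id": "a1", "phone_cc": "+98", "asn": "AS111", "hashtags": {"#Debat2020"}},
    {"id": "a2", "phone_cc": "+49", "asn": "AS111", "hashtags": {"#Debat2020"}},
    {"id": "a3", "phone_cc": "+1",  "asn": "AS222", "hashtags": {"#Debate2020"}},
]

def cluster_by_shared_signals(accounts, min_shared=2):
    """Group accounts that share at least `min_shared` signals
    (proxy ASN, hashtags used, etc.). This sketch treats all
    signals as equally weighted, which a real system would not."""
    def signals(acct):
        return {("asn", acct["asn"])} | {("tag", t) for t in acct["hashtags"]}

    clusters = []
    for acct in accounts:
        placed = False
        for cluster in clusters:
            # Link into a cluster if enough signals overlap with any member.
            if any(len(signals(acct) & signals(m)) >= min_shared for m in cluster):
                cluster.append(acct)
                placed = True
                break
        if not placed:
            clusters.append([acct])
    return clusters

clusters = cluster_by_shared_signals(accounts)
```

In this invented data, the German-registered account clusters with the Iranian one because they share a proxy ASN and the same misspelled hashtag — the kind of link that only exists when you can see both records side by side.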

Here’s a specific example of how this works:

In September 2020, during the first presidential debate of the 2020 US elections, my team at Twitter found a bunch of Iranian accounts with an awful lot to say about Joe Biden and Donald Trump. I found the first few — I wish I was joking about this — by looking for Twitter accounts registered with phone numbers with Iran’s +98 country code that tweeted with hashtags like “#Debate2020.” Many were real Iranians, sharing their views on American politics; others were, well, this:

Yes, sometimes even government-sponsored trolling campaigns are this poorly done.

As we dug deeper into the Iranian campaign, we noticed that similar-looking accounts (including some using the same misspelled hashtags) were registered with phone numbers in the US and Europe rather than Iran, and were accessing Twitter through different proxy servers and VPNs located all over the world. Many of the accounts we uncovered looked, to Twitter’s systems, like they were based in Germany. It was only by comparing a broad set of signals that we were able to determine that these European accounts were actually Iranian in origin, and part of the same campaign.

Individually, the posts from these accounts didn’t clearly register as being part of a state-backed influence operation. They might be stupid, offensive, or even violent — but content alone couldn’t expose them. Centralized telemetry helped us figure out that they were part of an Iranian government campaign.
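The attribution step can be sketched the same way: starting from a seed of accounts already tied to the campaign, score unattributed accounts by how many signals they share with the seed. Again, all data and scoring here are invented for illustration.

```python
# Invented seed: accounts already attributed to the campaign.
seed = [{"asn": "AS111", "tags": {"#Debat2020", "#Vote2020"}}]

# Invented candidates: accounts that look European or American on the surface.
candidates = [
    {"id": "de1", "phone_cc": "+49", "asn": "AS111", "tags": {"#Debat2020"}},
    {"id": "us1", "phone_cc": "+1",  "asn": "AS333", "tags": {"#Election"}},
]

def overlap_score(cand, seed):
    """Count signals (shared ASN, shared hashtags) a candidate has in
    common with its best-matching seed account."""
    return max(
        int(cand["asn"] == s["asn"]) + len(cand["tags"] & s["tags"])
        for s in seed
    )

scores = {c["id"]: overlap_score(c, seed) for c in candidates}
```

The German-registered account scores high on overlap with the seed despite its +49 phone number — content and registration country alone would never have flagged it.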

Let’s turn back to TikTok, though:

TikTok… do a lot of this work right now, too! They’ve hired a lot of very smart people to work on coordinated manipulation, fake engagement, and what they call “covert influence operations” — and they’re doing a pretty good job! There’s a ton of data about their efforts in TikTok’s (also quite good!) transparency report. Did you know TikTok blocks an average of 1.8 billion fake likes per month? (That’s a lot!) Or that they remove an average of more than half a million fake accounts a day? (That’s also a lot!) And to their credit, TikTok’s state-affiliated media labels appear on outlets based in China. TikTok have said for years that they invest heavily in addressing manipulation and foreign interference in elections — and their own data shows that that’s generally true.

Now, you can ask very reasonable questions about whether TikTok’s highly capable threat investigators would expose a PRC-backed covert influence operation if they found one — the way Twitter and Facebook did with a campaign associated with the US Department of Defense in 2022. I personally find it a little… fishy… that the company’s Q3 2022 transparency report discloses a Taiwanese operation, but not, say, the TikTok incarnation of the unimaginably prolific, persistent, and platform-agnostic Chinese influence campaign Spamouflage Dragon (which Twitter first attributed to the Chinese government in 2019, and which continues to bounce around every major social media platform).

But anyway: the basic problem with Project Texas and the whole “we’re going to air-gap US user data from everything else” premise is that you’re establishing geographic limits around a problem that does not respect geography — and doing so meaningfully hinders the company’s ability to find and shut down the very threats of malign interference that regulators are so worried about. 

Let’s assume that USDS staff have a mandate to go after foreign influence campaigns targeting US users. The siloed nature of USDS means they likely can only do that work using data about the 150 million or so US-based users of TikTok, a 10% view of the overall landscape of activity from TikTok’s 1.5 billion global users. Their ability to track persistent threat actors as they move across accounts, phone numbers, VPNs, and hosting providers will be constrained by the artificial borders of Project Texas.
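Here’s what that constraint looks like in miniature, using the same toy signals as before (all invented): a coordinated pair split across the US and Germany is trivially linkable with global telemetry, and invisible from inside a US-only silo.

```python
# Invented records: one coordinated pair split across the US and Germany.
campaign = [
    {"id": "u1", "geo": "US", "asn": "AS999", "tag": "#Debat2020"},
    {"id": "u2", "geo": "DE", "asn": "AS999", "tag": "#Debat2020"},
]

def flagged_pairs(accounts):
    """Flag pairs of accounts sharing both a proxy ASN and a hashtag —
    a crude stand-in for coordinated-behavior detection."""
    return [
        (a["id"], b["id"])
        for i, a in enumerate(accounts)
        for b in accounts[i + 1:]
        if a["asn"] == b["asn"] and a["tag"] == b["tag"]
    ]

global_view = flagged_pairs(campaign)                                # full telemetry
us_silo = flagged_pairs([a for a in campaign if a["geo"] == "US"])   # USDS slice
```

The global view flags the pair; the US-only slice contains a single account and flags nothing. Scale that up to 1.5 billion users and the cost of the artificial border becomes clear.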

(Or, alternatively, do USDS employees have unlimited access to TikTok’s global data, but not vice versa? How does that work under GDPR? The details of Project Texas remain a little unclear on this point.)

As for the non-USDS parts of TikTok, otherwise known as “the overwhelming majority of the platform,” USDS turns US-based accounts into a data void. TikTok’s existing threat hunting team will be willfully blind to bad actors who host their content in the US — which, not for nothing, they definitely will do as a strategy for exploiting this convoluted arrangement.

USDS may seem like a great solution if your goal is not to get banned in the US (although yesterday’s hearing suggests that it may actually be a failure when it comes to that, too). But it’s a terrible solution if your goal is to let threat investigators find the bad actors actually targeting the people on your platform. Adversarial threats don’t respect geographic limits; they seek out the lowest-friction, lowest-risk ways to carry out their objectives. Project Texas raises the barriers for TikTok to find and disrupt inauthentic behavior, and makes it less likely that the company’s staff will be successful in their efforts to find and shut down these campaigns. I struggle to believe the illusory benefits of a US-based data warehouse exceed the practical costs the company necessarily takes on with this arrangement.

At the end of the day, Project Texas’s side effects are another example of the privacy vs security tradeoffs that come up again and again in the counter-influence operations space. This work just isn’t possible to do without massive troves of incredibly privacy-sensitive user data and logs. Those same logs become a liability in the event of a data breach or, say, a rogue employee looking to exfiltrate information about activists to a repressive government trying to hunt them down. It’s a hard problem for any company to solve — much less one doing so under the gun of an impending ban, like TikTok have had to. 

But, whatever your anxieties about TikTok (and I have many!), banning it, and the haphazard Project Texas reaction to a possible ban, won’t necessarily help national security, and could make things worse. In an effort to stave off Chinese surveillance and influence on American politics, Project Texas might just open the door for a bunch of other countries to be more effective in doing so instead.

Yoel Roth is a technology policy fellow at UC Berkeley, and was the head of Trust & Safety at Twitter.


Companies: oracle, tiktok, tiktok usds



