New Bill Proposes Banning TikTok in the U.S.

The Committee on Foreign Investment in the United States (CFIUS), which screens foreign investments in the U.S. for national security risks, is, by multiple accounts, in conversation with the social media platform TikTok, which is owned by the Chinese tech giant ByteDance. Little is publicly known about these conversations, but media reports indicate that CFIUS and TikTok are discussing a deal that would address U.S. government security concerns while also allowing TikTok to keep operating in the U.S. without being sold off by its parent company.

As this saga plays out behind closed doors, some members of Congress are pursuing a different approach. On Dec. 13, Sen. Marco Rubio (R-Fla.), along with House members Rep. Mike Gallagher (R-Wis.) and Rep. Raja Krishnamoorthi (D-Ill.), introduced a new bill to ban TikTok and ByteDance from operating in the United States. There has been some bipartisan consensus around TikTok-related issues, but Democratic co-sponsorship is notable, as Republicans have on the whole been more vocal in calling for a complete ban. The members titled the bill the Averting the National Threat of Internet Surveillance, Oppressive Censorship and Influence, and Algorithmic Learning by the Chinese Communist Party Act—or the ANTI-SOCIAL CCP Act, for short. The bill’s stated purpose is “to protect Americans from the threat posed by certain foreign adversaries using current or potential future social media companies that those foreign adversaries control to surveil Americans, learn sensitive data about Americans, or spread influence campaigns, propaganda, and censorship.”

In several ways, the bill contributes to the risk assessment landscape around foreign technology companies. It defines terms such as “entity of concern” and establishes a list of criteria that would indicate a foreign social media platform is unduly subject to a foreign adversarial government’s control. It also lists countries of concern beyond China, including Russia, Iran, North Korea, Cuba, and Venezuela.

The bill has several concerning components. While it describes particular risks associated with foreign social media platforms—including a foreign government’s ability to compel a firm to hand over data or to modify its content moderation practices—it is not clear that its drafters are building a policy that allows for a more nuanced risk assessment translating into adaptable policy responses. Instead, the bill proposes a template, one-size-fits-all approach—a complete ban—for foreign social media companies identified as security risks under its criteria. Notably, the bill would also circumvent an executive action limitation meant to constrain the president from overreaching in banning information-related transactions.

All told, it is a noteworthy piece of legislation, and it distinguishes between the risk of data access and the risk of content manipulation better than then-President Trump’s executive order on TikTok did. But it raises many questions about how the U.S. government should approach concerns about foreign social media platforms and national security—and how the U.S. government should be able to respond to potential risks. In addition to clearly defining a problem and distinguishing between distinct security risks, a remaining imperative for legislators and policymakers is developing risk frameworks that are precise, nuanced, and compatible with a suite of tailored policy responses.

Invoking the International Emergency Economic Powers Act

This new bill has many provisions, but at its core, it directs the president of the United States to effectively ban certain foreign social media companies from operating in the United States—with ByteDance and TikTok named explicitly in the bill. Because the legislation is written more broadly, the ban could theoretically expand in the future to cover other foreign social media platforms deemed a risk to U.S. national security (the bill’s determination process for this status is discussed below). Calling for this action means rehashing one of the Trump administration’s most infamous, and failed, tech policy actions: attempting to ban TikTok in the United States.

The bill states that 30 days after the bill is enacted, the president will use their powers under the International Emergency Economic Powers Act (IEEPA) to “the extent necessary to block and prohibit all transactions in all property and interests in property of a social media company” as laid out in the bill. IEEPA, as explained by law professor Bobby Chesney, essentially allows the president to impose sanctions and embargoes on foreign entities when the president deems there is a “national emergency” with U.S. interests at stake. Trump invoked this authority in August 2020 when he issued executive orders to ban TikTok and WeChat in the United States. (Multiple courts subsequently overturned these actions, and the Biden administration withdrew the orders in June 2021.)

In this case, the bill would exempt the president from 50 U.S.C. § 1701 and § 1702(b) of the IEEPA, meaning that the president would not have to declare a national emergency before invoking IEEPA (which is required under § 1701)—and would not be constrained by § 1702(b)’s prohibitions on, among other things, regulating the import or export of information or informational materials. The former exemption is not surprising, because the bill’s very premise is that certain social media platforms pose risks to national security through their U.S. operations. The latter exemption is more notable. IEEPA is written explicitly to have limitations. By not constraining the president with 50 U.S.C. § 1702(b), the bill would circumvent IEEPA’s clear limitation on the president regulating or prohibiting, directly or indirectly, the importation or exportation “whether commercial or otherwise, regardless of format or medium of transmission, of any information or informational materials.” Social media platforms could arguably fall under this definition, which makes circumventing the limitation all the more significant and potentially concerning. If Congress does pass this bill, that would indicate a legislative belief that TikTok’s security risks make it necessary to bypass the IEEPA constraint.

According to the bill, for a social media company to qualify for this blocking and prohibition, it must satisfy at least one of the following (paraphrased) criteria:

  • The company is domiciled, headquartered, or has its principal place of business in or is organized under the laws of a “country of concern.”
  • A country and/or entity of concern directly or indirectly owns, “controls with the ability to decide important matters,” or holds 10 percent or more of the company’s voting shares or stocks.
  • The company uses software or algorithms that are controlled, or whose export is controlled, by a country or entity of concern.
  • A country or entity of concern can substantially, directly or indirectly influence the company to (a) share data on U.S. citizens or (b) modify its content moderation practices.

The bill explicitly states that ByteDance and TikTok are “deemed companies” that satisfy these criteria.
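
For readers who find the structure easier to see in code, here is a minimal, purely illustrative Python sketch of the bill’s any-one-criterion-suffices logic. Every name and field below is my own shorthand for the paraphrased criteria, not an implementation of the bill’s legal language:

    from dataclasses import dataclass

    @dataclass
    class Company:
        # Hypothetical flags paraphrasing the bill's four criteria.
        domiciled_in_country_of_concern: bool   # criterion 1: domicile, HQ, principal place of business, or organization
        concern_voting_share: float             # criterion 2: fraction of voting shares held by a country/entity of concern
        concern_controlled_software: bool       # criterion 3: software/algorithms controlled, or export-controlled, by a country/entity of concern
        can_be_compelled_to_share_data: bool    # criterion 4(a)
        moderation_subject_to_influence: bool   # criterion 4(b)

    def qualifies_for_prohibition(c: Company) -> bool:
        # Under the bill, satisfying any single criterion is enough.
        return (c.domiciled_in_country_of_concern
                or c.concern_voting_share >= 0.10
                or c.concern_controlled_software
                or c.can_be_compelled_to_share_data
                or c.moderation_subject_to_influence)

The design choice worth noticing is the disjunction: under this structure, a 10 percent voting stake alone triggers the same outcome as demonstrated government control over data or content moderation.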

Defining Undue National Security Risks (Though Not in Those Words)

On top of designating ByteDance and TikTok as deemed companies, the bill defines a “country of concern” and an “entity of concern” in a way that could enable additional, future applications of this IEEPA social media platform ban. Notably, the bill is not just looking to compel executive action against ByteDance and TikTok. It would also establish a set of definitions and criteria against which other foreign social media companies could be compared in the future. As U.S. policy around foreign technology companies, products, and services evolves, these contributions to the risk assessment landscape illuminate policymaker thinking on identifying and mitigating risks. The bill would also effectively set a precedent by which the president bans a foreign social media platform on national security grounds, using IEEPA but without some of its constraints.

To define a “country of concern,” the bill points to the term “foreign adversary” in the Secure and Trusted Communications Networks Act of 2019. According to that 2019 legislation, a foreign adversary is defined as “any foreign government or foreign non-government person engaged in a long-term pattern or serious instances of conduct significantly adverse to the national security of the United States or security and safety of United States persons.” In this respect, the bill explicitly names the People’s Republic of China (including the special administrative regions of Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela. These countries are also frequently named as foreign adversaries in U.S. policy documents and legislative proposals around risks to national security.

In turn, the definition for “entity of concern” covers a wide range of scenarios. This definition includes the armed forces, the leading political party, or a governmental body at any level in a “country of concern.” It also includes:

(D) an individual who is a national of a country of concern and is domiciled and living in a country of concern, and who is subject to substantial influence, directly or indirectly, from a country of concern; or

(E) a private business or a state-owned enterprise domiciled in a country of concern or owned or controlled by a private business or state-owned enterprise domiciled in a country of concern.

From a national security risk perspective, some of this appears reasonable. Of course, if an entity in question is literally part of the armed forces of a country of concern to U.S. national security, it creates or heightens the risk that said entity will use its access to or control over a technology platform to assist with its own country’s national security objectives. Many recent U.S. sanctions and other restrictions, for example, have targeted Russian and Chinese companies because of the ties they have to the Russian and Chinese militaries. If those militaries themselves had control over or substantial access to a social media platform, it would raise a number of important security questions. The same clear risk is present with entities that are part of a foreign political party or a foreign government, and this is especially true if that entity is a security agency.

The reference to individuals subject to “substantial influence, directly or indirectly, from a country of concern” likewise appears reasonable on its face. There are certainly foreign countries where law enforcement agencies or intelligence services are known to place intense, coercive pressure on individuals at technology companies to compel them to hand over information, to cooperate with the state on an ongoing basis, or even to send a message to company leadership. For example, in the fall of 2021, the Russian government demanded that Apple and Google delete opposition leader Alexey Navalny’s voting app from their app stores, ahead of nationwide Russian elections. When the companies refused, the Kremlin sent masked thugs to sit around the Google Moscow office with guns, the Russian parliament called in company representatives and gave them lists of local employees who would be hauled off to jail, and the Federal Security Service, Russia’s domestic security agency and the KGB’s successor, went to the home of the top Google executive in Russia and then chased her to a second location. Sure enough, both companies reversed course and complied with Moscow’s demand.

A definition that includes “substantial influence” speaks to real, high-risk intelligence and law enforcement activities in countries like Russia and China. Nonetheless, the proposal raises the question of how broadly this definition should be applied. There are many different scenarios in which it is arguably possible that a foreign government could exert substantial influence over someone at a foreign tech company. But possible does not equal probable, and part of a risk assessment is working to identify scenarios in which a harmful outcome is more likely. There are millions of people working in China’s technology sector, for example—which means that a policymaker assessing the risk of substantial Chinese government influence over a person or company must think through how to distinguish between higher-risk and lower-risk scenarios. It is not clear from the bill whether its authors have such a framework in mind. Taken literally, though, the bill’s definition here suggests that the mere possibility of a foreign government’s substantial influence over a foreign person or social media company—when the company operates in the United States at a certain scale—is enough to compel a U.S. national security action.

Additionally, the last part of this definition, referring to private businesses in countries of concern, speaks to a broader policy question that U.S. policymakers must attempt to answer. State-owned enterprises are one risk category. They are controlled by a foreign government, and the government is clearly and actively involved in managing the enterprise. This direct state ownership also suggests the company would be more cooperative with that state’s law enforcement and intelligence agencies than an enterprise with no state affiliation, even one with limited room to push back. However, the inclusion of private businesses in this list raises the question of whether some policymakers—such as Rubio and Gallagher—perceive that a private technology company in China can exist in the global market at all without creating undue national security risks. I do not have the answer to this question. And it seems many policymakers don’t, either. The relationship between economic security and national security policy is a point of frequent debate, as is the relationship between technological protection and economic security. Yet the new bill puts front and center the importance of policymakers articulating some kind of position on this question: Is a private technology company’s existence in China sufficient to create an undue national security risk?

Other definitions in the bill put some constraints on its scope. For example, a “social media company” as defined in the bill is scoped to companies that have more than 1 million monthly active users “for a majority of months during the preceding 12 months.” The fact remains, though, that by this bill’s definition, any private business that is “domiciled in a country of concern” (or owned by another private business that is) would be considered an “entity of concern.” WeChat, the subject of the Trump administration’s second executive order on a foreign platform, is not explicitly listed in the bill alongside TikTok and ByteDance. The company reportedly has millions of active U.S. users, but it is not entirely clear whether WeChat would fall under the bill’s definition of a social media company.
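
As a toy illustration of that scoping provision, with entirely hypothetical user figures, the monthly-active-user test might be checked like this:

    # Toy check of the bill's scope: more than 1 million monthly active users
    # "for a majority of months during the preceding 12 months."
    monthly_active_users = [1_200_000] * 7 + [900_000] * 5  # hypothetical figures

    covered = sum(m > 1_000_000 for m in monthly_active_users) > len(monthly_active_users) / 2
    print(covered)  # True: 7 of the 12 months exceed the threshold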

Distinguishing Between Distinct Security Risks

As I have written previously for Lawfare, a persistent problem with the Trump administration’s TikTok executive order and other, subsequent proposals around foreign apps is the blurring together of distinct security risks. With TikTok, for instance, one can imagine several different risks that shape the security landscape, including the risk of data collection on U.S. government employees, the risk of data collection on non-government-employed U.S. individuals, the risk of TikTok censoring information in China at Beijing’s behest, the risk of TikTok censoring information beyond China at Beijing’s behest, and the risk of disinformation spreading on TikTok. In this bill, the top-line statement describes several risks associated with foreign social media companies: the governments that influence those companies surveilling Americans, learning sensitive data about Americans, and spreading influence campaigns, propaganda, and censorship.

The ANTI-SOCIAL CCP Act does a somewhat better job of articulating risks than the Trump executive order. Its top-line statement lists risks, but they are clustered together and not clearly defined. For example, the supposed difference between a social platform “surveilling Americans” and a social platform “learning sensitive data about Americans” is unspecified. Perhaps the distinction is driven by the term “learning”—in one interpretation, suggesting that the former means gathering raw data, and the latter means using algorithms to derive sensitive information about people—but that is not clear. The top-line statement also clusters together “spreading influence campaigns, propaganda, and censorship,” which similarly should be broken out.

The bill improves on this later, when describing the aforementioned four criteria for a company to be of concern. The bill clearly breaks out (a) the risk of a company sharing or being compelled to share data with a government or entity of concern and (b) the risk of a company having its content moderation practices substantially influenced by a government or entity of concern.

This is important, because failing to clearly distinguish between alleged security risks is a problem for several reasons. First, the risks are different. A foreign government requiring a company to hand over data on particular foreign users is different from that government using the platform to algorithmically push pro-regime content—which is also different from that government requiring the platform to take down regime-critical speech, and so on. Articulating a security rationale requires distinguishing between these different risks. And the more the U.S. government conducts reviews of foreign investment, technology, data, and other issues, the more important that articulation becomes: it allows public scrutiny of decisions, signals accountability to the companies involved, and helps minimize the risk that decisions are made politically, without substantive national security justifications.

Second, failing to properly distinguish between the risks suggests a failure to conduct a rigorous risk assessment. This is not necessarily to say there is no risk associated with TikTok’s widespread use in the United States, for example, but to say that risk is a matter of likelihood (how likely a scenario is to happen, contingent on factors like an actor’s opportunity, capability, and intent) and severity (how bad it would be if said scenario happened). The likelihood of a foreign government requiring a foreign platform to hand over large data sets could be different from the likelihood of that government requiring that platform to continuously censor content. Breaking out the risks allows for a more granular analysis. It also enables analysts to create, if applicable, a priority order of risk. Perhaps one risk is far more likely than the others, and the response should thus be designed around addressing that outsized risk. In this way, the bill does a better job than Trump’s TikTok order in clearly separating out, in the list of its four prohibition criteria, the risk of government-compelled data access and the risk of government-compelled content manipulation.
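
To make concrete what breaking out the risks enables, consider a toy scoring exercise in Python. The risk categories loosely track the ones discussed above, but every likelihood and severity value is an invented placeholder on a 1-to-5 scale, not an actual assessment:

    # Score each distinct risk separately, then rank them into a priority order.
    risks = {
        "compelled handover of U.S. user data":        {"likelihood": 3, "severity": 4},
        "censorship of content inside China":          {"likelihood": 5, "severity": 2},
        "censorship of content outside China":         {"likelihood": 2, "severity": 4},
        "algorithmic promotion of pro-regime content": {"likelihood": 3, "severity": 3},
    }

    def score(risk):
        # A simple likelihood-times-severity product; a fuller framework would
        # also weigh an actor's opportunity, capability, and intent.
        return risk["likelihood"] * risk["severity"]

    for name, risk in sorted(risks.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(f"{score(risk):>2}  {name}")

Even with made-up numbers, the exercise shows why a monolithic “TikTok risk” obscures more than it reveals: the ranking, and therefore the appropriate response, can differ for each risk.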

Perhaps most importantly, part of distinguishing between risks in policy is linking specific mitigation actions to specific risks, and this is one place where the bill could be greatly improved. If the proposed policy solutions are not calibrated to the risks, they may not achieve the desired results. Take the example of a complete ban on TikTok in the United States (temporarily setting aside speech and other concerns). That action would not affect every possible security risk in the same way. There is no other major Chinese-based social media platform in the U.S. with TikTok’s reach. If a ban were instituted, it would certainly change the risk landscape vis-à-vis content censorship, because American citizens would not be able to use TikTok in the United States. Similarly, it would arguably change the risk landscape vis-à-vis TikTok algorithmically promoting content favorable to the Chinese government.

However, that same action (a ban) would not meaningfully change the data risk landscape. TikTok does collect volumes of data on its users (much like every other social media platform), but the United States’ incredibly weak data privacy and security regulations mean a vast amount of information on Americans—from political preferences and demographic information to real-time GPS data and data on military personnel—is widely available for purchase on the open market. The Chinese government has many vectors through which it can gather data on Americans, including data brokers, software development kits, real-time bidding networks for online ads, and more—not to mention scraping and hacking. Put simply, banning TikTok won’t protect Americans’ sensitive data. Just because a policy action could work for one risk or set of risks does not mean it works for them all.

Nowhere does the bill allow for particular actions to be taken in response to particular risks. Instead, it proposes what is effectively a template, static response—a complete ban—for any foreign platform operating in the United States that meets the listed security criteria. This raises the question of whether a one-size-fits-all approach to distinct content moderation, data privacy, and other risks is most appropriate and most sustainable over the long term. One could imagine, for example, a different policy framework with a spectrum of possible responses to foreign platform security risks, such as a ban in cases where risk mitigation measures are deemed wholly insufficient—and in others, some kind of middle ground that imposes a set of tailored content or security requirements on a company.
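
A sketch of what that spectrum might look like, again with invented thresholds and category names of my own, could map each assessed risk to a tailored response rather than a single static outcome:

    from enum import Enum, auto

    class Response(Enum):
        NO_ACTION = auto()
        TRANSPARENCY_REQUIREMENTS = auto()  # e.g., content moderation audits
        SECURITY_REQUIREMENTS = auto()      # e.g., data handling and access controls
        FULL_BAN = auto()                   # reserved for risks mitigation cannot address

    def choose_response(risk_score: int, mitigable: bool) -> Response:
        # Invented thresholds, purely for illustration
        # (risk_score as a likelihood-times-severity product).
        if risk_score < 6:
            return Response.NO_ACTION
        if mitigable:
            return (Response.TRANSPARENCY_REQUIREMENTS if risk_score < 12
                    else Response.SECURITY_REQUIREMENTS)
        return Response.FULL_BAN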

Conclusion

The Biden administration, to its credit, had begun to move away from the Trump administration’s dysfunctional and legally overturned approach to TikTok. On June 9, 2021, Biden signed Executive Order 14034, entitled Protecting Americans’ Sensitive Data From Foreign Adversaries. The order revoked Executive Orders 13942 (the so-called TikTok ban) and 13943 (the so-called WeChat ban). It also revoked Executive Order 13971, which Trump signed on Jan. 5, 2021, just before leaving office, to prohibit U.S. persons from engaging in transactions with the Chinese apps Alipay, CamScanner, QQ Wallet, SHAREit, Tencent QQ, VMate, WeChat Pay, and WPS Office, citing concerns they could “permit China to track the locations of Federal employees and contractors” and “build dossiers of personal information” by gathering data that Beijing could access. Importantly, the June 2021 Biden executive order stated explicitly that “the Federal Government should evaluate these threats through rigorous, evidence-based analysis and should address any unacceptable or undue risks consistent with overall national security, foreign policy, and economic objectives, including the preservation and demonstration of America’s core values and fundamental freedoms.”

Simultaneously, CFIUS and other security review bodies appear to be conducting more and more reviews, even as concerns are raised about national security creep in cross-border investment reviews. CFIUS’s reported conversations with TikTok are another example of how the interagency committee might pursue mitigation agreements that allow a particular company to describe how it has addressed security risks, rather than outright forcing companies to undo transactions. Questions about national security creep are valid and especially important to ask in a democracy. In tandem, the individuals involved in those reviews should have a great interest in transparency, accountability, and targeted risk assessment. After all, a risk framework that says the risk to the United States is the same for every single company in a given country (such as China) does not help identify the most urgent cases for review—or the places where action is not needed and could unnecessarily consume limited security review resources.

This is why this bill is so significant. Not only does it propose to reattempt the Trump administration’s ban on TikTok, concerningly sidestepping IEEPA’s limitations along the way, but it also lays out a set of definitions and criteria that could be applied to other foreign social media platforms in the future. Security concerns about foreign technology companies are clearly not going away, which makes it all the more imperative for legislators and their staff to design substantive, nuanced risk assessment frameworks that help distinguish between real security risks and situations where risks are conflated and responses are not properly tailored.