
U.K. government purges “legal but harmful” provisions from its revised Online Safety Bill


By Mark MacCarthy

On November 28, the U.K. government announced major changes to its Online Safety Bill. The legislation has languished in Parliament since June, and the changes are intended to smooth its passage. The proposed draft amendments, released two days later, demonstrate just how difficult it is for governments to regulate harmful online content, even in a nation where free speech protections are more limited than those guaranteed by the First Amendment in the United States. The proposed revisions would:

  • Strip the provisions related to material that is legal but harmful to adults.
  • Provide mechanisms for users to avoid exposure to certain material defined in the bill, such as hate speech or encouragement of self-harm.
  • Criminalize the encouragement of self-harm, nonconsensual “deepfake” pornography, and “downblousing.”
  • Require social media companies to remove content only if it is illegal or violates their publicly announced standards, to have in place systems to enforce their publicly announced standards, and to provide an appeals process for users whose content has been removed.

These draft changes are a mixed bag. The due process and transparency measures are all to the good. The requirement that companies actually enforce the content standards they announce is also a valuable consumer protection measure. On the other hand, the changes weaken the bill’s tough stand against harmful online material while maintaining a problematic requirement for social media companies to take certain steps in connection with material the government itself has identified. Merely keeping harmful material out of the feeds of people who do not want to see it is obviously an ineffective way of protecting the public from the effects of information disorder. Moreover, by imposing a new duty not to act against online material unless it violates a company’s published standards, the bill might make it more difficult for companies to respond to new online speech challenges until after the damage has been done.

The U.K. government has forwarded its package of amendments to the relevant committee of the House of Commons, which is considering them in a process that started on December 5. Further amendments are possible during this legislative process, which should take a couple of months. The U.K. government expects the bill to be passed out of the House of Commons in January.

The Background

Some press reports suggested that the references to “legal but harmful material” were targeted for removal because the bill required social media companies to “stamp out” this material even though it remained perfectly legal under U.K. law. But this interpretation is a misreading of the earlier version of the bill.

The earlier bill did require the Secretary of State to designate categories of content that would be considered harmful to adults. The fact sheet accompanying the bill noted that these categories might include abuse, harassment, and exposure to content encouraging self-harm or eating disorders, as well as misogynistic abuse and disinformation. Parliament would have had to approve the Secretary of State’s designations.

The earlier bill would also have required companies to conduct risk assessments in connection with such material, take one of four specified steps in dealing with it (including the possibility of leaving it on their systems), and describe in their transparency reports how they treated this material.

Under the earlier bill, platforms that chose to carry legal but harmful material would have been required to develop “systems or processes,” available to users, designed to “reduce the likelihood” that the user would encounter harmful content or to “alert the user” to the harmful nature of the material.

The Revisions

Various groups, including some senior Conservative officials and some free speech groups, objected to the very existence of a government-defined category of “legal but harmful speech,” even if the platforms were not explicitly required to remove this material. The message, they felt, was clear enough: The government wanted this material limited or removed from social media, even though it could legally be carried in other media such as books, newspapers, or magazines. Apparently, this concern was enough to hold up the bill.

As a result, the U.K. government’s newly proposed amendments would deprive the Secretary of State of the power to define legal but harmful material and would remove all duties related to that content, including risk assessments, coverage in transparency reports, and the requirement to take one of four specified measures in connection with the material.

But the proposed amendments retain the duty of user empowerment, requiring companies to adopt and maintain measures that allow users to control their exposure to certain categories of information. The bill explicitly defines these categories, which include material relating to suicide, deliberate self-injury, or eating disorders, as well as abuse or incitement of hatred toward people because of their race, religion, sex, sexual orientation, disability, or gender. Moreover, Ofcom, the traditional media regulator that will enforce the bill, must produce guidance containing examples of the content that the agency thinks is included (or not included) in each of these categories and is thus subject to the user-empowerment requirement.

The U.K. government’s announcement about the new amendments is misleading in its sweeping statement that “the Bill will no longer define specific types of legal content that companies must address.” The new amendments explicitly mention certain types of legal content that social media companies must address under the duty to provide user empowerment. Under these proposed amendments, social media companies have no duty to provide users with tools to shield themselves from controversial political speech, for instance, but they do have such a duty with respect to hate speech. This suggests that some legal speech is more worthy than other legal speech in the eyes of the U.K. government. The free speech advocates who objected to the role of the Secretary of State in defining “legal but harmful material” in the older version of the bill will not be happy with this new statutory designation of certain legal speech as requiring special user-empowerment measures.

The government also intends to add measures to the bill that would criminalize material encouraging users to commit self-harm. This change was introduced in reaction to the case of 14-year-old Molly Russell, who died in 2017 after viewing harmful online material. Despite this criminalization measure, Molly’s father, Ian, objected to the amendment removing measures related to legal but harmful content, as did the opposition Labour Party. Lucy Powell, Labour’s culture spokesperson, said this would give “a free pass to abusers.”

The U.K. government also announced its intention to criminalize nonconsensual “deepfake” pornography and “downblousing.” This criminalization measure would cover explicit images taken without someone’s consent through hidden cameras or surreptitious photography, as well as explicit images or videos that have been digitally manipulated to depict someone without their consent. These changes should make law professor Danielle Citron happy. Her latest book calls for exemptions from Section 230 of the Communications Decency Act for revenge porn.

The new amendments include measures designed to promote speech, including a duty “not to act against users except in accordance with terms of service.” Under this new provision, companies “will not be able to remove or restrict legal content, or suspend or ban a user, unless the circumstances for doing so are clearly set out in their terms of service.” They will, however, still be allowed to remove content that is against the law. The new amendments also contain a requirement for an “effective right of appeal” when a user’s post has been removed or limited.

In addition, as described in the government’s announcement, the new amendments have a further consumer protection measure. When social media companies set out their content rules, they must “keep their promises to users and consistently enforce their user safety policies.” If a company outlaws “racist and homophobic abuse or harmful health disinformation,” for instance, then it must have in place systems and processes to “tackle” this banned content. The new bill retains its enforcement mechanism, allowing Ofcom to fine companies up to 10% of their annual turnover.

A Mixed Bag

The prospects for the bill at this point are not clear. Despite opposition from the Labour Party, its chances of moving forward have improved. But the changes have disappointed many who hoped for a more coherent and forceful approach. U.K. journalist Chris Stokel-Walker speaks for many when he calls the revised bill “a beacon of mediocrity.”

I think that’s an overly harsh judgment, but there is something feckless about the bill’s fundamental approach of allowing disinformation, hate speech, and racism to flourish online provided only that social media companies find a way to keep this material out of the feeds of people who don’t want to see it. It is not as though we’ll be able to protect ourselves from the harmful effects of the online information disorder by cultivating willful ignorance of its existence.

In addition, the U.K.’s new direction does not include two important measures I recommended in an earlier TechTank commentary. The first, a provision for researcher access to social media company data, is vital to verify whether any of the other measures are doing any good and to discover other ways to address harmful content online. The second, a provision for greater involvement of civil liberties groups, would go a long way toward ensuring that government overreach or collusion with the regulated industry is kept in check.

Parliament still must approve these new measures and will be able to add provisions of its own during its consideration over the next several months. There is adequate time to reconsider some of the problematic measures still in the Online Safety Bill as proposed by the U.K. government and to add some vitally needed provisions for researcher access and civil society involvement.
