Welcome to this extra special edition of the Carnegie UK Trust Online Harms update, where we are focusing on the publication (at last!) of the Government’s draft Online Safety Bill (OSB).
Below, we provide links to our initial assessment of the draft and what might happen next, and we round up the Parliamentary and public debate and commentary, on all sides of the argument, from the last week or so.
Our next regular update, bringing together further developments along with our usual round-up of the latest news, research and developments relating to Online Harms policy in the UK and further afield, will follow in w/c 5th June.
What’s been happening?
The road to regulation
There’s no denying that the publication of the draft OSB is a significant milestone: 145 pages long, supported by 123 pages of explanatory notes and nearly 150 pages of impact assessment, all of which can be found here. The government’s publicity material was keen to emphasise the world-leading and history-making opportunities, as well as highlighting the measures that will protect freedom of expression and journalistic content; for example, in the press release and in Oliver Dowden’s op-ed in the Telegraph on the day of the Bill’s publication.
We now wait for the Scrutiny Office to confirm the appointment of the Chair and Members of the Joint Scrutiny Committee. While silence has descended in Parliament on speculation about the runners and riders, Oliver Dowden has confirmed that pre-legislative scrutiny (PLS) would last for 12 weeks, though there will be a pause for summer recess during its work.
Our initial observations
The draft Bill is a framework Bill, providing the legislative architecture from which a large number of pieces of secondary legislation, codes of practice and pieces of guidance will flow; and while it does describe a largely coherent regulatory approach, it is potentially very complicated. There is no overarching duty of care – the approach we have taken in our work over the past three years, drawing on the example of the Health and Safety at Work Act – but three separate thematic “safety” duties of care and three counterbalancing partial exemptions from those duties. Each of these is dealt with separately for user-generated content services and repeated in a different form for search engines, leading to much repetition and cross-referencing. The illegal harms are well described and strong; the harms to adults less so.
To be of relevance to the duties, content must meet certain thresholds. For criminal content, in addition to terrorism and CSEA offences, the threshold is that the offence has an individual as its intended victim, or that the Secretary of State specifies it as falling within scope. For other content, the threshold is that the content gives rise to psychological or physical harm: where this threshold sits is of great interest – if it is set too high then this part of the regime won’t bite – but it is only vaguely described in the Explanatory Notes and needs detailed scrutiny.
The powers of the Secretary of State go much further than expected. Part of this is over-reach (for instance ensuring OFCOM's guidance is in line with ‘government policy’), part is a function of using secondary legislation to give Parliament a say. This will also, doubtless, be a big focus of PLS.
The expected exclusions – described in December’s Full Government Response – largely remain: for example, disinformation that causes harms to individuals (say self-harm advice or anti-vaxx content) is covered; but a Russian disinformation attack during an election does not seem to be. A notable exception is the last-minute announcement (in the press release, but too late to be drawn through to the Bill), that online scams caused by user-generated content will be included. We can expect this to continue to be a contested area in the weeks and months ahead.
Our initial response on the day of the publication is here and Will Perrin has contributed to an article for CIGI comparing the UK proposals to those in Canada. We will publish more detailed analysis shortly. Meanwhile, useful commentary pieces from very different perspectives were published by Tech Crunch and, yes, Good Housekeeping, while this informed and detailed Regulate Tech podcast discussion, featuring Lord Richard Allan - formerly of Facebook and prior to that a Parliamentarian who sat on the scrutiny committee for the Communications Act 2003, which established Ofcom - is worth a listen.
What are others saying?
Children, young people and vulnerable groups:
Children’s rights campaigners 5 Rights had previously published their set of five tests for the Online Safety Bill, and their founder Baroness Kidron used her speech in the Lords debate to highlight the flaws evident in the design of the draft Bill, which “spends the bulk of its pages on rules that pertain to content. This undermines the stated ambition to tackle risk at a systemic level, as it leaves only cursory mention of the algorithms, functionalities and operating practices that drive user experience”. Supporting the Bishop of Oxford’s earlier intervention, Bns Kidron also called for the Government to move quickly and to bring forward minimum standards and codes of practice – on areas such as age assurance, safety by design, child impact assessments and algorithmic oversight – in order to protect children as soon as possible.
The NSPCC flagged concerns over the lack of action to address the cross-platform nature of abuse and the delay to bringing in the provisions to hold senior managers accountable until after the regulation has been implemented and reviewed. Their Chief Executive Peter Wanless said “this landmark piece of legislation risks falling short if Oliver Dowden does not tackle the complexities of online abuse and fails to learn the lessons from other regulated sectors. Successful regulation requires the powers and tools necessary to achieve the rhetoric. Unless Government stands firm on their promise to put child safety front and centre of the Bill, children will continue to be exposed to harm and sexual abuse in their everyday lives which could have been avoided.”
Catch 22’s response raised concerns over whether the Bill would keep up with fast-evolving technology; while Parentzone’s explainer queried whether Ofcom’s powers would be enough.
The Molly Rose Foundation gave the Bill a cautious welcome but called on the Government to avoid watering it down; the Samaritans’ view was that the draft Bill does not go far enough in protecting people from self-harm and suicide content, particularly if smaller platforms are out of scope; and the Epilepsy Society also cautiously welcomed the proposals, subject to seeing more detail on the scope for user protections.
Meanwhile, campaigners for greater protection for children from online pornography have criticised the draft Bill for lack of action to address the gap left by the proposed repeal of Section 3 of the Digital Economy Act, which would have brought in age-verification for porn sites. John Carr from the Children’s Charities’ Coalition of Internet Safety warned that many porn sites would be out of scope of the Bill, and Baroness Benjamin – who has long campaigned for age-verification and had recently sent an open letter to the Prime Minister calling for the reinstatement of the DEA provisions, backed by 60 co-signatories – said “the elephant in the room is what happens to protect children over the next three to four years … A seven-year-old will be a teenager before this new law comes into force.” The IWF has also criticised the omission in the Bill, citing increasing volumes of self-generated indecent images of children and the impact of pornography on their perceptions of healthy relationships. The Age Verification Providers’ Association pointed to the apparent signal from Oliver Dowden, during his appearance at the DCMS Select Committee, that he would be likely to accept amendments to the Bill in this area as it went to PLS.
Misinformation and disinformation
Prior to the publication of the OSB, Full Fact published 10 tests for the Bill to tackle misinformation, including the need for a code of practice on misinformation and suitable safeguards for freedom of expression. The lack of clarity in the draft OSB on misinformation and disinformation that causes societal or democratic harms is likely to be an area of significant focus when the Bill enters Parliament.
Consumer harms and scams
The coalition of consumer groups who, along with Carnegie UK Trust, had called the week previously for online scams to be included in the OSB scope had partial success: a government concession to include fraud caused by user-generated content was announced at the time of the Bill’s publication, but there was no mention of online, paid-for advertising. Initial responses here from CIFAS, PIMFA, Which?, while this coruscating commentary from Paul Lewis in the FT is worth a read. On the other side of the argument, the IAB UK – unsurprisingly – welcomed the exclusion of online advertising and, while looking forward to the promised consultation from DCMS later in the year, expected that amendments and changes to the Bill would follow as pressure continued to mount on DCMS.
The FCA’s Chief Executive, Nikhil Rathi, was pretty bullish in front of the Treasury Select Committee on the day of publication, in relation to the scope of the OSB and the links with online advertising: “We would like the online safety Bill, which is welcome, to have user-generated content subject to it in relation to fraud—we would like that extended to online advertising. We would like the financial promotions order exemptions to be looked at very seriously—the threshold and the self-certification”.
In response to questions, Home Office Minister Baroness Williams said in the Lords Queen’s Speech debate that “online fraud will be included in the scope of the Bill”; however, in the Commons, Digital Minister Caroline Dinenage set out a more nuanced, three-pronged approach: “working closely with law enforcement, technology companies and banks to tackle online fraud at source”, with the Home Office to publish “an ambitious fraud action plan”; the Online Safety Bill to “tackle any kind of fraud that is facilitated through user-generated content”; and then “consulting on tougher advertising regulation” later this year. We await further clarification on fraud from DCMS in due course.
Online hate, abuse and intimidation
In a joint op-ed in Grazia, Seyi Akiwowo (Glitch) and Danny Stone (Antisemitism Policy Trust) welcomed the draft Bill but pointed out there is a long way to go: “whilst regulated services are to ‘mitigate’ or ‘prevent’ children accessing illegal content, it is only to be ‘minimised’ for adults. For us grown-ups, most of the risks on these services are to be managed through Terms and Conditions set by companies, for which there is no minimum standard, and which in some cases are so minimalist that they have enabled hate to flourish”. They also flagged the potential impact of the exemptions for free speech, the focus on larger platforms while “services known to inspire hatred and lead to offline harm like 4Chan, 8Chan, Bitchute or Gab, escape what are necessary safeguards” and the complete omission of measures to tackle anonymous online abuse.
Hacked Off's view was that the exemption for news platforms would mean "safe havens for those responsible for racism, hate and abuse online" when posting below-the-line comments on online news stories.
Ellen Judson’s blog for Demos focused on the impact of the exemption for “content of democratic importance”: “cordoning off certain topics or viewpoints in a ‘bubble’ as ‘protected no matter how harmfully expressed’, is not a democracy-enhancing strategy. Democracy more urgently needs online spaces where people can speak out without fear of violence against them, and where disinformation campaigns are not able to run rampant to stoke division and fear. And violence and disinformation are often inherently linked to political debates”.
This recent analysis by Politico set up the context well for the response from free speech advocates and campaigners. And, despite the duties brought in by the Government to protect free speech, exempt journalistic content and protect “democratically important” content, free-speech advocates – in particular, the members of the Save Online Speech coalition, co-ordinated by Big Brother Watch – were unanimous in their criticism.
Global Partners Digital welcomed the move away from a single overarching duty and the focus on risk-assessed, proportionate assessments, but still warned of the risks to free speech from the very broad definitions of harm and the risks that the child protection safeguards will lead to entire platforms being moderated.
Writing in the Spectator, Big Brother Watch’s Mark Johnson claimed that the UK would become “a world leader amongst democracies when it comes to censorship and state control”, arguing that the exemptions apply only to journalists and politicians, thus “restoring the old gatekeepers of speech whilst bestowing upon the rest of us an online regime of restrictions and censorship”.
Open Rights Group called it a “Kafkaesque” plan: “Treating online speech as inherently dangerous and demanding that risks are eliminated under the threat of massive fines is only going to end up in over-reaction and content removal”; the IEA said the "draconian" OSB would give regulators "the power to censor virtually anything from Covid-sceptical content to re-runs of Muffin the Mule" and the Adam Smith Institute said the Bill was an “incoherent train wreck”.
The Free Speech Union welcomed the revisions and the introduction of exemptions since the Full Government Response but called it a “serious threat to online free speech” that “will need to be amended if the government is to avoid creating a censors’ charter”.
Meanwhile, in a detailed initial blog post, Graham Smith focused on what the definition of “a person of ordinary sensibilities” might mean in relation to determining the likelihood of “adverse physical or psychological harm” and identified a number of areas of “collateral damage” to free speech. He concluded that “the danger to legitimate speech arises from the misidentification of illegal or harmful content and lack of clarity about what is illegal or harmful”, warning too that it could become a “censors’ charter”.
To date, the major platforms have been largely silent in response – publicly at least – with very little by way of statements or reactions.
Industry organisations have given initial, cautious reactions. Tech UK’s response asked, amongst other things, for more clarity on where the thresholds lie between different categories of companies, which would be particularly important for companies that scale fast; Ostia (representing safety tech) commended the fact that the Bill was “pro-innovation”; and the Age Verification Providers Association set out their desire for “a well-regulated, independent, standards-based, open and competitive market for age verification [that] will allow websites, apps and platforms to know the age (but not the identity) of their users to a level of certainty proportionate to the risk of harm its content presents”. Tech Against Terrorism focused on the impact of the proposals on smaller platforms, which may be unable to comply with the range of obligations and duties, and called for more clarity on both the scope and the definitions.
The view from abroad: European and international commentary
You can catch up with Baroness Kidron’s evidence to the US Senate on protecting kids online here.
Consultations and inquiries
Ofcom consultation on its VSP guidance on dealing with harmful content: deadline 11th June.
Copyright (C) 2019 Carnegie United Kingdom Trust
Registered Charity No: SC 012799 operating in the UK, Registered Charity No: 20142957 operating in Ireland, Incorporated by Royal Charter 1917
You are receiving this email because we have previously corresponded about this area of our work or you opted in to receive updates.