How should we regulate AI?

As the risks as well as the opportunities of artificial intelligence are increasingly debated, the author compares and contrasts the UK’s and EU’s current proposed approaches to its regulation
19th June 2023 | Hannah Gardner

Recent developments in artificial intelligence (AI) have had a huge global impact. There is a heightened understanding of both the risks and the opportunities the technology presents for our society. Regulators have a responsibility to balance the need to protect society against the risks associated with AI while continuing to encourage innovation. Closer to home, the developments present us with the perfect opportunity to compare the key differences between the UK’s and the EU’s current proposed approaches to regulating AI, and the possible challenges and benefits of each.

What are the risks?

Most governing bodies agree that there is potential for harm to society as a consequence of irresponsible use of AI, and that such harm should be mitigated with appropriate rules and regulations.

To help understand which areas could be impacted, the major values which AI threatens were set out in detail in the Recommendation of the Council on Artificial Intelligence of the OECD (Organisation for Economic Co-operation & Development), adopted by member countries in 2019. They include:

  • human rights;
  • fairness (including potential for bias and discrimination);
  • safety (damage to both physical and mental health);
  • privacy;
  • security;
  • societal wellbeing (including threat to democracy).

Both the UK and EU approaches to regulating AI have these values at their core.

Two approaches

The UK Government’s AI White Paper, published in March 2023, sets out guidance for existing regulators, with the aim of supporting innovation while still addressing key risks. The paper suggests that the Government may place the principles on a statutory footing in the future, requiring regulators to follow them, but it is not currently introducing new legislation.

This is a marked contrast to the EU AI Act, which is currently under discussion in the European Parliament and aims to be the world’s first comprehensive AI regulatory framework, built to protect individuals and establish trust in AI systems.

Here we will explore the five most interesting differences between the two frameworks.

How should AI be regulated?

The UK Government is taking a broad, principles-based approach, covering:

  1. safety and robustness in the assessment and management of risk;
  2. transparency and explainability – a consumer should understand when AI is being used and how it makes decisions;
  3. fairness – AI should not discriminate or create unfair market outcomes;
  4. contestability and redress – there should be a mechanism to change or reverse harmful decisions made by AI; and
  5. accountability and governance.

You may be familiar with these principles – they are based on the OECD Principles, which have also influenced data protection laws and are intended to ensure consistency and flexibility across the industry.

Many may, however, prefer the clarity of the EU’s prescriptive framework, which sets its position in legislation and covers AI throughout the life cycle of a system, from the data it is trained on through testing, validation and risk management to post-market supervision.

Moving into the detail, the EU Act will classify AI systems into four levels of risk: unacceptable, high, limited and minimal.

With a nod to our above values, “high risk” AI includes that which could harm health, safety, fundamental rights or the environment. Providers of one specific class of system, generative foundation models (such as GPT), would need to disclose that content has been generated by AI, and publish summaries of the copyrighted data used to train their models.

AI which poses an unacceptable level of risk to safety will be prohibited, for example predictive policing, emotion recognition, social scoring and real-time public biometric identification systems.
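
To make the tiered structure concrete, here is a minimal sketch in Python of how a compliance team might triage systems against the four categories. The tier assignments and use-case labels are illustrative assumptions drawn from the examples above, not the Act’s legal tests.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels proposed in the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before and after market entry"
    LIMITED = "transparency duties"
    MINIMAL = "no additional obligations"

# Illustrative assignments only, drawn from the examples in this article;
# real classification turns on the Act's legal definitions and annexes.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric identification": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnosis": RiskTier.HIGH,  # could harm health/safety
    "customer service chatbot": RiskTier.LIMITED,    # must disclose AI use
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Default unknown systems to HIGH so they get the most cautious review."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case}: {tier.name} ({tier.value})")
```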

In contrast, the UK is not currently proposing to prohibit any specific form of AI.

How centralised is the approach?

While the EU will be putting obligations on everyone, both users and developers of AI, the UK is placing the responsibility to follow its guidance on our regulators, recognising that certain kinds of AI technology can be used in different ways with varying levels of risk. The UK therefore looks to monitor the specific uses of AI, rather than the technology itself.

To understand this in practice, let’s consider facial recognition, which as a population we are generally comfortable with in the context of securely logging into our iPhones. However, we would have concerns for our privacy should the same technology be used for broad public surveillance. Regulating facial recognition in the surveillance context, rather than regulating the technology itself, is the UK’s outcome-based approach in action.

To do this, the UK proposes to leverage the expertise of existing regulators to apply the guidance to their own sectors, such as financial services, human rights, healthcare and broadcasting. The intent is that existing regulators such as the Information Commissioner’s Office, Financial Conduct Authority, Medicines & Healthcare products Regulatory Agency, Competition & Markets Authority, Equality & Human Rights Commission, and Ofcom are best placed to take a “proportionate approach” to regulating AI.

The UK does recognise that there is a risk of diverging approaches, and so proposes an AI Regulation Roadmap which will provide guidance for regulators on how best to collaborate, and will monitor and coordinate the implementation of the UK’s principles.

The EU is not taking the sector-specific approach, and instead intends to create a prescriptive horizontal regulatory framework around AI to capture all use cases. The newly developed European AI Board will oversee member states, which will each nominate their own regulatory bodies to ensure the laws are enforced. This arguably gives more clarity for industries assessing whether or not they are following the rules, but could lack the nuance needed to measure proportionately the damage an AI system can do in a specific context.

How is it to be overseen?

The EU proposes a new European AI Board to oversee the implementation of the AI Act and ensure it is applied consistently across the EU.

The UK Government has not ruled out the creation of an independent body in the long term, but is not currently establishing a new AI regulator, instead relying on central government support functions and expertise from industry. The white paper argues that a new regulator could stifle innovation, whereas many will take comfort from the EU’s unified board to guide them.

How to define AI?

This is no easy task. The EU has taken the approach of drafting an overarching definition. Recent AI developments, however, mean that proposals are already being made to amend the definition to ensure that some new models (such as those underpinning ChatGPT) are captured, which suggests that it may already be too narrow and lack the adaptability to stand the test of time.

In contrast, the UK’s white paper presents a non-statutory definition of AI, to be measured on its adaptability (how the system is trained and how it learns) and its autonomy (how much human control is involved). Separate regulators will be relied upon to interpret the definition, which risks inconsistency, and its breadth could allow other types of technology to be captured. However, like the principles, it is designed to be high level and flexible enough to adapt to future technological advancements.
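
As a loose illustration only, the white paper’s two-part test could be recorded as a simple assessment along those two dimensions. The field names, scores and threshold below are entirely hypothetical and are not drawn from the paper itself.

```python
from dataclasses import dataclass

@dataclass
class DefinitionAssessment:
    """Hypothetical record of the white paper's two defining characteristics;
    the 0-1 scale and threshold are illustrative, not from the paper."""
    adaptivity: float  # extent to which the system is trained and learns
    autonomy: float    # extent to which it operates without human control

    def likely_in_scope(self, threshold: float = 0.5) -> bool:
        # A system exhibiting both characteristics to a meaningful degree
        # would plausibly fall within the white paper's working definition.
        return self.adaptivity >= threshold and self.autonomy >= threshold

print(DefinitionAssessment(adaptivity=0.9, autonomy=0.7).likely_in_scope())  # True
```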

How to deal with liability?

What has drawn the attention of many is the EU AI Act’s proposal of fines of up to €30 million or 6% of annual worldwide turnover, whichever is higher, exceeding the maximum fines for GDPR breaches. The EU AI Liability Directive (a non-contractual civil liability mechanism) and the EU Product Liability Directive (rules for redressing harm caused by defects in products which integrate AI systems) are being developed to underpin the Act.
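
A rough worked example of the penalty arithmetic, assuming the “whichever is higher” cap mechanism familiar from the GDPR, shows how exposure scales with turnover:

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Cap for the most serious breaches under the proposed EU AI Act:
    EUR 30m or 6% of worldwide annual turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """GDPR's equivalent cap: EUR 20m or 4% of turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

turnover = 1_000_000_000.0  # a company with EUR 1bn annual turnover
print(f"AI Act cap: EUR {max_ai_act_fine(turnover):,.0f}")  # EUR 60,000,000
print(f"GDPR cap:   EUR {max_gdpr_fine(turnover):,.0f}")    # EUR 40,000,000
```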

The UK’s view is that it is too early to say how liability should be managed. Instead, penalties will be dealt with at a sectoral level. This avoids an additional overarching liability regime for industries to be cognisant of, although two companies could face different outcomes for breaching the same principles, depending on which regulator oversees them.

What’s next?

The UK AI White Paper consultation is open until 21 June 2023, following which the Government intends to issue its response and AI Regulation Roadmap. There are many risks which the white paper has not covered (such as ownership of IP and control of data), so we can expect to see more white papers on these issues. The UK has acknowledged that it may need to adapt its regulatory approach as the technology evolves, so we could even see something closer to the EU framework here in the future.

Meanwhile, the EU AI Act faces its plenary vote this summer, with final approval expected by early 2024. The Act’s implementation will be significant, and impacted organisations will have a grace period of two years to ensure compliance with the rules. Any services used in the EU which rely on the output of an AI system will be caught by the EU Act, so not only is the EU’s framework potentially setting a global precedent, it will also have extraterritorial impact, as many companies (including UK ones) will need to follow the EU rules.

Many businesses are leaning towards the UK’s flexible approach, which gives more breathing space for innovation, while others prefer the clarity and security the EU approach will provide the industry. There is likely no one perfect approach as elements of both work well, and we will continue to watch as both frameworks move to their next stages and beyond. It will be fascinating to see how future AI developments impact these approaches, and how the industry reacts.

The Author

Hannah Gardner is a legal counsel (Outsourcing, Technology & IP) with The Royal Bank of Scotland, part of the NatWest Group

