US, Britain, and EU sign first international AI treaty for responsible development

Sep 6, 2024
  • The AI Convention, adopted by 57 countries, aims to protect human rights and promote responsible AI innovation.
  • The Council of Europe, distinct from the EU, played a key role in drafting and negotiating the AI Convention.
  • The AI Convention complements the EU AI Act, providing a global framework for AI governance.

The European Union, the US, and the UK have signed the world’s first legally binding international treaty on artificial intelligence (AI) and related systems, known as the AI Convention.

Adopted in May after years of negotiations among 57 countries, the treaty aims to address the risks posed by AI while promoting responsible innovation.

While this development marks a significant milestone in global efforts to regulate AI, questions remain about its practical impact and enforcement.

Some experts argue that the treaty’s broad language and caveats could undermine its effectiveness.

AI treaty adopted by 57 countries


The AI Convention, the first of its kind, focuses on protecting human rights for those affected by AI systems.

This agreement is separate from the EU’s AI Act, which came into force last month and imposes strict regulations on AI development and deployment within the EU.

Negotiated by 57 countries, the AI Convention reflects a global commitment to ensuring AI technologies do not undermine fundamental values such as human rights and the rule of law.

The Council of Europe, an international organisation distinct from the EU, spearheaded the treaty. With a mandate to safeguard human rights, the Council has 46 member countries, including all 27 EU member states.

The treaty’s adoption follows years of discussions, beginning with a feasibility study in 2019 and culminating in the establishment of a Committee on Artificial Intelligence in 2022 to draft the text.

Experts call for stronger enforcement mechanisms


The AI Convention requires signatories to adopt or maintain legislative, administrative, or other measures to give effect to its provisions.

While the treaty’s primary focus is on ensuring AI systems align with human rights protections, critics argue that the broad language and numerous exemptions could limit its effectiveness.

Francesca Fanucci, a legal expert at the European Center for Not-for-Profit Law Stichting (ECNL), who contributed to the treaty’s drafting process, has expressed concerns about its enforceability.

Fanucci noted that the “formulation of principles and obligations” in the convention is “overbroad and fraught with caveats,” raising questions about legal certainty and effective enforcement.

One major criticism centres on the exemptions allowed for AI systems used for national security purposes and the perceived disparity in scrutiny between private companies and the public sector.

The treaty reflects an attempt to balance the need for innovation with the imperative to protect human rights and uphold ethical standards.

Britain’s justice minister, Shabana Mahmood, described the convention as a “major step” in ensuring AI technologies can be harnessed without eroding fundamental values such as human rights and the rule of law.

The UK government has indicated it will work with regulators, devolved administrations, and local authorities to appropriately implement the treaty’s new requirements.

Difference between the EU AI Act and the AI Convention


The newly signed AI Convention is distinct from the EU AI Act, which already imposes comprehensive regulations on AI systems within the EU’s internal market.

The AI Act categorises AI applications based on their risk levels—unacceptable, high, limited, and minimal risk—each with corresponding requirements for compliance, transparency, and governance.

In contrast, the AI Convention provides a framework for international cooperation and guidance, but with a broader set of principles that some argue lack specificity.

How will nations enforce the principles?


While the AI Convention has been hailed as a significant step towards a more regulated AI landscape, the criticism around its perceived loopholes and generalised principles suggests that further refinement may be necessary.

Fanucci and other legal experts argue that without more robust and clear provisions, the treaty may struggle to deliver meaningful protections against potential abuses of AI technologies.

The need for international cooperation in AI governance is evident, but the challenge lies in creating a legally binding framework that effectively balances innovation with accountability.

As AI technologies continue to evolve rapidly, the effectiveness of treaties like the AI Convention will likely depend on future amendments, more stringent guidelines, and the political will of its signatories to enforce them.

The treaty’s impact will largely depend on how signatory countries implement its provisions and address the criticisms raised. With the global AI landscape constantly evolving, adaptive and enforceable regulation will be crucial.

As the UK and other nations work towards embedding the treaty’s principles into national law, the effectiveness of this pioneering effort will be closely watched by policymakers, businesses, and civil society groups worldwide.