By Elisabeth Sullivan
Public Affairs, Account Executive
AI has been the topic of the year. From businesses and workers adapting their ways of working to tech giants and leading minds grappling with ethics and implications of the rapidly emerging and diverse technology, governments and policymakers worldwide have been racing to get ahead—or just keep up—with the dynamic landscape.
However, the fundamental questions of how, when, and how much to regulate have muddled the path, as different countries pursue different approaches, sparking competition over who will dominate the game. Not to mention the big question: why have countries decided to take steps towards regulation now, when AI technology has been in play for years?
The EU is finalising its AI Act, set to be the world’s first comprehensive AI legislation. The U.S.—home to some of the largest AI companies in the world—released an AI Executive Order seeking to establish new standards of safety. The UK, however, has decided against prescriptive regulation for now in a bid to “unleash innovation”, as set out in its AI white paper back in March. This is an interesting and somewhat controversial way for the UK to try to establish itself as a leader in the AI space, an important cornerstone of the Government’s overarching goal of becoming a technological superpower.
Instead of developing primary legislation which would address the short-term risks from “narrow” AI in our everyday lives, Prime Minister Rishi Sunak announced in June that he would be inviting world leaders, industry giants and civil society thought leaders to an AI Safety Summit to discuss the long-term, existential risks associated with “Frontier AI”.
Held at the iconic Bletchley Park—symbolising the UK’s historical significance in revolutionary technological innovation—the event aimed to encourage collaboration on managing the risks of Frontier AI, i.e. the “cutting edge” technologies which pose some of the greatest opportunities and potential threats.
On the first day, representatives from 28 countries including the U.S., China, Australia and the E.U. signed the new ‘Bletchley Declaration’, agreeing to work together on AI safety research by supporting an “internationally inclusive network of scientific research” and sustaining “an inclusive global dialogue”. Subsequently, leading tech firms—including the likes of Meta, Google and OpenAI—signed an agreement to allow regulators to test their latest AI products before releasing them for public use, marking another notable achievement.
However, the agreement is non-binding and the declaration is as light-touch as the UK’s approach to regulation, meaning no specific proposals have really been agreed. Rather, the main achievement of the summit has, in some ways, been to build momentum and kick-start an international discussion.
Somewhat curiously, the summit also openly demonstrated a dichotomy of international collaboration and competition, as seen with the U.S.’s announcement of its own new AI Safety Institute and executive order in the same week as the summit. Publicly, the two nations have of course agreed to work together on the issue, but it is hard not to view the U.S. announcement as slightly snubbing the UK’s attempt to establish itself as the global leader on AI.
That being said, one of the words most commonly used to describe the summit since it ended has been a “coup”—for bringing in China, one of the true leaders in AI, and having it sign the declaration; for receiving Elon Musk, who is as influential as he is controversial; and for simply achieving anything of note, despite concerns in the run-up that the absence of notable players, such as Macron and Trudeau, would hinder the summit’s objectives. And the trend has been set: next year, South Korea will host a virtual meeting, followed by the next AI summit in France.
The broader regulatory landscape
Sunak said in a speech preceding the summit that the UK would not “rush to regulate”. And this has been true—albeit drawing criticism for being too light-touch compared to the rest of the world and missing the chance to position the UK as a world leader.
Following the “pro-innovation” white paper published in spring of this year, which also launched a three-month consultation on the topic, the UK has hesitated to introduce any specific regulation or legislation and has yet to respond to the consultation.
The proposals would essentially empower and leverage the expertise of existing regulators to take a sector-specific approach to AI regulation, and to issue guidance to businesses setting out expectations about the use of AI within their remit. Additionally, rather than defining “AI” or “AI system”, the paper describes the concept with reference to the characteristics of adaptivity and autonomy. This was done to “future-proof” the proposed framework against new technologies and retain regulatory flexibility, though it has also drawn criticism for potentially leading to legal uncertainty.
However, the Government is facing pressure from industry, civil society and even within Parliament not to wait too long and risk falling behind the rest of the world. The focus of the summit has been to look at the big risks of future AI technology, which are fascinating, scary, and controversial, with very little mention of the more short-term, everyday risks which can arise from AI that is already in use.
The rise of ChatGPT in late 2022 and of widely accessible generative AI seemed to be the wake-up call that those who have been demanding greater restrictions for years were waiting for. At the same time, the discussion on whether—or how—to regulate AI recalls the advent of the internet. Governments have, over the past few years, been introducing new legislation to curb the power of tech companies—especially social media—to ensure that users retain rights to privacy and data protection, as well as to protect competition.
But this arose out of a significant backlash, as individuals and civil society pushed back against an unprecedented proliferation of harms. At a comparable stage in the internet’s development, it would have been challenging to predict the nature and extent to which the online world would come to dominate everyday life; experts believe AI will likewise gradually become a greater part of it. And harms have already arisen—from inaccurate facial recognition software to questions around data privacy and intellectual property.
The world is holding its breath for something good to come of these efforts. Over the past few decades, society has become acutely aware of how, alongside all of the wonderful opportunities, unrestricted technology has the potential to cause genuine harm. Therefore, establishing an international code is an essential step towards a cohesive environment in which governments and regulatory bodies can successfully hold companies to account.
The summit may have pushed countries to declare their positions, but the UK still has its cards in hand. If the country is going to create a domestic regime that can interoperate with those of other nations on business, trade, data protection, security, and other areas, perhaps waiting to see how the dice roll around the world is an intentional move. But what the UK Government seems to be missing is an awareness of the everyday harms that will be perpetuated in the meantime—such as biased facial recognition software.
Regulation or not, the UK seems to be swimming against a global tide, and whether this strategy will pay off in its goal to become an “AI superpower” is yet to be seen.