Amid concerns about the spread of false information, the CMA will examine the underlying systems of AI tools
The Competition and Markets Authority (CMA) in the UK has initiated a review of the artificial intelligence market, citing concerns over the potential dangers posed by AI tools that could lead to the propagation of false or deceptive information. This announcement follows a global trend of regulatory bodies increasing their scrutiny of AI technology. The CMA has specified that it will investigate the underlying foundation models of AI tools, including ChatGPT.
Kamala Harris, the Vice President of the United States, has invited the chief executives of leading AI firms, including OpenAI (the maker of ChatGPT), Microsoft, and Alphabet (Google’s parent company), to the White House on Thursday to discuss measures for addressing the safety concerns surrounding AI technology.
The US Federal Trade Commission, responsible for regulating competition, has recently announced its increased focus on how AI technology, particularly new generative AI tools, could affect consumers. Meanwhile, the Italian data watchdog lifted the temporary ban on ChatGPT last week after OpenAI resolved concerns regarding data usage and privacy.
Last week, hundreds of millions of pounds were wiped off the market value of UK education firm Pearson after US-based Chegg cut its financial forecasts and warned of ChatGPT’s impact on customer growth. Against this backdrop, the CMA noted the potential of the large language models that underpin chatbots like ChatGPT, and of generative AI tools like Stable Diffusion, to revolutionize many aspects of personal and business life.
Sarah Cardell, the CMA’s chief executive, acknowledged the potential benefits of AI for UK businesses and consumers, but she also emphasized the importance of protecting people from misinformation. Both ChatGPT and Google’s competing service, Bard, have been known to return inaccurate information in response to user queries. In addition, NewsGuard, a group that combats misinformation, has raised concerns about nearly 50 AI-generated “content farms”, websites on which chatbots pose as journalists.
As part of its review, the CMA will examine how markets for foundation models could evolve, along with the opportunities and risks they present for competition and consumers. It will also develop guiding principles to promote competition and safeguard consumers.
The CMA has been tasked by government officials to evaluate how AI development and implementation can align with five core principles: safety, transparency, fairness, accountability, and the ability for new entrants to compete with established players in the AI industry.
According to Verity Egerton-Doyle, the UK co-head of technology at Linklaters law firm, the CMA has an opportunity to take a leading role in the global discussion on AI-related issues. Egerton-Doyle stated that it was not surprising that the CMA is looking at AI, as it has been evident for some time that the agency is interested in understanding the role of competition law in this field. The review will likely examine whether AI should be a criterion for designating firms as having strategic market status, and therefore subject to bespoke regulations under the UK’s upcoming Digital Markets, Competition and Consumers Bill.