Regulations on the AI Landscape in the Age of Chatbots

12.09.2023

Introduction

In today’s fast-paced age, chatbots, including advanced models like ChatGPT, have become ubiquitous tools. They serve functions ranging from writing poetry to solving math problems and even holding conversations with people. They represent a transformation in AI technology and help us complete complex tasks with great ease. However, they also pose serious dangers when their use goes unchecked.

Tech business leaders have been raising awareness about the potential dangers and risks of these chatbots.[1] OpenAI CEO Sam Altman and CTO Mira Murati also acknowledged these dangers in an interview with ABC News. In that interview, Altman emphasized that he was particularly worried about these models being used for large-scale disinformation. He added, "The model will confidently state things as if they were facts that are entirely made up."[2]

Fake News Generated by ChatGPT

According to The Washington Post, as part of a research study, a lawyer in California asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Jonathan Turley’s name was on the list. The chatbot said the incident had happened during a class trip to Alaska, citing a March 2018 article in The Washington Post. But no such article existed, there had never been a class trip to Alaska, and Turley said he had never been accused of harassing a student.[3] Both the cited article and the accusation were misinformation generated by ChatGPT.

The Washington Post’s article, together with other news reports such as the one published by NewsGuard,[4] showed that the abovementioned concerns are valid. Unsurprisingly, the need for legal regulation addressing AI-related risks is now on the agenda of countries that are active players in the international AI competition.[5]

Regulations on Artificial Intelligence

As AI technology advances rapidly, countries aiming to be active in the AI competition are trying to create a safe and predictable environment for artificial intelligence within their borders, both for entrepreneurs and for users, by producing strategy reports and regulatory drafts. However, they are adopting different national strategies against the risks related to AI.

The UK’s strategy is to establish a sector-focused, principle-based, regulator-led system, tasking existing regulators with determining the rules in light of principles such as safety, security and contestability.[6] Similarly, the U.S. federal government’s strategy for AI risk management is risk-based, sectorally specific, and highly distributed across federal agencies.[7] The EU, on the other hand, has decided to regulate AI through comprehensive legislation. To this end, it introduced overarching legislation at the EU level: the AI Act.

As the first set of rules for AI that specifically outlines requirements for developers of “generative AI” such as ChatGPT, a key concern for regulators, the AI Act may serve as a prominent model for regulating AI. Although several countries are developing their own responses, they will watch the AI Act closely when tailoring their approaches.

The EU Approach to Regulating AI and Generative AI

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology.[8] The European Parliament approved its negotiating position on the proposal on June 14, 2023, and the file will now proceed to the final “trilogue” stage.

As mentioned above, the AI Act takes a risk-based approach. Specifically, it categorizes AI systems into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.[9] Each category entails different obligations for providers and users, proportionate to the level of risk the system poses. Moreover, and importantly, specific provisions will apply to generative AI systems such as ChatGPT. These risk categories and the generative AI provisions can be summarized as follows:

1. Unacceptable Risk

AI systems posing an unacceptable risk are considered a threat to people and will be banned.[10] They include:

  • Cognitive behavioural manipulation of people or specific groups
  • Social scoring: classifying people based on behaviour, gender, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems[11]


2. High Risk

AI systems that negatively affect safety, health, the environment or fundamental rights will be considered high risk. They include:

  • AI systems falling into eight specific areas that will have to be registered in an EU database, such as education and vocational training, employment, worker management and access to self-employment.
  • AI systems used in products covered by the EU’s product safety legislation, including toys, aviation, cars, medical devices and lifts.[12]

3. Generative AI

With its differentiated approach (the classification system), the AI Act was not sufficient for determining which category generative models belong to, since they serve a wide range of functions. To address this, the European Parliament revised the AI Act by introducing specific rules on foundation models concerning data governance and copyright, as well as safety checks before any public release.[13] Accordingly, generative AI systems will be obligated to:

  • Disclose that the content was generated by AI
  • Design the model to prevent it from generating illegal content
  • Publish summaries of copyrighted data used for training[14]

4. Limited Risk

Limited-risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions.[15]

Conclusion

In today's AI-dominated era, particularly with chatbots, it is undeniable that technology is advancing rapidly. While authorities are actively trying to regulate these technologies, there is a crucial need to balance promoting innovation with establishing the necessary rules. The EU is a key player in this regulatory race, though the AI Act has some shortcomings, such as its classification system, which may complicate enforcement for the authorities. Nevertheless, the AI Act remains an important tool for governing AI and constitutes a milestone shaping the landscape of AI regulation. Yet it is only a starting point: as technology continues to evolve, regulators must remain open-minded and ready for new approaches.


[1] In this regard, see the Future of Life Institute open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[2] https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/story?id=97897122

[3] https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

[4] In this regard, see: https://www.newsguardtech.com/special-reports/newsbots-ai-generated-news-websites-proliferating/

[5] İlay Yılmaz, Can Sözer, Ecem Elver, “Current Developments on Artificial Intelligence: An Analysis in the Light of Actions Taken in the European Union and the United States”, DergiPark, 2021.

[6] https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[7] https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/

[8] https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[9] https://www.cnbc.com/2023/05/15/eu-ai-act-europe-takes-aim-at-chatgpt-with-landmark-regulation.html

[10] https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[11] Ibid.

[12] Ibid.

[13] https://www.iris-france.org/175800-generative-ai-defies-regulatory-attempts/

[14] https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[15] Ibid.
