Legal Liability for the Actions of Artificial Intelligence
Artificial intelligence (AI), which has found a wide place in our daily lives, has also begun to enter the domain of law. Who will be held responsible when a robot vacuum cleaner leaks images of the people in a house, or when an AI-driven robot injures its human co-workers in a factory? Before analyzing the liability of artificial intelligence in such cases, we will examine the topic under headings that will help us answer these questions.
Concept of Artificial Intelligence
In general terms, artificial intelligence is intelligence found in and specific to machines; this type of intelligence does not exist in humans or animals[1]. The intelligence in question is created by humans and therefore does not occur in nature on its own. In terms of learning, artificial intelligence has been classified as 'machine learning' and 'deep learning' artificial intelligence[2]. Machine learning is the basic form of learning in artificial intelligence: such systems improve themselves through various algorithms (learning by trial and error). Deep learning artificial intelligence deals with more complex problems and big data. Doctrine offers various criteria for the personhood of artificial intelligence, but the level of advancement is crucial when it comes to personhood.

It is generally acknowledged that there are four types of artificial intelligence[3]. The first, Reactive AI, is the least advanced type; it reacts only to specific commands and is usually expert in a single task. Calculators, the weather apps on our phones and computer chess programs are examples. The second type, Limited Memory AI, is a slightly more advanced version of Reactive AI. It has a certain memory capacity and improves itself through trial and error, drawing inferences from the past experiences stored in that memory. The personal digital assistant applications on our phones operate on this principle. The third type is Theory of Mind AI. Artificial intelligence at this level can communicate with people, find a place in social life and empathize with humans. Droids such as R2-D2 and BB-8 in Star Wars, and today Sophia of Hanson Robotics, are examples. The last type is Self-Aware AI, the type closest to human beings: at this level the system is aware of itself as a human is. For now we only see this type of AI in films and series, such as Ava in Ex Machina.
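To make the idea of learning by trial and error more concrete, the following minimal Python sketch (our own illustration, not drawn from any of the cited sources; the options, success rates and parameters are invented for the example) shows a program that does not know in advance which of two options is better and learns it only from the feedback it receives:

```python
import random

# A minimal sketch of trial-and-error ("machine") learning.
# The program does not know which option is better; it estimates
# the value of each option only from the feedback it receives.

TRUE_SUCCESS_RATES = [0.3, 0.7]   # hidden from the learner (illustrative values)
estimates = [0.0, 0.0]            # the learner's current value estimates
counts = [0, 0]                   # how often each option has been tried

for step in range(1000):
    # Mostly pick the option that currently looks best, sometimes explore.
    if random.random() < 0.1:
        choice = random.randrange(2)
    else:
        choice = max(range(2), key=lambda i: estimates[i])

    # Feedback from the environment: success (1) or failure (0).
    reward = 1 if random.random() < TRUE_SUCCESS_RATES[choice] else 0

    # Update the running average estimate for the chosen option.
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("Learned estimates:", [round(e, 2) for e in estimates])
```

Deep learning systems refine their behaviour in a comparable feedback-driven way but over vastly more data and parameters, which is one reason their behaviour can become difficult to predict.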
The Concept of Personality in Law and Artificial Intelligence
In law, a person is defined as an entity capable of holding rights and obligations[4]. When we speak of the concept of a person, most people assume we mean a human being, but the needs of social life led the legislator to create legal persons, namely the 'association' and the 'foundation'[5]. An association is formed by a group of persons coming together to pursue a common, non-economic purpose, while a foundation is created by dedicating assets to a specific purpose. It should be noted that the legislator regulated personality under the 'numerus clausus' principle, so new types of person cannot be created at will.

In the light of this information, artificial intelligence cannot be defined as a person in law; it is an object of legal relations, not a legal actor. However, as technological advancement brings artificial intelligence into every aspect of daily life, the concept has been examined by jurists. In practice, artificial intelligence built on deep learning systems develops very quickly and can make unpredictable moves[6]. Even though its actions are unpredictable, we cannot hold the artificial intelligence itself liable, because it is not recognized as a person under current law; instead, liability falls on its creators, data providers, sellers and other parties. This situation may lead to unfair results. At this point, discussions about the concept of personality and the criteria for personality become more important. It is accepted that a few criteria must be met if artificial intelligence is to be defined as a person in law[7]: the ability to interact with its environment (the ability to communicate), having a life purpose, personal plans and consciousness, and the ability to live together with society. Although these criteria give us hints about the personality of artificial intelligence, relying on them alone may lead to wrong conclusions, because judging a basic form of artificial intelligence and a more advanced one by the same criteria would be misleading.

The biggest step forward in terms of liability was taken by the Legal Affairs Committee of the European Parliament on 27 January 2017 with a detailed report[8]. The most striking point in this report is the 'electronic person' approach[9]. The report accepts that although artificial intelligence is currently treated as an object rather than a subject of law, the more advanced it becomes the more unpredictable its behaviour becomes, yet it still cannot be held responsible for its actions because it is not defined as a person in law. While this may be acceptable for basic forms, more advanced algorithms cannot be treated in the same way. For this reason, the Committee proposed creating a new legal personality specific to artificial intelligence, the 'electronic person'. According to the report, the liability of artificial intelligence should be established according to its level of sophistication, and there must be a causal relationship between the harmful outcome and the conduct of the artificial intelligence. The responsibility attributed to the artificial intelligence would be determined according to its capacity and shared between the artificial intelligence and its creators.
In this context, the report envisages an insurance fund for the liability of robots, from which compensation would be paid, and recommends that the legal and political decisions on liability be made together with scientists and jurists working in this field.
Examples From Today
Saudi Arabian authorities announced that they had granted citizenship to Hanson Robotics' Sophia, making Sophia the first robot to hold citizenship just like a human being[10].
In Japan, authorities granted residency to an artificial intelligence chatbot called Mirai, making Mirai the first AI bot to be granted residency in Japan[11].
A regional Australian mayor took legal action against ChatGPT, claiming that the artificial intelligence spread false information about him when people asked it about him. According to the mayor, ChatGPT stated that he had been involved in a bribery offense and had confessed to it[12].
A similar situation arose between the author of the Game of Thrones series and ChatGPT. The author sued for copyright infringement, claiming that the AI had used the series as training data and improved itself by processing it[13].
Conclusion
As a consequence of the evaluations above, we should understand that, through technological development, artificial intelligence has become a vitally important actor in daily life. If we want a fair legal system, its responsibility should grow as its algorithms become more sophisticated. Sometimes a basic type of artificial intelligence and sometimes a more advanced one may be the actor in various problems. At present, artificial intelligence is not recognized as a person in law, so in these cases the producers, data providers, data sellers and anyone else involved in this chain bear responsibility for its actions. This may be fair for the actions of basic artificial intelligence, but the same cannot be said for the actions of more advanced artificial intelligence.
The examples above show that, in cases brought over the actions of artificial intelligence, responsibility for the accusations is borne by its developer; it is OpenAI that will face the claims in court. For this reason, even though there are still jurists who refuse to recognize artificial systems as actors of law, daily life and technological developments make it increasingly necessary to recognize them as such.
Sources
- Akkurt, S. S. (2019, June). Yapay Zekânın Otonom Davranışlarından Kaynaklanan Hukukî Sorumluluk. Uyuşmazlık Mahkemesi Dergisi, (13), 39-59. https://doi.org/10.18771/mdergi.581875
- Bak, B. (2018, July). Medeni Hukuk Açısından Yapay Zekânın Hukuki Statüsü ve Yapay Zekâ Kullanımından Doğan Hukuki Sorumluluk. Türkiye Adalet Akademisi Dergisi, (35), 211-232.
- Kara Kılıçarslan, S. (2019). Yapay Zekanın Hukuki Statüsü ve Hukuki Kişiliği Üzerine Tartışmalar. Yıldırım Beyazıt Hukuk Dergisi, (2), 363-389. https://doi.org/10.33432/ybuhukuk.599224
- BBC News Türkçe. "Game of Thrones'un Yazarından ChatGPT'ye telif hakkı davası". https://www.bbc.com/turkce/articles/crgxglx5xl9o (Last accessed: 24.04.2024)
- AA. "Avustralya'da, belde başkanı hakkında yanıltıcı bilgi veren ChatGPT'ye dava açıldı". https://www.aa.com.tr/tr/dunya/avustralyada-belde-baskani-hakkinda-yaniltici-bilgi-veren-chatgptye-dava-acildi/2864161# (Last accessed: 24.04.2024)
- Newsweek. "Tokyo: Artificial Intelligence 'Boy' Shibuya Mirai Becomes World's First AI Bot to Be Granted Residency". https://www.newsweek.com/tokyo-residency-artificial-intelligence-boy-shibuya-mirai-702382 (Last accessed: 24.04.2024)
- BBC News Türkçe. "Dünya'nın ilk robot vatandaşı Suudi Arabistanlı". https://www.bbc.com/turkce/haberler-dunya-41780346 (Last accessed: 24.04.2024)
- European Parliament. "Report with recommendations to the Commission on Civil Law Rules on Robotics". https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html (Last accessed: 24.04.2024)
- Wikipedia. "Yapay Zekâ". https://tr.m.wikipedia.org/wiki/Yapay_zek%C3%A2 (Last accessed: 24.04.2024)
[1] https://tr.m.wikipedia.org/wiki/Yapay_zek%C3%A2 (Last accessed: 23.04.2024)
[2] Bak, B. (2018). Medeni Hukuk Açısından Yapay Zekânın Hukuki Statüsü ve Yapay Zekâ Kullanımından Doğan Hukuki Sorumluluk. Türkiye Adalet Akademisi Dergisi, (35), 211-232.
[3] Bak, B. pp. 211-232.
[4] https://www.mevzuat.gov.tr/mevzuat?MevzuatNo=4721&MevzuatTur=1&MevzuatTertip=5 (Last accessed: 23.04.2024)
[5] https://www.mevzuat.gov.tr/mevzuat?MevzuatNo=4721&MevzuatTur=1&MevzuatTertip=5 (Last accessed: 23.04.2024)
[6] Bak, B. pp. 211-232.
[7] Kara Kılıçarslan, S. (2019). Yapay Zekanın Hukuki Statüsü ve Hukuki Kişiliği Üzerine Tartışmalar. Yıldırım Beyazıt Hukuk Dergisi, (2), 363-389. https://doi.org/10.33432/ybuhukuk.599224
[8] https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html (Last accessed: 23.04.2024)
[9] Akkurt, S. S. (2019). Yapay Zekânın Otonom Davranışlarından Kaynaklanan Hukukî Sorumluluk. Uyuşmazlık Mahkemesi Dergisi, (13), 39-59. https://doi.org/10.18771/mdergi.581875
[10] https://www.bbc.com/turkce/haberler-dunya-41780346 (Last accessed: 23.04.2024)
[11] https://www.newsweek.com/tokyo-residency-artificial-intelligence-boy-shibuya-mirai-702382 (Last accessed: 23.04.2024)
[12] https://www.aa.com.tr/tr/dunya/avustralyada-belde-baskani-hakkinda-yaniltici-bilgi-veren-chatgptye-dava-acildi/2864161# (Last accessed: 23.04.2024)
[13] https://www.bbc.com/turkce/articles/crgxglx5xl9o (Last accessed: 23.04.2024)