Turkish Law Blog

AI Personhood: A Futuristic Approach

Gizem Alper / Elisabeth Haub School of Law at Pace University
11 October, 2018

AI is becoming more autonomous by the day and is acquiring more and more cognitive abilities. It cannot be denied: AI has already become a part of society. Since ethics and legal rules are the underlying ingredients of a functioning society, where does AI fit into all of this? This is a dilemma being tackled by many: philosophers and lawyers, tech experts and visionary leaders.

One thing that is clear is that the current set of rules is inadequate and needs to be revised – fast and with a futuristic mindset. One of the revolutionary concepts already “out there” is to grant AI personhood. In other words, AI would be accepted as a separate entity, entitled to acquire rights and assume liabilities. If AI will – or perhaps already has – come to a point where it can act autonomously and make its own decisions and judgments, then why not grant it personhood?

This has already been discussed by the European Parliament, which issued a report on February 16, 2017 with recommendations to the Commission on Civil Law Rules on Robotics. The report suggests that a specific legal status for robots could be created and that they could be deemed “electronic persons”. In the US, on the other hand, Shawn Bayern [The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems, 19 Stan. Tech. L. Rev. 93, 101 (2015)] has argued that there may be a possibility for AI to acquire legal personhood under current US law.

Although AI may acquire cognitive abilities and perhaps even personhood, it is designed to serve a purpose and – at least for now – has an owner. But if AI becomes extremely powerful, how can we control its behaviour and assign legal accountability? In other words, what can be done if AI becomes highly autonomous and acts against the interests of its owner and the purpose for which it was designed?

I believe that the good old doctrine of “ultra vires” could be at your service for such purposes. Ultra vires, a Latin term meaning “beyond the powers”, has been used in many legal fields, from constitutional law to corporate law. Ultra vires could now be applied to numerous aspects of AI – from biased algorithms to AI’s contractual relationships. It may serve as an “umbrella principle” for tackling legal issues in which AI acts highly autonomously and “exceeds its powers”.

Imagine a robotic AI tasked with negotiating and signing contracts on your behalf. It starts to act highly autonomously, and suddenly you find yourself party to a contract you never intended to enter into. Or, in a different scenario, what if you end up being a party to a contract that objectively and substantially harms you financially? What is the way out? An adapted interpretation of the ultra vires doctrine under corporate law may be the tool that allows you to declare such contracts null and void.

One other major issue surrounding AI is privacy. With “help” from big data, AI is acquiring personal, confidential and sometimes proprietary information. There are privacy laws in place to tackle unlawful use. But are these rules adequate when it comes to misuse or accidental disclosure of such information by AI itself? If privacy laws do not provide protection in a given scenario, ultra vires can be an option for redressing harm: if AI were to reveal or use such information out of context, it could be said to have exceeded its powers. Where privacy laws cannot provide redress and liability cannot be attributed to the creator, the good old ultra vires may be the legal tool for redress – either the creator or AI itself may be liable, depending on the individual circumstances.

All in all, if some sort of personhood were to be attributed to AI, AI could be a party to legal claims. However, one problem still to be tackled is the limit of its liability; in other words, how and to what extent will AI redress damages? Will it be through an insurance scheme, as recommended by the European Parliament, or through some sort of “capital” similar to that of a corporation? But who knows – maybe AI will one day have the power to acquire its own assets and thus have its own finances, much like a human being.
