Turkish Law Blog

The Prospects for Successful Regulation in the Public Interest for Artificial Intelligence

Seda Ilik / Ernst & Young
12 October, 2018

1.     REGULATION

 

1.1. Regulation and its Reasons to Exist

 

Although the term ‘regulation’ has been defined in a number of ways, it generally denotes a set of rules made through state intervention, that is, regulation as a form of governmental activity.[1] The term ‘regulation’ can also be used in other senses: as deliberate state influence,[2] as all forms of social and economic influence,[3] or as a specific set of commands.[4]

The existence of different meanings and uses of ‘regulation’ can make the term difficult to pin down, creating confusion both in defining regulation and in finding the best term for each case. For the purposes of this paper, regulation is used in the widest sense, to refer to the control of stakeholders. This control may be exercised by government, by code, by self-regulatory standards or through contractual relationships.[5]

One reason to regulate is to prevent so-called market failure,[6] where the market fails to address a certain issue in a way that may harm the public interest.[7] Regulation also serves to protect human rights and promote social values, thereby ensuring public trust.[8] Moreover, it can help an economic sector develop and grow more efficiently.[9]

In short, regulation is generally viewed through its objectives and is used to achieve goals considered important for society.

Having defined the term ‘regulation’ and the reasons behind it, the next step is to analyse what makes regulation successful.

 

1.2. Successful Regulation

 

Many experts have tried to formulate criteria to ensure the efficacy and efficiency of regulation. One such criterion is utilitarian: regulation is successful when it maximises the wealth and well-being of society.[10] The utilitarian criterion alone is not enough; other criteria must also be met. First, there must be a necessity for the state to regulate, such as the correction of market failure or the promotion of human rights. The regulators should be socially and legally legitimate, able to conduct the regulatory process and to represent all stakeholders. The regulation should state its objective clearly; the rules must be consistent, so as to promote legal certainty, and focused on the issue at stake, while remaining flexible enough to stay effective in changing circumstances. Finally, the entire regulatory process must be conducted transparently and be open to the public. These processes must be fair, clear and capable of creating public confidence.[11]

The regulator should recognise that some players have more political, economic and social power than others and shape the regulatory framework in a way that reflects that power balance; otherwise regulation may fail. ‘The public interest is not always well represented by the government or corporate interests especially in as dynamic and generation dividing a set of technologies.’[12] For example, companies may not invest in “friendly AI” initiatives because doing so would largely benefit third parties rather than the company itself.[13] For this reason, multi-stakeholder governance is always necessary.[14] The interests of weaker players also need to be taken into account adequately if regulation is to succeed. Moreover, to avoid what Marsden describes as a ‘Potemkin’ regulator, regulatory rules should be meaningful to those who are regulated and devised in an appropriate way. In the case of technology, ‘the regulatory responses must be cognizant of that technological reality’.[15]

 

2.     PROSPECTS FOR SUCCESSFUL REGULATION IN THE PUBLIC INTEREST FOR ARTIFICIAL INTELLIGENCE

 

This part starts by defining AI and then sets out the current regulatory approaches of the UK, the EU and the US, before critically assessing them. In its conclusion, the paper proposes that a regulatory approach in which governments play a leading role in a multi-stakeholder effort may be the best way to achieve successful regulation in the public interest.

 

2.1. What is AI?

‘Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry’.[16] It is reshaping our lives, our environment and our interactions. Unfortunately, there does not yet appear to be a widely accepted definition of AI, even among experts.[17]

As Black notes, ‘the definitional chaos is almost seen as an occupational hazard by those who write about regulation’.[18] Any regulatory regime should define what exactly it is that the regime regulates; in our case, AI should be defined.[19] This paper does not propose a new definition of AI but instead briefly discusses the definitional difficulties that regulators have to consider.

Concepts of AI have shifted over time, and today AI tends to be defined from the perspective of its objectives: the concept of machines that work to achieve goals by acting rationally.[20]

Scherer defines AI for the purposes of his paper: ‘artificial intelligence refers to machines that are capable of performing tasks that, if performed by a human, would be said to require intelligence’.[21] In addition to Scherer’s definition, this paper uses the term ‘AI’ to refer also to tangible technology, encompassing both software and hardware components.

 

2.2. What is the Current State: To Regulate or Not to Regulate?

In April 2018, the European Commission and, in the UK, the House of Lords’ Select Committee released their reports on AI, considering in particular the economic, ethical and social implications of advances in AI. The US has remained silent since the White House Office of Science and Technology Policy released its report on AI in 2016, under the Obama administration. Since it is unclear what will happen to that report’s findings under the Trump administration, I will only briefly touch on the 2016 report, to reflect the US approach to regulating AI at least as taken by the Obama administration, before assessing the UK and EU reports.

I examine how each report frames its approach to regulating AI. The reports address various social, ethical and economic topics, identifying specific areas of concern and values to be upheld, which is quite positive, but each ultimately ends with the drafting of an ethical code. This raises the question: is an ethical code enough? The analysis concludes that, although ethics has a vital role to play, we may need legally enforceable rights to grapple with AI.

 

2.2.1.     The US Report

The report treats self-regulation as the key approach, assigns government a quite limited regulatory role, and states that broad regulation of AI would not be appropriate at this time; instead, AI should be incorporated into existing regulation.[22] Cath et al think that ‘the report seems to be trying to fit new round pegs into old square holes.’[23]

 

2.2.2.     The UK Report

On 16 April 2018, the House of Lords’ Select Committee on AI released the UK report on AI, drawing on 223 pieces of written evidence and 57 pieces of oral evidence. The report aimed to ‘consider the economic, ethical and social implications of advances in artificial intelligence’.[24]

The Government announced new AI-related bodies: the AI Council, the Government Office for AI, the Centre for Data Ethics and Innovation and the Alan Turing Institute.

The Government Office for AI should act as the co-ordinator between these bodies.[25] Industry, academia, users and developers would be represented in the AI Council.[26] The Centre for Data Ethics and Innovation is intended to be a “world-first advisory body” which would review the current “governance landscape” and advise the Government on “ethical, safe and innovative uses of data, including AI”.[27] This is a quite positive step, which could bring together government, industry, civil society and the research community to establish a multi-stakeholder agreement. The Alan Turing Institute, for its part, will become the national research centre.[28]

The report records three different recommendations from witnesses on regulating AI: some argue that no new AI-specific regulation is required, some argue that regulation is needed immediately, and some recommend a more cautious, staged approach to regulation.[29]

Those who suggest that no new regulation is needed commented that the existing law is sufficient and can be applied to the development and use of AI.[30]

Some claim that immediate action and regulation are needed in order to avoid unintended consequences, but they did not make clear what form of regulation should be considered.[31]

Others suggest a staged and more cautious approach, with a large element of co-regulation and standard-setting.[32]

Consequently, the Committee suggests that blanket AI-specific regulation would be inappropriate at this stage and that existing sector-specific regulators are best placed to analyse what regulation, if any, their sectors may need. There is a role for the Government Office for AI, together with the Centre for Data Ethics and Innovation, in identifying any existing legislative gaps.[33]

The report also states that many of the concerns regarding personal data will be addressed by the Data Protection Bill and the GDPR. It places emphasis on data protection impact assessments as a new regulatory tool, on the view that they will help controllers meet individuals’ expectations of data protection by assessing the risks to individuals before a processing activity begins.[34] However, as controllers are not necessarily in a position to assess such risks, reliance on data protection impact assessments to protect the fundamental rights of individuals raises complex questions.[35]

Many technology giants have released their own ethical guidelines for the use of AI, but the report notes that ‘there is a lack of wider awareness and co-ordination, where Government could help’.[36] This shows that the UK report establishes a clear role for government in the process. The Committee suggests the introduction of a cross-sector ethical code of conduct, or ‘AI code’, suitable for implementation across public and private sector organisations. This code is to be drawn up and promoted by the Centre for Data Ethics and Innovation, with input from the AI Council and the Alan Turing Institute, with a degree of urgency. In some cases, sector-specific variations will need to be created, using similar language and branding.[37]

The AI Code part of the report starts by emphasising the importance of public confidence.[38]

However, it is questionable whether an ethical code will be sufficient to build public trust. Proponents of regulation argue, by contrast, that public trust in emerging technologies is directly affected by the amount of regulation in place, citing the aviation industry as an example.[39]

The language used in the ‘AI Code’ part of the report conveys a sense of urgency. Finally, the report states that “In time, the AI code could provide the basis for statutory regulation, if and when this is determined to be necessary.” This is an important approach: it gives government a role in long-term strategy, instead of leaving the field to industry and getting involved only when new legislation is proposed. The Committee criticises the UK’s lack of clear short- and long-term objective-setting for policies in the AI field.[40] Perhaps this staged approach can be seen as a response to that criticism. It remains to be seen how the UK will apply these values when proposing new legislation.

 

2.2.3.     The EU Report

In October 2017, the European Council invited the Commission to put forward a European approach to artificial intelligence.[41] On 10 April 2018, 25 European countries signed a Declaration of Cooperation on Artificial Intelligence.[42] On 25 April 2018, the EU Commission published its Communication on Artificial Intelligence.[43]

The EU report opens by stating its approach: ‘The way we approach AI will define the world we live in.’[44] From the regulatory perspective, this is a much-needed approach to promoting the public good and building the world we wish to develop. The report states that the EU can be the champion of an approach to AI that benefits people and society as a whole.

The EU embraces AI on the basis of the Union’s values: respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights.[45] By grounding its approach in these values, the EU seems more likely to achieve successful regulation in the long term. The values help to develop a better vision for handling the new challenges that AI brings, such as responsibility, liability and the need for cooperation, and they provide the desired ethical foundation.

When setting out its initiative on ensuring an appropriate ethical and legal framework, the report starts from the EU values set out in Article 2 of the Treaty on European Union[46] and the EU Charter of Fundamental Rights, then points to the Union’s high standards in terms of safety and product liability as well as its strong rules on the protection of personal data. It also stresses the importance and urgency of a series of proposals under the Digital Single Market, such as the e-Privacy Regulation and the Cybersecurity Act, since citizens and businesses alike need to be able to trust the technology they interact with.[47]

The Commission also calls for a European AI Alliance to be established, to develop draft ‘AI ethics guidelines’ by the end of 2018.[48] This clearly shows that an ethical framework is the primary aim in both the UK and the EU. The guidelines will look at the impact on fundamental rights, including privacy, consumer protection, dignity and non-discrimination, as well as addressing issues of fairness, security, safety, algorithmic transparency and social inclusion.[49]

The report states that, while self-regulation can provide a first set of benchmarks and existing legislation may do the job for now, the Commission will monitor developments and, if necessary, review existing legal frameworks to better adapt them to specific challenges, with due respect for the Union’s values and fundamental rights.[50] The EU Commission thus seems to have arrived at the same conclusion as the UK Select Committee.

Further, the Commission suggests that intellectual property issues should be reviewed with a view to fostering innovation and legal certainty in a balanced way. It also states that it is currently assessing whether the safety and liability frameworks are adequate in the light of new AI technology; some evaluations have already been conducted, and the Commission will examine whether legislative change is necessary.[51] It remains to be seen how the Commission will respond to these findings.

 

3.     SUCCESSFUL AI REGULATION

Even if they take different views on the specific values that would steer AI in the right direction, both reports emphasise several common values: transparency, intelligibility, accountability, minimising bias, public education and the importance of research. Both identify the impacts of AI on the economy, national security, warfare, education and diversity. Finally, both conclude that ethical oversight is necessary to achieve wider awareness and co-ordination.

An ethical framework is of course needed, but it should complement legislation. Regulators should distinguish the role of ethics from that of the law. While proponents of ethics believe legislation is too slow to keep up with technology, proponents of legal regulation argue that ethical codes do not adequately represent the interests of weaker players and are not legally binding.

As can be seen from the approaches set out in the reports, governments do not intend to regulate AI for now; they rely on the idea that existing legislation is sufficient and that some patchwork can be done if needed.

They recognise the huge impact of AI and state that human rights are at risk; yet they remain uncertain whether regulation has a reason to exist. They acknowledge the unfitness of the current rules, but none of them decides whether new legislation is needed.

If human rights are at risk, there is a reason for regulation to exist. On the premise that regulation does have a reason to exist, this chapter assesses the steps towards successful regulation in the public interest.

The reports also say that it is too soon to regulate. It is not too soon, because ‘AI is not merely another utility that needs to be regulated only once it is mature; it is a powerful force that is reshaping our lives, our interactions, and our environments.’[52] Once it is regulated, it can be better addressed by each sector.

One question is whether AI should be subject to omnibus regulation or to sector-specific regulation. In the former, regulation would centralise the rules to deal with all forms of AI; in the latter, each sector would enact its own rules. AI is a highly complex subject, drawing on fields such as computer science, mathematics and information technology, and once its applications are considered it involves many more fields of expertise, from the natural to the social sciences. Because of this fragmented nature, centralised regulation of AI is the wiser choice. Moreover, because the applications of AI are so wide-ranging, sector-specific regulation would probably result in serious inconsistencies, divergent solutions to the same problem and many other difficulties. Since AI is borderless by nature, harmonised regulation (such as regulation across Europe) is the best way to succeed.

The other issue is the legitimacy of rule-makers. AI is a sector that requires expertise from many different disciplines. It is also an ecosystem of hardware, software and data, each dependent on the others to function.[53] Hence, the data used to feed AI algorithms should be of high quality and unbiased.

A further key requirement for successful regulation in the public interest is a regulatory agency that combines several distinct tasks: acting as a light-touch regulator where appropriate, as a hard-edged regulator where necessary, and as a standards-setting body.[54] Andrew Tutt proposes that such a consumer protection agency should have three powers in the regulation of AI. First, it should have the power to sort algorithms into regulatory categories. For example, the agency could classify algorithms according to their danger, explainability and predictability, and apply different levels of control accordingly; the classification could also be used to determine levels of liability and to set requirements for companies based on their fields of operation (a hypothetical sketch of such a tiering scheme is given below). Second, the agency should have the power to require prior approval before an algorithm enters the market, until its safety has been proven through evidence-based pre-market trials.[55] Third, it should have broad authority to impose sanctions and usage restrictions on certain kinds of algorithms that can cause harm, or even on sufficiently complex algorithms. Finally, the agency should serve as an expert regulator that develops guidance, standards and expertise in partnership with industry, striking a balance between innovation and safety.
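To make the first of these powers concrete, the sketch below is a minimal, purely illustrative Python model of how such an agency might tier algorithms by danger, explainability and predictability. Every name in it (the RiskTier levels, the Algorithm fields, the classify function and its thresholds) is a hypothetical assumption made for illustration, not a scheme proposed by Tutt or by any of the reports discussed above.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Hypothetical regulatory tiers, loosely inspired by Tutt's proposal."""
    EXEMPT = "exempt"              # light-touch: no pre-market review
    NOTIFICATION = "notification"  # must be registered with the agency
    PRE_MARKET = "pre-market"      # requires evidence-based trials before release


@dataclass
class Algorithm:
    name: str
    danger: int          # assumed 0-10 scale: potential for harm
    explainability: int  # assumed 0-10 scale: how well outputs can be explained
    predictability: int  # assumed 0-10 scale: how stable behaviour is


def classify(algo: Algorithm) -> RiskTier:
    """Assign a regulatory tier; the thresholds are invented for illustration."""
    if algo.danger >= 7 or (algo.explainability <= 3 and algo.predictability <= 3):
        return RiskTier.PRE_MARKET
    if algo.danger >= 4:
        return RiskTier.NOTIFICATION
    return RiskTier.EXEMPT


if __name__ == "__main__":
    examples = [
        Algorithm("spam filter", danger=1, explainability=8, predictability=9),
        Algorithm("credit scorer", danger=5, explainability=6, predictability=7),
        Algorithm("vehicle planner", danger=9, explainability=2, predictability=4),
    ]
    for algo in examples:
        print(f"{algo.name}: {classify(algo).value}")
```

The design point is simply that an explicit tiering of this kind would let a regulator attach graduated obligations, from mere registration to evidence-based pre-market trials, to different classes of algorithm, and could likewise anchor graduated levels of liability.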

 

4.     CONCLUSION

The increasing use of AI has brought increasing attention to its regulation. The choice of how to regulate AI will require care.

First, the ethical codes proposed in the reports are not enough. Individuals should not have to rely on ‘ethical codes’ that are not legally enforceable to know that their fundamental rights are protected. As argued in the previous parts, successful regulation is centralised, to reduce inconsistencies and ensure harmonisation; it represents all stakeholders in balance; and it rests on general rules and principles. Furthermore, the same regulation should create a new agency of technical, ethical and legal experts who can identify standards for best practice, issue more specific rules, monitor AI-based developments and address potential consumer protection issues.

In consequence, regulation in which governments play a leading role in a multi-stakeholder effort may be the best way to achieve successful regulation in the public interest. In doing so, ‘we need to ensure that our new smart technologies will be at the service of the human project, not vice versa.’

 

 

[1] Robert Baldwin, Martin Cave and Martin Lodge, Understanding Regulation: Theory, Strategy, and Practice (2nd edition, Oxford University Press 2012) 2-3.

[2] ibid. -where the state, in a broader sense, acts in a way to mould certain behaviour or to steer the market-.

[3] ibid. - All other ways of setting rules, parameters, and influences, like market regulation, professional bodies, and voluntary organisations-.

[4] ibid. -where a set of binding rules are made by the government or any other regulatory agency-

[5] Ian Brown and Christopher Marsden, Regulating Code: Good Governance and Better Regulation in the Information Age (MIT Press 2013).

[6] ibid, 15.

[7] Stephen G. Breyer, Regulation and Its Reform (Harvard University Press 1982) 15-35.

[8] Roger Brownsword, ‘What the World Needs Now: Techno-Regulation, Human Rights and Human Dignity’ in Roger Brownsword, Global Governance and the Quest for Justice, 4 (Hart Publishing 2004).


[9] Baldwin (n 1).

[10] Richard A. Posner, ‘Utilitarianism, Economics, and Legal Theory’ (1979) 8 (1) The Journal of Legal Studies 103.


[11] Better Regulation Task Force, Principles of Good Regulation (British Government 2003);

Peter Mumford, ‘Best Practice Regulation: Setting Targets and Detecting Vulnerabilities’ (2011) 7 (3) Policy Quarterly 36.

[12] Ibid.

[13] Nicolas Petit, ‘Law and Regulation of Artificial Intelligence and Robots: Conceptual Framework and Normative Implications’ (2017) 25

[14] Milton Mueller, John Mathiason and Hans Klein, ‘Internet and Global Governance: Principles and Norms for a New Regime’ (2007) 13 Global Governance 237, 250.

[15] Marsden (n 5)

[16] Commission, ‘Artificial Intelligence for Europe’ (Communication) COM (2018) 237 final

[17] John McCarthy, ‘What is Artificial Intelligence?’ (2007) John McCarthy’s Home Page 2–3 http://www-formal.stanford.edu/jmc/whatisai.pdf accessed 17 May 2018. As McCarthy asserts, ‘The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent.’ The difficulty in defining AI arises from the ambiguity of intelligence, a term that tends to be tied to human characteristics. Russell and Norvig define AI as ‘the study of agents that receive percepts from the environment and perform actions’ and divide approaches into four categories: thinking humanly, acting humanly, thinking rationally and acting rationally. Russell and Norvig cite the work of AI pioneer Alan Turing, whose ‘imitation game’ underpins the ‘acting humanly’ approach. See Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd edn 2010) 1034; Alan Turing, ‘Computing Machinery and Intelligence’ (1950) 59 Mind 433, 442.

[18] Julia Black, ‘Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a Post-Regulatory World’ (2001) 54 Current Legal Problems 103, 129.

[19] Matthew U. Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29(2) Harvard Journal of Law & Technology 353.

[20] Scherer (n 19) 361.

[21] ibid.

[22] The OSTP report states that: ‘‘The general consensus of the RFI commenters was that broad regulation of AI research or practice would be inadvisable at this time. Instead, commenters said that the goals and structure of existing regulations were sufficient, and commenters called for existing regulation to be adapted as necessary to account for the effects of AI. For example, commenters suggested that motor vehicle regulation should evolve to account for the anticipated arrival of autonomous vehicles, and that the necessary evolution could be carried out within the current structure of vehicle safety regulation. In doing so, agencies must remain mindful of the fundamental purposes and goals of regulation to safeguard the public good, while creating space for innovation and growth in AI.’’ 2016, p. 17.

[23] Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo and Luciano Floridi, ‘Artificial Intelligence and the “Good Society”: the US, EU, and UK Approach’ (2017) Springer Science+Business Media Dordrecht 1, 9.

[24] House of Lords’ Select Committee on AI, AI in the UK: ready, willing and able? (HL 2017-19, 100-I)

[25] ibid. para 369

[26] ibid. para 351

[27] ibid. para 354

[28] ibid. para 370

[29] ibid. para 373

[30] ibid. For instance, techUK claims that concerns regarding AI are generally focused on how data is used in these systems, and that the current data protection legal framework is already sufficient.

[31] ibid. para 379.

[32] HL report states: Baker McKenzie, an international law firm, recommended a “proactive, principles-led intervention, based on a sound understanding of the issues and technology, careful consideration and planning” rather than reactive regulation, put in place after something goes wrong. They recommended that “the right regulatory approach ... is staged and considered” and the Government should “facilitate ethical (as opposed to legal) frameworks for the development of AI technologies” to support self-regulation in industry.

[33] ibid. para 386

[34] ibid para 412

[35] Raphael Gellert, ‘Understanding the Notion of Risk in the General Data Protection Regulation’ (2018) 34 Computer Law & Security Review 279.

[36] HL (n 24) para 419.

[37] ibid.

[38] ibid. The report states: The public are entitled to be reassured that AI will be used in their interests, and will not be used to exploit or manipulate them, and many organisations and companies are as eager to confirm these hopes and assuage these concerns.

[39] ibid. para 379. The Committee report states: “It has long been considered that public trust in new technologies is directly affected by the amount of regulation that is put in place and so industries such as the aviation industry are often cited as examples where robust regulation increases public trust in an otherwise inherently risky process”.

[40] ibid. para 389

[41] http://data.consilium.europa.eu/doc/document/ST-14-2017-INIT/en/pdf   

[42] https://ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence

[43] Commission (n 16).

[44] ibid, 2

[45] ibid, 3

[46] ibid. Article 2 of the Treaty on European Union: “The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities”. The Member States share a “society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail.”

[47] ibid 15

[48] ibid 4

[49] ibid 16.

[50] ibid.

[51] ibid 17

[52] Cath (n 23)

[53] Luciano Floridi and Mariarosaria Taddeo, ‘What is data ethics?’ (2016) 374 (2083) Philosophical Transactions of the Royal Society A.

[54] Andrew Tutt, ‘An FDA for Algorithms’ (2017) 69 Administrative Law Review 15.
