The In-House Edge: A Legal Eye on AI

27.11.2023

1. How do you see artificial intelligence reshaping the landscape of in-house legal practices in the coming years?

First of all, I should say it was not unexpected. We have all seen it coming in recent years, but naturally we did not know its magnitude and impact. And I think we still do not know, indeed cannot know. I always support the idea of being open to what is new and useful while remaining alert and vigilant; not just as legal professionals but as people who will eventually face artificial intelligence (“AI”). In terms of the in-house legal practice landscape, we need to accept AI’s existence and try to benefit from it to the extent possible, and to the extent it is compliant with the laws as well as company policies and guidelines. In terms of “benefitting” from AI, there are several fields in which AI could be of help to legal professionals. Generative AI is, in essence, software that creates sophisticated and meaningful text, images and even code from big data via large language model (“LLM”) technology. The first-line generative AI use cases include summarizing documents and meetings, drafting texts or emails, answering grounded questions, producing translations and helping with ideas. But we always need to stay alert and prepared to avoid possible hallucinations (the danger of LLMs fabricating information), incomplete results, biases, and breaches of intellectual property rights, confidentiality and data privacy. These risks are apparently not numerus clausus and can vary depending on the subject matter, the technological tools and the sectoral facts.


2. How has Visa integrated AI into its legal department? Can you share some of the challenges and successes in this journey?

AI has been an integral part of the business at Visa for over 30 years, helping to reduce and prevent fraud in the ecosystem, so it is not limited to a specific function in the company. Notably, Visa became the first network to deploy AI-based technology for risk and fraud management, pioneering the use of AI models in payments security, which has become one of the most tangible benefits of AI. [1]

Visa believes in responsible AI and uses guiding principles as safeguards:

  • Establishing consumer trust around AI, technology and data is vital for growth.
  • Visa has created a governance structure that prioritizes ethics to help ensure the responsible stewardship of data.

Visa has recently launched an AI “co-pilot” function to be used within the Visa network only, subject to strict controls as well as caveats that the content provided by the AI co-pilot does not constitute legal advice, along with a recommendation to consult a legal professional at all times.

The pros can be outlined as time efficiency, direct exposure to many resources and practicality, while the cons include hallucinations and the risk of violating laws and confidentiality obligations, which is why Visa is very cautious and even-handed, leaving no doors open for infringement and misrepresentation. As Legal, we are even more careful to comply with the applicable laws when we introduce a new technology within the company. We should always check the latest data on the system and remind ourselves that the AI can only help us with the data uploaded to the system up to a certain date. Also, we need to be very alert to possible hallucinations, as the software can generate made-up numbers, links, quotes or other content while it collects and processes the data in its big data pool.


3. With the rapid advancement of legal technologies, how do you ensure that Visa's legal department remains at the forefront of adopting the latest AI-driven tools?

Generative AI unlocks a new era of AI, and we are piloting the use of generative AI capabilities for our employees so developers can benefit from sophisticated AI pair-programming techniques at scale. To increase productivity, all employees also have access to a secure instance of GPT-4, with guiding principles and clear governance tools for its use, including pre-training.


4. Are there any particular ethical concerns that in-house legal teams should consider when implementing AI-driven solutions?

As mentioned, hallucinations may create legal and ethical concerns, so it is very important to implement the necessary training and governance tools before starting to use AI. As you may know, there have been several cases against AI software companies on data privacy, confidentiality (stolen private information) or personal rights infringement grounds, as well as defamation. There was also a widely reported case in which a petition drafted by a lawyer using AI and submitted before a US court was found to contain many hallucinations.

Considering those risks, it is always wise to keep in mind that AI is a helping tool, not an entire solution that replaces the human mind and touch.

5. To what extent do you believe AI should play a role in decision-making within the legal realm? Are there areas where human judgment is irreplaceable?

I will confidently refer to my explanations in the previous question. AI may (and is very welcome to) help the legal realm prepare the decision-making infrastructure and process, especially with LLM technology over big data, but human judgment and control are irreplaceable, at least for a significant period. We may never know what could happen in a decade, so it is better to keep our eyes open and always play it safe.

6. As AI becomes more prevalent in legal processes, what skills do you believe are essential for in-house lawyers to cultivate in order to stay relevant and effective?

There are two key elements to stick to while using AI:

i- Constantly monitoring the laws and regulations, and any amendments or new legal texts possibly on the way, because there are currently legal gaps on AI, given how new the technology and its real-world rollout are.

ii- Considering that the content may entail hallucinations, a check, assess and decide method is always recommended. We call this the “human + machine principle”, which holds that the human touch cannot be removed, as there is no assessment-based decision mechanism within the AI tools and the final decision maker is the human. So, in a nutshell: being open, creative and practical, and staying alert and vigilant.

7. Looking into the future, how do you envision the role of in-house lawyers changing in response to the growth of AI and other technological advancements?

I think in-house lawyers, as legal captains in their companies, should be able to work not against but in collaboration with AI and other upcoming technological tools. AI can be very efficient and helpful in managing an in-house counsel’s workload and, as said, it is crucial to use AI appropriately and carefully. AI can help with many use cases as outlined above, and this could free up time in an in-house counsel’s agenda to focus on many other areas in which to develop and contribute and, one step further, to create more qualified work. Finally, I would like to underscore that any content or act without a proper control mechanism could become hazardous, and one of these control mechanisms in companies is, for sure, the legal function, i.e. the in-house counsels. So, in-house counsels should always be placed on the human side of the human + machine principle.


[1] “30 years of AI and counting”, Visa
