Interview with Columbia Law Professor Eric Talley

Professor Talley speaks about his work with machine learning, what law students should do during the age of artificial intelligence, and whether society should regulate artificial intelligence on an international level.


By: Adam Bryla, Staff Member

Eric Talley. Photo: Columbia Law School.


Professor Eric Talley is the Isidor and Seville Sulzbacher Professor of Law at Columbia Law School. He is an expert at the intersection of corporate law, governance, and finance. He is also a member of the Columbia University Data Science Institute and has published several articles applying machine learning tools to the legal realm.

This interview has been edited for clarity and length.

As someone who went to law school and then became a law professor, what originally inspired your interest in artificial intelligence, and how has it impacted your work today?

I was born and raised in Los Alamos, New Mexico, the birthplace of the atomic bomb. I grew up in a small science town surrounded by families of mathematicians and physicists. That definitely injected a good amount of quantitative learning into my education and laid the foundation for my future interest in machine learning. While I wanted to go to law school, I also wanted to be able to apply the quantitative skills I was good at, so I ended up doing a joint J.D./Ph.D. program in economics. The economics side of the program was extremely quantitative; the work ranged from early neural-network-type models to numerical methods classes. While it was not heavily advertised in the 1990s, that is where I realized the ties between artificial intelligence and legal regulation.

Around 2008 is when I began applying the knowledge I had acquired to legal research. I began reading a lot of books about linguistics written by computer scientists, and that is also when I started working on a project looking at what happened to contracts with “act of God” clauses, which many companies tried to invoke during the Great Recession. Around 2011, I published probably one of the first papers that used machine learning to analyze contracts. It was extremely interesting working with a computer programmer and even learning some Python in the process. Now I have several articles applying machine learning under my belt.

In the end, my road to artificial intelligence was mostly a combination of being a tech-oriented kid surrounded by a quantitative community and then getting into a program that allowed me to apply this stuff to the legal field.

We constantly hear about artificial intelligence entering different fields and either displacing workers or complementing the work that they do.  Have you seen any shifts in the legal field with regard to artificial intelligence?

There seem to be two trends in machine learning as applied to the legal field: the availability of data-science talent and the need for domain expertise. When I started doing these projects 10–15 years ago, good data scientists were extremely hard to find, which made hiring one expensive. Now I think the tables have turned. There are a ton of great data scientists available, but domain expertise is what is missing. Because the most useful artificial intelligence analysis depends on good training data sets that are hand-labeled, it requires really good, old-school lawyering. Only if you have a really good labeled data set can you really automate a process. That is where lawyers can be in extreme demand.
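Professor Talley's point is, at bottom, about supervised learning: the automation only works once lawyers have produced a reliable labeled data set. The sketch below is purely illustrative and is not drawn from his published work; the example clauses, labels, and scikit-learn pipeline are hypothetical stand-ins for what a real, lawyer-labeled corpus and production model would involve.

```python
# Illustrative sketch only: a supervised classifier trained on a few
# hand-labeled contract clauses. The clauses and labels below are
# hypothetical; in practice the labels would come from careful,
# "old-school" lawyer review of many real contracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training data: 1 = force majeure ("act of God") clause, 0 = other
clauses = [
    "Neither party shall be liable for delays caused by acts of God, war, or natural disaster.",
    "Performance is excused during events beyond the parties' reasonable control, including floods.",
    "The purchase price shall be paid in three equal installments.",
    "This agreement shall be governed by the laws of the State of New York.",
]
labels = [1, 1, 0, 0]

# Deliberately simple pipeline: TF-IDF features plus logistic regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# Once trained on a good labeled set, the model can flag likely clauses in new contracts
new_clause = "The seller is not responsible for failure to deliver due to hurricanes or other acts of God."
print(model.predict([new_clause]))  # likely [1] on this toy data
```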

In response to that, law students may worry that they will not be able to ride the wave of artificial intelligence because they don’t have the technical knowledge for it. Do you think law students should grow accustomed to the technical aspects of the future of law?

We’ve been here before, when the internet first entered the legal field. I started law school in 1991. In my first year, I had to physically fetch books and cases to find the information I needed. Within a year or two, Westlaw and Lexis became available on computers, which allowed you to pull up cases and search law review articles from your desk. The lawyers who did not learn how to use this technology as a complement to what they were doing got left in the dust quickly.

I think a lot of people fear artificial intelligence technology will supplant them as lawyers, but really most of these technologies are meant to complement you and make you better. A really good lawyer should understand how predictive learning and artificial intelligence work, along with some of the lingo. They do not have to learn coding languages like Python to do it, because there is a glut of data scientists with master’s degrees in computer science who can provide the expertise on that end at a low price. In any case, I think some technical knowledge is necessary; otherwise, you are ignoring the impact these technologies are having and will continue to have. Plus, having that knowledge can help you ride the crest of this wave, helping your career in the process.

On a larger scale, there isn’t much global regulation of artificial intelligence development. Some people worry that, because technological development usually outpaces legal regulation, there will be a regulatory lag that leaves the field unregulated and able to cause various types of harm. Do you think countries should work together to regulate AI, or do you think having an unregulated atmosphere that produces faster progress might be better?

That is definitely an important question that governments should be focusing on. Regulation on some level is needed given the scale of impact artificial intelligence may have. One argument says that if you are a big enough fish in the tank and you are regulating artificial intelligence, you will not need to create international regulations, because most companies will simply comply with your rules given the size of your market and the burden of adjusting to each country separately. Additionally, a lot of countries’ laws apply on an extraterritorial basis, so companies still have to pay attention to them.

This argument, however, is disputed. With code and artificial intelligence, it is much easier to tailor your activities to the regulations of each jurisdiction, so activity can shift to wherever the rules are laxest. That can result in unintentionally racially biased algorithms that discriminate against Black borrowers in bank lending. It can also result in dangerous, unregulated advances in artificial intelligence software. We do not want either to happen, and because the cost of adjusting from jurisdiction to jurisdiction may not be that high, international cooperation may be necessary.

Responding to that, I know you have plenty of experience teaching game theory. Do you think, from a foreign policy perspective, that this technology might result in a new arms race, and what should be done about it?

This is definitely a huge issue. The science town where I grew up, being the birthplace of the atomic bomb, was filled with this type of talk, except in the nuclear context. Almost everything people talked about led to this kind of debate. It’s essentially a winner-takes-all contest, because there is no incentive to slow things down. But how do you do it?

In the nuclear context, it was easier because we were dealing with big nuclear weapons that were hard to build, something a normal person could not create and store in his house. Artificial intelligence, to a large extent, is just code that anyone with a laptop can write. It is much more difficult to see who, or which country, is doing it. There is also more uncertainty about its impact, so there is no “mutually assured destruction.” This creates even less incentive to stop research into autonomous lethal weapons, especially because if you do agree to de-escalate, you have another problem: you incentivize smaller, bad actors who are not part of the agreement to catch up and surpass whatever developments you have made.

My guess is this new arms race will happen but will result in something similar to the spy race of the 1950s, where you had all these gadgets and decoys. Because there is no stabilizing force, such as mutually assured destruction, there is more of an incentive to over-invest in any weapon related to artificial intelligence.

While not intending to minimize the bad effects of an arms race, it is worth pointing out that good things will come out of it as well. One of the good things that came out of the nuclear arms race, out of that focused effort to build all those weapons, was a surge of technological innovation that had a huge positive impact on society. The reason supercomputers were built was to simulate nuclear explosions. An arms race creates a huge incentive to over-invest because you want to be at the forefront of innovation. That produces a lot of positive externalities that can definitely be good for society.

In the end, time will tell what will happen, and international cooperation will be the key factor in all of this.

Adam Bryla is a second-year student at Columbia Law School and a staff member of the Columbia Journal of Transnational Law. He graduated from Baruch College in 2018.
