Does Your Computer Need a Law Degree?
Review of “The Formula: How algorithms solve all our problems … and create more.”
“The Formula,” the new book from Luke Dormehl, author of “The Apple Revolution,” tackles the rise of algorithms and artificial intelligence in art, politics, online relationships, and the law.
For the most part, science and business writers are enamored of the promise of algorithms: the ways computers take mountains of data and make unexpected connections to clarify or simplify our lives. But Dormehl, who writes for Fast Company, Wired, and other business and technology publications, is no starry-eyed techno-hype man. He is interested in the unanticipated complications and questions algorithms introduce into our lives.
The book provides a high-level view of how algorithms are changing our world. For lawyers, there are obvious political and legal implications that will need to be hammered out in the coming years. One quarter of the book, titled “Do Algorithms Dream of Electric Laws?” is dedicated to the question of computers and the law. In particular, the book touches on the ways algorithms are increasingly used in predictive policing and e-discovery.
Most interestingly, Dormehl talked to Richard Posner and other forward-thinking legal minds, offering some novel thoughts on the topic. If you’re looking for a deep dive into how artificial intelligence or predictive coding is changing modern lawyering, you should look elsewhere. But “The Formula” provides a good introduction to many of the important issues surrounding computer technology in the law and the world.
LTN talked to Dormehl about the book, computer intelligence, and the law.
LTN: Is it fair to say that one of your overarching themes is a warning against too much techno-optimism around machine learning technology? Are we overestimating the capabilities of computer intelligence?
Dormehl: I think you could argue that we’re still not at a level in computer science where the results could truly be classed as “intelligent” in the way that a human is. But there is a belief that computers can produce less biased, more objective results. That belief is nothing new. In the nineteenth century there was a lot of excitement about photography, because it was viewed as a medium that removed all human agency.
Now, of course, with well over a century of experience, we know that’s far from true. Who takes a photo? What angle are they taking it from? What lens are they using? Then there’s the whole Adobe Photoshop question. Photography is the furthest thing from a medium that requires no human agency. And now we’re saying the same about algorithms. In many cases, yes, they can provide results that remove certain types of human bias. But it is dangerous to say they are objective or neutral.
LTN: Recent reviews of social science, medical, and economic research have found that many purported findings are likely false. What happens when bad data or bad assumptions are fed into computer models and predictive programs?
Dormehl: The simple answer is that if you put rubbish in, you get rubbish out. One mistake we can make is to assume that data mining is not an ideologically charged subject. But Big Data itself is based on a theory. Data collection poses inherent challenges, like whose data is being collected and what metrics and information the collectors are measuring.
LTN: In what way did you find that computers reinforce biases? How big is the concern that algorithms will inadvertently but unfairly profile individuals?
Dormehl: A lot of people are familiar with the concept of the filter bubble, in which algorithms like those used by Google will flatter our personal mythologies by feeding us results that marry up to our past history.
The problem can often have less to do with the algorithms than with the data and the application. Latanya Sweeney, an African-American professor at Harvard University, performed an interesting, and rather shocking, study after she realized that her search results were accompanied by ads asking, “Have you ever been arrested?” These ads didn’t appear for her white colleagues. What she found was that the machine learning tools behind Google search were being inadvertently racist, by linking names more commonly given to black people to ads relating to arrest records. The problem here is that even though the algorithms themselves may not be biased, when you factor in the data and the applications, bias is easily introduced. And while ads may be a relatively small problem, when you consider the same dynamic in law enforcement applications in particular, it becomes a cause for concern.
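To make that dynamic concrete, here is a minimal sketch in Python, assuming scikit-learn is available, of how skew that lives entirely in historical data can surface in a model’s output. The names, ad log, and probabilities below are invented for illustration; they are not drawn from Sweeney’s study or from Google’s systems.

```python
# Tiny illustration of how skewed historical data alone can yield
# discriminatory output from an otherwise "neutral" learning algorithm.
# All names and log entries here are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical ad-serving log: past name searches and whether an
# "arrest record" ad was shown alongside the results.
past_searches = [
    "latanya smith", "latanya jones",   # searches skewed toward arrest ads
    "kristen smith", "kristen jones",   # searches skewed away from them
]
arrest_ad_shown = [1, 1, 0, 0]  # the skew lives in the data, not the code

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(past_searches)
model = LogisticRegression().fit(X, arrest_ad_shown)

# A name pairing the model has never seen still inherits the skew.
prob = model.predict_proba(vectorizer.transform(["latanya brown"]))[0, 1]
print(f"probability of showing an arrest ad: {prob:.2f}")
```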
LTN: As you discussed in the book, an important issue for lawyers is the use of machine learning or predictive coding to find relevant documents in large data sets. But can computers truly replace human reviewers? Isn’t there a role that lawyers must play to analyze sample documents and fine-tune the results?
Dormehl: Yes, and if I had my time over again, I’d say that training as a computer scientist and lawyer would be one of the smartest decisions you could make. Training sets are certainly required, which means that someone needs to go through and correct the machine learning tools at the beginning. After that, the idea is that the computer can take over.
You’re absolutely correct that this is an area that can leave a lot to be desired in some cases, although humans are also not always above reproach. The trouble is that as the amount of discovery data that needs to be analyzed increases, it becomes less and less practical for both client and law firm to have large teams of people going through it by hand. And it would be remiss to say that machine learning tools aren’t becoming smarter all the time. Currently, there’s absolutely a role for humans in the discovery process. But it’s dangerous to suggest that this balance won’t shift.
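For readers curious what the training-set workflow Dormehl describes might look like in code, here is a minimal, hypothetical sketch of a predictive-coding loop in Python with scikit-learn. The documents, labels, and coding scheme are placeholders, not part of the book or of any real review platform.

```python
# Minimal sketch of a predictive-coding (technology-assisted review) loop.
# Document texts and responsiveness labels are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A small "seed set" that attorneys have already reviewed and coded.
seed_docs = [
    "Q3 revenue recognition memo for the Acme contract",
    "Lunch plans for Friday, anyone?",
    "Draft side letter amending the Acme licensing terms",
    "Company picnic photos attached",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

# The much larger unreviewed collection the machine helps prioritize.
unreviewed_docs = [
    "Signed amendment to the Acme license agreement",
    "Fantasy football league standings",
]

# Train a simple text classifier on the attorney-coded seed set.
vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Rank unreviewed documents by predicted probability of responsiveness.
X_new = vectorizer.transform(unreviewed_docs)
scores = model.predict_proba(X_new)[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")

# In practice, reviewers would code the highest-ranked documents, add them
# to the seed set, and retrain: the human-in-the-loop step Dormehl describes.
```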
LTN: Is it possible to make computer algorithms more transparent? Or will we always have a black box problem, in that we don’t know how decisions are arrived at?
Dormehl: A large part of this is educating the public on what exactly algorithms are and how they work. In Silicon Valley, algorithms can be protected with a fierceness normally reserved for missile codes. We need to ask what the best way is, for all involved, to have particular decisions explained to users.
There are clearly problems in calling for complete transparency, but I don’t think that it is a bad ideal to strive for. Seeking a full understanding should be our default option, and that’s good for everything from accountability to fairness.
LTN: One take-away from your book is that computers actually need more human input to function well. For example, you wrote about how administrative law programs benefit from as much input as possible from case workers to explain how the rules affect welfare recipients.
Dormehl: In the example you’ve brought up, having case workers familiar with the task would have been helpful when creating a new automated benefits system. I wouldn’t say humans are necessarily needed to make machines function better, though. Deep learning, for example, opens up new possibilities for machine learning without the need for a human to walk it through hundreds or thousands of training examples.
I think what we need to evaluate is the relationship between humans and computers; what each can bring to the table, and how handing over tasks to each impacts upon them. And this isn’t just about making machines better, either. If you look at employment, clearly we’re in the middle of a profound moment of social change where more and more jobs can be automated. In this case we don’t just need to impose some kind of top-down human input into the equation; we need to examine what humans can do that machines can’t.
LTN: To what degree will computers replace lawyers and judges? Can we replace the Supreme Court with the Suprem-o 3000?
Dormehl: The billion-dollar question, right? I spoke with Judge Richard Posner about this for the book. He was interested in the idea that you could create profiles of judges’ philosophies based on their rulings and public statements, and that these profiles could be updated continuously to reflect the direction a judge was leaning. His concept was that this could be an invaluable resource for judges, who could use it to discover personal biases they might not be aware of, such as being soft on criminals but tough on business fraud.
In theory, if you were able to build a conceptual model of Judge Posner that was 99 percent accurate in forecasting how he would decide a certain case, you could rely on that model to decide cases rather than the person himself. But we’re not there yet, and perhaps we never will be. One of the great realizations I had as a nonlawyer writing “The Formula” was the degree to which laws are not static entities that can easily be automated.
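As a rough illustration of the profiling idea Dormehl attributes to Posner, here is a hedged sketch in Python: a simple classifier fit on a judge’s past votes, whose coefficients stand in for the “bias profile” a judge might consult. Every feature, case, and label is invented; nothing reflects any real judge’s record.

```python
# Toy sketch of judge profiling: fit a model on (invented) past votes,
# inspect its learned leanings, and estimate how a new case might be decided.
from sklearn.linear_model import LogisticRegression

feature_names = ["criminal_case", "business_fraud", "government_is_party"]

# Hypothetical past cases, one row per case, and how the judge voted
# (1 = ruled for the prosecution/plaintiff, 0 = against).
past_cases = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 1],
]
past_votes = [0, 1, 0, 1, 1]

profile = LogisticRegression().fit(past_cases, past_votes)

# The coefficients act as a crude "bias profile" of the judge.
for name, coef in zip(feature_names, profile.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Estimate the leaning on a new, hypothetical business-fraud case.
print(profile.predict_proba([[0, 1, 0]])[0, 1])  # probability of a pro-plaintiff vote
```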
The judicial process is less about a kind of mechanical objectivity than it is about reaching a high level of subjective agreement. It takes a human to resolve multiple parties’ grievances, and to reconcile different interpretations of laws that are often written in such a way that their meaning can be argued. Machines can’t do that yet. They can help, but we’re still a way away from the Suprem-o 3000, which is a shame, because that’s a fantastic name.
Jason Krause is a freelance writer based in Wisconsin. Twitter @jasonkrausehaus.