Ethics

Tip of the Iceberg: Assessing Ethics and Technology

Lawyers today cannot afford to remain inexperienced when it comes to technology. For starters, the American Bar Association has recommended, and the majority of states have imposed, an ethical duty of technological competence. But that’s only the tip of the iceberg. A whole host of fascinating ethical questions surrounds technology, and unlike their counterparts in medicine or science, guidelines for the ethical use of technology are considerably less developed. Sooner or later, lawyers will be called on to help develop those rules.

Current events have driven us to start asking some of these tough questions. Who is responsible for damages caused by technology? How do we evaluate the decisions that artificial intelligence (AI) programs make? What standard of care should technological innovators take to prevent their products from being used to cause harm?

Ultimately, the legal profession is likely to be asking these questions and making these judgment calls. That means that lawyers need to understand developments in technology and discern what they mean under our legal system—starting with their own ethical duties.

Lawyers’ Ethical Duties Related to Technology

In 2012, the American Bar Association amended its Model Rules of Professional Conduct to include a duty of technological competence. Under Comment 8 to Rule 1.1, lawyers should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” Thirty-one states have since adopted rules requiring technological competence.

Technology influences how lawyers meet their other ethical obligations as well. Rule 1.6 establishes the duty of confidentiality, but how do lawyers fulfill that duty when every day brings a new data vulnerability or reported breach? Comment 18 clarifies that lawyers who suffer a security breach don’t automatically violate the rule so long as they made “reasonable efforts to prevent the access or disclosure.” At a minimum, those reasonable efforts include attending to security measures for data storage and applications.

Yet lawyers shouldn’t limit themselves to learning the bare minimum about technology to comply with ethical standards—not when there are so many more interesting questions to consider.

Emerging Questions About Ethics in Technology

What decisions can people now make because of technology—and should we be making them? Technology has enabled us to do things that we wouldn’t have dreamed of 30 years ago. Whether it’s choosing your child’s genetic traits, modifying agricultural products to enhance insect resistance, or tracking a suspect’s cell phone location during a criminal investigation, the implications of these technological advancements touch on numerous legal issues.

This assessment goes much deeper than evaluating the decisions that people are now empowered to make. What about when lawyers use analytics to improve and expedite legal inquiries, from eDiscovery and legal research to outcome predictions and complex document drafting? How should we evaluate the confidentiality of this information or the ultimate decisions that data-crunching programs suggest? What decisions is our smart technology making for us every day, and how should we evaluate those decisions?

The difficulty with AI-driven devices is that the program learns most of its decision-making rules by itself. That means that no human knows how or why the program reaches individual conclusions. Some worry that AI “accentuates human biases, while the black box algorithms can make it hard to understand whether a decision was reached ethically.” For instance, do programs used to automate employment decisions apply racially biased criteria? What about AI systems making medical diagnoses?

Fortunately, higher education programs are stepping up to develop courses on the ethics of technology. In the spring of 2018, Harvard University and the Massachusetts Institute of Technology offered a new joint class on the ethics and regulation of artificial intelligence. The University of Texas at Austin and Stanford University are also rolling out courses on ethics in computer science.

Lawyers have a duty to know what’s happening not just in legal technology but in technology everywhere. Eventually, someone is going to have to decide what our technological tools can do legally, ethically, and morally. It will be up to lawyers to ask the hard questions that get us to those answers.

About Paul Domnick

Paul Domnick is President of Litera Microsystems, having served as President of Litéra Corp from 2014 to 2017. He brings unique insight into the utility of Litera Microsystems’ risk management solutions, having previously been CIO of Freshfields Bruckhaus Deringer for five years. There he was responsible for a global team of more than 300, covering all areas of IT and IS, including change management, information security, infrastructure operations and help-desk support, technical architecture, vendor management, application support, and program and project delivery. Paul also has a background in the financial services sector, serving as group chief technology officer and deputy head of IT for Zurich Financial Services, where he was responsible for technology strategy, standards, and enterprise and applications architecture across the entire business. Before taking on that role in early 2007, Paul was head of global IT sourcing for Zurich Financial Services.
