You cannot have AI ethics without ethics, by Dave Lauer


Ethical Markets is proud to post this important article, “You Cannot Have AI Ethics without Ethics,” by our distinguished Advisory Board member Dave Lauer, CEO of Urvin.AI. Dave’s expertise in big data analytics in financial services here digs deeper into how best to monitor algorithms and hold them and their designers accountable. He has testified before the Senate Banking Committee, the SEC and the CFTC on the implications of technology for modern market microstructure. We rely on Dave’s expertise, since he participated in our November 2013 seminar in New York City on the implications of high-frequency trading, “Perspectives on Reforming Electronic Markets and Trading,” along with executives from Cornerstone Capital, Honeybee Capital, Zevin Asset Management, Centered Wealth Management, Themis Trading and IEX (founded by Brad Katsuyama), other Wall Street executives and experts in designing electronic trading systems, as well as Simon Zadek, co-director of the UNEP Inquiry on the Design of Sustainable Finance. Dave’s dive into the ethics of AI and all decision-making algorithms could not be more timely!

~ Hazel Henderson, Editor

Opinion Paper

Published: 06 October 2020

Dave Lauer

AI and Ethics (2020)

Introduction

Artificial intelligence has emerged as the preeminent technology of the twenty-first century, infiltrating nearly every industry and impacting our lives in obvious, but also increasingly subtle, ways. Each industry and company is grappling with how to leverage this new technology to optimize or personalize products and offerings, to understand their business or clients better, or to unlock new sources of revenue and opportunity. In the midst of this innovation, the concept of AI ethics is often overlooked or paid lip service, ignoring the fact that you cannot have AI ethics in isolation from a broader and all-encompassing ethical approach.

Industries are experimenting with AI in a very difficult environment. Widespread adoption of AI is a relatively new phenomenon. Outside a small circle of math experts, the nuances of different approaches and techniques are not well understood. Even the curricula for data science degrees and certificates are primarily focused on the application of these techniques, rather than the math that underpins such models. For most executives, and especially for legal and compliance professionals, AI remains a black box.

In this paper, I will examine the reasons that the ethical deployment of AI has been so elusive for so many high-profile organizations, and I will explain why there have been such egregious examples of unethical AI built and deployed into the world. I will draw on examples and lessons from other fields, such as medical ethics and systems theory, to demonstrate that AI ethics simply cannot exist without a broader culture of ethics. I will make the case that only organizations with a firm grounding in ethics, and an appreciation for the way complex systems behave, can succeed at the ethical deployment of AI.

Artificial integrity

Most AI projects fail to get out of the research lab, but many that do are soon embroiled in scandal. Let us start with a recent example, a new service called Genderify, which set out to identify someone’s gender based on their name, email address or username. In hindsight, this service was probably a terrible idea to begin with, in light of the current cultural discourse on gender and identity. Surprising few people other than the founders, Genderify made predictions such as that “Meghan Smith” was 60% likely to be female, but “Dr. Meghan Smith” was 76% likely to be male. Needless to say, the service was shut down completely within hours of launching.
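
To make the failure mode concrete, here is a minimal, hypothetical sketch (in Python, using scikit-learn) of how a name-based classifier can learn a title such as “Dr.” as a gender signal. None of this is Genderify’s actual code, data or model; the names, labels and model choice are invented purely to show that such a model reproduces whatever bias its training labels encode.

```python
# Hypothetical sketch, NOT Genderify's system: a toy gender classifier whose
# training labels associate the title "dr" exclusively with the "male" label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: the skew lives in the labels, not in any one line of code.
train_names = [
    "meghan smith", "meghan jones", "sarah smith", "emily brown", "laura davis",
    "dr john lee", "dr james king", "dr robert patel", "dr mark chen",
    "john lee", "james king",
]
train_labels = [
    "female", "female", "female", "female", "female",
    "male", "male", "male", "male",
    "male", "male",
]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_names, train_labels)

for name in ["meghan smith", "dr meghan smith"]:
    proba = dict(zip(model.classes_, model.predict_proba([name])[0]))
    print(name, proba)

# Because "dr" appears only in male-labeled examples, adding it to an otherwise
# female-associated name shifts the predicted probability toward "male". The
# model is working exactly as built; the ethical failure sits upstream of it.
```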

It would be easy to attribute such a failure to a lack of AI ethics, or a lack of an appropriate ethical AI framework on Genderify’s behalf. But was AI ethics the failure here? What would the ethics of AI have told the principals of Genderify that any straightforward ethical framework wouldn’t have? Can any framework take a fundamentally unethical objective, and somehow make it ethical?

Of course, this example comes with some obvious red flags. But there are far more insidious problems that are too complicated to foresee ahead of time and just as difficult to diagnose after the fact.

Examples of such ethical lapses abound: Uber’s withdrawal from autonomous vehicle development after one of its self-driving cars killed a pedestrian; Facebook’s rampant algorithmic spread of misinformation and disinformation; Clearview’s illicit facial recognition surveillance and the backlash that followed.

In Microsoft’s development of Tay, a Twitter chatbot, and Harrisburg University’s attempt to develop technology to predict criminality, we have seen how racist training and racially biased data ultimately lead to racist AI models. Courtesy of Microsoft and Facebook, we have also seen how research image collections with sexist bias can be turned into AI models that link images of shopping, cleaning and cooking to women.
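
One way such dataset bias can be made visible before any model is trained is a simple co-occurrence audit of the annotations. The sketch below is a hedged illustration in plain Python on an invented toy annotation set, not the actual research corpora behind the studies referenced above; it shows the kind of skew those studies measured, which downstream models then learn and amplify.

```python
# Hypothetical annotation audit on invented data; not the cited datasets.
from collections import Counter

# Each entry pairs the gender label in an image annotation with its activity label.
annotations = [
    ("woman", "cooking"), ("woman", "cooking"), ("woman", "cooking"),
    ("man", "cooking"),
    ("woman", "shopping"), ("woman", "shopping"), ("woman", "shopping"),
    ("man", "shopping"),
    ("man", "repairing"), ("man", "repairing"), ("man", "repairing"),
    ("woman", "repairing"),
]

pair_counts = Counter(annotations)
activity_counts = Counter(activity for _, activity in annotations)

for activity in sorted(activity_counts):
    share = pair_counts[("woman", activity)] / activity_counts[activity]
    print(f"{activity}: {share:.0%} of annotations co-occur with 'woman'")

# In this toy data, "cooking" and "shopping" co-occur with "woman" 75% of the
# time, and "repairing" only 25%. A model trained on such labels learns exactly
# this association: a data and governance problem, not a bug in one component.
```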

In each of these examples, we are confronted with ethical dilemmas. Some are more obvious and superficial than others. Each incident invites a series of questions. Why hadn’t they defined the right set of policies and procedures to prevent such an outcome? Why couldn’t Genderify see what, in retrospect, seems like such an obvious problem? In the case of Facebook (and many others), why are the steps that they are taking, and their vow to “fight the spread of false news,” proving to be ineffective? Where is the broken part?

The broken part fallacy

The fallacy of the broken part is a well-understood principle in complexity theory. When there is a malfunction, the first instinct is to identify and fix the broken part. If a plane crashes, which part malfunctioned? If an autonomous vehicle misidentified a white truck as simply being part of the sky, and drove right into it, where is the problematic code? In these examples (and countless others), the broken part is only the most superficial problem. “Fixing” the broken part will often fail to prevent a future problem, because these types of problems are systemic, ordinarily involving cascading or multi-system failures. In some instances, these failures may occur “when no parts are broken, or … seen as broken.” [2] Our impulse to “fix” a broken part is driven by our grounding in linear thinking and the search for the cause of an undesired effect.

Systems thinking, especially when it comes to system safety, demands that one examines the entire ecosystem in all of its complexity. “Systems thinking is about relationships, not parts.” [2] In nearly every case, this search for a broken part leads to a band-aid solution that attempts to address the problem without consideration of the complexity underlying the causes.

Discussions over technology-driven problems typically turn to talk of “bugs” or programming errors. However, according to noted systems engineering and safety researcher Nancy Leveson, “[n]early all the serious accidents in which software has been involved in the past 20 years can be traced to requirements flaws, not coding errors.” [5]

These failures often manifest as coding errors or user interface flaws, but they are the consequence of poor requirements, poor governance and poor processes. This also describes the state of AI today. When AI ethics fail, we assign blame to inadequate or narrowly specified training data while looking past organization-wide ethical shortcomings.

Bad medicine

The alternative to a narrow ethical AI approach is a thorough examination of the entire environment that ultimately led to the problem or failure, including company management, legal and regulatory incentives, manufacturing practices, employee training, quality assurance, and so on.

In each of the aforementioned AI failures, the “search for the broken part” only served to obscure a more systemic ethical deficit. In each case, the failure to build ethical AI can be traced to an organization-wide failure of ethics. But how to go about overhauling ethics at the organizational level? Can a series of policies and checklists actually make an organization ethical? Can ethical AI exist in a vacuum separate and distinct from broader ethical questions? Can any narrowly defined ethical field exist in a vacuum from broader ethics? I hope to show why the answer to these questions is “Clearly Not.”

Perhaps a short foray into a far more mature field could be instructive. The practice of medicine has been grappling with ethical and moral questions since before the first Hippocratic Oath was taken. While this is not the place for an exhaustive exploration of medical ethics, the “metamorphosis of medical ethics,” as Edmund Pellegrino terms it, provides an instructive lesson for AI practitioners. The field has evolved beyond the Hippocratic Oath and its broad and relatively subjective expression of “genuinely ethical precepts, such as the obligations of beneficence, nonmaleficence, and confidentiality, as well as … prohibitions against abortion, euthanasia, surgery and sexual relationships with patients.” [7] The limitations of this approach were gradually recognized, especially as it became incompatible with a more modern, informed and equal society.

As such, a theory of “prima facie principles” was developed and “adapted to medical ethics by Beauchamp and Childress’ Principles of Biomedical Ethics.” [7] They settled on four principles for medical ethics: “nonmaleficence, beneficence, autonomy, and justice.” [7] These principles should generally sound familiar to anyone with experience in ethical AI frameworks.

But the medical field has been confronted with the shortcomings and subjectivity of putting these principles into action. For instance, the emergent idea of autonomy “directly contradicted the traditional authoritarianism and paternalism of the Hippocratic ethic that gave no place for patient participation in clinical decisions.” [7] Over time, this principle of medical self-determination has been largely accepted, especially in America. But today, we are facing a new contradiction. As unethical social media platforms push misinformation and anti-science into the mainstream, for example by amplifying the anti-vaccination movement, the ethical importance of autonomy comes into direct conflict with the ethical importance of truth. This underscores the key shortcoming of the prima facie framework: these principles are vague and relatively static. They lack the dynamism to address major bioethical dilemmas such as “abortion, euthanasia and a host of other issues.” [7] As Pellegrino explains, “[w]hat is required is some comprehensive philosophical underpinning for medical ethics that will link the great moral traditions with principles and rules and with the new emphasis on moral psychology.” In other words, even in a field that has been grappling with ethical questions for thousands of years, the attempt to define an ethical approach specific to the field, divorced from broader ethical philosophy and questions, remains a moving target.

Much like biomedical ethics, AI ethics do not exist in a vacuum. Organizations that fail to grapple with basic ethical questions, or that have neglected to establish a culture of ethical and moral behavior, will not succeed. A fundamentally unethical organization, or a representative of an unethical industry, simply won’t have the capability to build and deploy ethical AI.

Systemic safety

This is because companies are complex organizations. They exist within complex ecosystems inhabited by regulators, customers and partners. Those that neglect to incorporate complexity theory and systems thinking into their consideration of AI ethics are doomed to fail. There has been a significant amount of study and work done to better understand the complex interplay of such ecosystems, in particular the large body of work on safety systems in industries such as automotive and aerospace. In fact, the fallacy of the broken part is based on work in these fields, as well as on the all-too-human desire to assign simple explanations to complex issues.

Unfortunately, in the world of complexity theory, there are few linear relationships and few simple answers. The field of AI bears much resemblance to the practice of system safety in these industries, which focuses on several areas:

  • the complex interplay of incentives that are created from law, regulation, financial markets and for-profit businesses;
  • the fostering of a “culture of compliance,” powered by affirmative top-down leadership, bottom-up empowerment of the employees who are closest to the problem, and actual adherence to company policies;
  • the training of front-line employees who are designing and building these systems and who have the most first-hand experience and ability to impact implementation;
  • the sophistication and insight of empowered regulators who understand the industry, and co-evolve with the firms that they regulate;
  • the avoidance of prescriptive top-down solutions in favor of principle-based guidance and appropriate transparency for policing and enforcement.

While the establishment of an ethical AI framework for a company is an excellent and important step, frameworks that fail to account for system-wide complexities will struggle with relevance as the world shifts and changes, and as decisions are made in the face of scarce resources and competing incentives.

[READ MORE]