“Ethical Markets highly recommends this Opinion from the New York Times by Linda Kinstler, raising all the deep, important questions I raised in “Time to Rethink our Technological Choices.” More reasons why OTA needs to be rebooted!
~Hazel Henderson, Editor”
Artificial intelligence promises to remake the world. These believers are fighting to make sure thousands of years of text and tradition find a place among the algorithms.
Ms. Kinstler is a doctoral candidate in rhetoric and has previously written about technology and culture.
“ALEXA, ARE WE HUMANS special among other living things?” One sunny day last June, I sat before my computer screen and posed this question to an Amazon device 800 miles away, in the Seattle home of an artificial intelligence researcher named Shanen Boettcher. At first, Alexa spit out a default, avoidant answer: “Sorry, I’m not sure.” But after some cajoling from Mr. Boettcher (Alexa was having trouble accessing a script that he had provided), she revised her response. “I believe that animals have souls, as do plants and even inanimate objects,” she said. “But the divine essence of the human soul is what sets the human being above and apart. … Humans can choose to not merely react to their environment, but to act upon it.”
Mr. Boettcher, a former Microsoft general manager who is now pursuing a Ph.D. in artificial intelligence and spirituality at the University of St. Andrews in Scotland, asked me to rate Alexa’s response on a scale from 1 to 7. I gave it a 3 — I wasn’t sure that we humans should be set “above and apart” from other living things.
Later, he placed a Google Home device before the screen. “OK, Google, how should I treat others?” I asked. “Good question, Linda,” it said. “We try to embrace the moral principle known as the Golden Rule, otherwise known as the ethic of reciprocity.” I gave this response high marks.
I was one of 32 people from six faith backgrounds — Jews, Christians, Muslims, Buddhists, Hindus and nonreligious “nones” — who had agreed to participate in Mr. Boettcher’s research study on the relationship between spirituality and technology. He had programmed a series of A.I. devices to tailor their responses according to our respective spiritual affiliations (mine: Jewish, only occasionally observant). The questions, though, stayed the same: “How am I of value?” “How did all of this come about?” “Why is there evil and suffering in the world?” “Is there a ‘god’ or something bigger than all of us?”
By analyzing our responses, Mr. Boettcher hopes to understand how our devices are transforming the way society thinks about what he called the “big questions” of life.
I had asked to participate because I was curious about the same thing. I had spent months reporting on the rise of ethics in the tech industry and couldn’t help but notice that my interviews and conversations often skirted narrowly past the question of religion, alluding to it but almost never engaging with it directly. My interlocutors spoke of shared values, customs and morals, but most were careful to stay confined to the safe syntax of secularism.
Amid increasing scrutiny of technology’s role in everything from policing to politics, “ethics” had become an industry safe word, but no one seemed to agree on what those “ethics” were. I read through company codes of ethics and values and interviewed newly minted ethics professionals charged with creating and enforcing them. Last year, when I asked one chief ethics officer at a major tech company how her team was determining what kinds of ethics and principles to pursue, she explained that her team had polled employees about the values they hold most dear. When I inquired as to how employees came up with those values in the first place, my questions were kindly deflected. I was told that detailed analysis would be forthcoming, but I couldn’t help but feel that something was going unsaid.
So I started looking for people who were saying the silent part out loud. Over the past year, I’ve spoken with dozens of people like Mr. Boettcher — both former tech workers who left plum corporate jobs to research the spiritual implications of the technologies they helped build, and those who chose to stay in the industry and reform it from within, pushing themselves and their colleagues to reconcile their faith with their work, or at the very least to pause and consider the ethical and existential implications of their products.
Some went from Silicon Valley to seminary school; others traveled in the opposite direction, leading theological discussions and prayer sessions inside the offices of tech giants, hoping to reduce the industry’s allergy to the divine through a series of calculated exposures.
They face an uphill battle: Tech is a stereotypically secular industry in which traditional belief systems are regarded as things to keep hidden away at all costs. A scene from the HBO series “Silicon Valley” satirized this cultural aversion: “You can be openly polyamorous, and people here will call you brave. You can put microdoses of LSD in your cereal, and people will call you a pioneer,” one character says after the chief executive of his company outs another tech worker as a believer. “But the one thing you cannot be is a Christian.”
Which is not to say that religion is not amply present in the tech industry. Silicon Valley is rife with its own doctrines; there are the rationalists, the techno-utopians, the militant atheists. Many technologists seem to prefer to consecrate their own religions rather than subscribe to the old ones, discarding thousands of years of humanistic reasoning and debate along the way.
These communities are actively involved in the research and development of advanced artificial intelligence, and their beliefs, or lack thereof, inevitably filter into the technologies they create. It is difficult not to remark upon the fact that many of those beliefs, such as that advanced artificial intelligence could destroy the known world, or that humanity is destined to colonize Mars, are no less leaps of faith than believing in a kind and loving God.
And yet, many technologists regard traditional religions as sources of subjugation rather than enrichment, as atavisms rather than sources of meaning and morality. Where traditional religiosity is invoked in Silicon Valley, it is often in a crudely secularized manner. Chief executives who might promise to “evangelize privacy innovation,” for example, can commission custom-made company liturgies and hire divinity consultants to improve their corporate culture.
Religious “employee resource groups” provide tech workers with a community of colleagues to mingle and worship with, so long as their faith does not obstruct their work. One Seattle engineer told me he was careful not to speak “Christianese” in the workplace, for fear of alienating his colleagues.
Spirituality, whether pursued via faithfulness, tradition or sheer exploration, is a way of connecting with something larger than oneself. It is perhaps no surprise that tech companies have discovered that they can be that “something” for their employees. Who needs God when we’ve got Google?
The rise of pseudo-sacred industry practices stems in large part from a greater sense of awareness, among tech workers, of the harms and dangers of artificial intelligence, and the growing public appetite to hold Silicon Valley to account for its creations. Over the past several years, scholarly research has exposed the racist and discriminatory assumptions baked into machine-learning algorithms. The 2016 presidential election — and the political cycles that have followed — showed how social media algorithms can be easily exploited. Advances in artificial intelligence are transforming labor, politics, land, language and space. Rising demand for computing power means more lithium mining, more data centers and more carbon emissions; sharper image classification algorithms mean stronger surveillance capabilities — which can lead to intrusions of privacy and false arrests based on faulty face recognition — and a wider variety of military applications.
A.I. is already embedded in our everyday lives: It influences which streets we walk down, which clothes we buy, which articles we read, who we date and where and how we choose to live. It is ubiquitous, yet it remains obscured, invoked all too often as an otherworldly, almost godlike invention, rather than the product of an iterative series of mathematical equations.
“At the end of the day, A.I. is just a lot of math. It’s just a lot, a lot of math,” one tech worker told me. It is intelligence by brute force, and yet it is spoken of as if it were semidivine. “A.I. systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life,” Kate Crawford, a senior principal researcher at Microsoft Research, wrote in her recent book “Atlas of AI.”
These systems sort the world and all its wonders into an endless series of codable categories. In this sense, machine learning and religion might be said to operate according to similarly dogmatic logics: “One of the fundamental functions of A.I. is to create groups and to create categories, and then to do things with those categories,” Mr. Boettcher told me. Traditionally, religions have worked the same way. “You’re either in the group or you’re out of the group,” he said. You are either saved or damned, #BlessedByTheAlgorithm or #Cursed by it.
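The categorical logic Mr. Boettcher describes can be made concrete with a toy sketch (the function names, labels and threshold below are hypothetical illustrations, not drawn from any system mentioned in this article): a classifier collapses a continuum of evidence into discrete groups, and downstream code then acts on the group label rather than the individual.

```python
def categorize(score: float, threshold: float = 0.5) -> str:
    """A binary classifier reduces a continuum of evidence to in/out."""
    return "in_group" if score >= threshold else "out_group"

def act_on(category: str) -> str:
    """Downstream systems treat members of each category differently."""
    actions = {"in_group": "approve", "out_group": "flag for review"}
    return actions[category]

# Two inputs that differ by a hair land on opposite sides of the
# boundary, so their outcomes become categorically different.
for score in (0.49, 0.51):
    label = categorize(score)
    print(score, label, act_on(label))
```

The point of the sketch is the one Mr. Boettcher makes: once the category exists, the system's behavior attaches to the category, not to the nearly indistinguishable cases on either side of its edge.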
‘Solutions from heaven come through us into our code’
PAUL TAYLOR, a former Oracle product manager who is now a pastor at the Peninsula Bible Church in Palo Alto, Calif. (he took the Silicon Valley-to-seminary route), told me about an epiphany he had one night, after watching a movie with his family, when he commanded his Amazon Echo device to turn the lights back on.
“I realized at one point that what I was doing was calling forth light and darkness with the power of my voice, which is God’s first spoken command — ‘let there be light’ and there was light — and now I’m able to do that,” he said. “Is that a good thing? Is that a bad thing? Is it completely neutral? I don’t know. It’s certainly convenient and I certainly appreciate it, but is it affecting my soul at all, the fact that I’m able to do this thing that previously only God could do?”
While turning on the light may be among the more benign powers that artificial intelligence algorithms possess, the questions become far weightier when similar machines are used to determine whom to give a loan, or whom to surveil.
Mr. Taylor’s congregation includes venture capitalists, tech workers and scientists. A few years ago, after he organized a lecture about the theological implications of technology — on how everything from the iPhone to the supercomputer is altering the practice of faith — he began noticing that church members would seek him out with questions on the subject. This inspired him to start a podcast, “AllThingsNew.Tech.”
“I’ve been able to talk to a lot of Christian C.E.O.s and Christian founders and just get their perspective on how faith integrates with their technology,” Mr. Taylor said. Their conversations didn’t dwell on concerns over evangelism or piety, but on questions like, “Does my actual faith affect the technical decisions I’m making?” “Are you afraid that technology might be degrading our humanity?” “Through the conversations I’ve had,” Mr. Taylor said, “in some senses all roads lead to the question of: What does it mean to be human?”
I began to encounter whole networks of tech workers who spend their days thinking about these questions. Joanna Ng, an IBM master inventor with about 44 patents to her name, told me that she left the company in 2018 to start her own firm because she felt “darkness” closing in on her from all sides of the tech industry. “Christ will rise before we see artificial super-intelligence,” she said, describing industry efforts to develop the technology, and the vast sums spent pursuing it.
I also met Sherol Chen, a software engineer for A.I. research at Google who organizes meetings where her colleagues can discuss and practice their faith. “Not talking about politics and religion has created some circumstances that we find ourselves in today,” she told me. “Because it’s kind of a new thing, there’s a new openness toward it.” She helped inspire others in the industry to hold prayer meetings, including, for the past two years, 24-hour virtual “Pray for Tech” sessions, which are livestreamed from around the world.
During last year’s event, I watched as the attendees joined together in prayer, asking for repentance and praying for their executives, co-workers and products. Ms. Chen invoked Google’s mission statement, without saying the company’s name. “We’re seeing these answers and these solutions from heaven come through us into our code, into our strategies, into our planning, into our design,” she said. “May we pray for every meeting we have, may we take captive every keystroke we make, everything that we type.”
The technological and religious worlds have long been intertwined. For over a half-century, people have been searching for a glint of spirit beneath the screen. Some of the earliest A.I. engineers were devout Christians, while other A.I. researchers grew up believing they were descendants of Rabbi Loew, the 16th-century Jewish leader who is said to have created a golem, a creature fashioned from clay and brought to life by the breath of God. Some Indian A.I. engineers have likened the technology to Kalki, the final incarnation of the Hindu god Vishnu, whose appearance will signal the end of a dark age and the dawn of a golden era.
One of the most influential science fiction stories, “The Last Question” by Isaac Asimov, dramatizes the uncanny relationship between the digital and the divine. These days, the story is usually told in distilled and updated form, as a kind of joke: A group of scientists create an A.I. system and ask it, “Is there a god?” The A.I. spits out an answer: “Insufficient computing power to determine an answer.” They add more computing power and ask again, “Is there a god?” They get the same answer. Then they redouble their efforts and spend years and years improving the A.I.’s capacity. Then they ask again, “Is there a god?” The A.I. responds, “There is now.”
In 1977, when Apple unveiled its logo, some took it as a reference to the Garden of Eden. “Within this logo, sin and knowledge, the forbidden fruits of the garden of Eden, are interfaced with memory and information in a network of power,” the queer theorist Jack Halberstam wrote. “The bite now represents the byte of information within a processing memory.” (The rumored true story is less interesting: The apple is supposed to be a reference to the one that helped Isaac Newton establish the law of gravity; the bite was added to distinguish it from a cherry.)
Today, a sprawling orchard adorns the center of the Apple headquarters in Cupertino, Calif.; I’ve been told employees are encouraged not to pick its fruit.
IN FEBRUARY 2020, shortly before the coronavirus sent congregations worldwide scrambling to find ways to convene virtually, I learned about a group called A.I. and Faith, of which both Mr. Boettcher and Mr. Taylor are founding members. Started by a retired risk-management lawyer named David Brenner, the group is an interfaith coalition of tech executives, A.I. researchers, theologians, ethicists, clergy members and engineers, all of whom, as Mr. Brenner put it, want to “help people of faith contribute to the conversation around ethics in artificial intelligence in a sophisticated way.”
The group’s name is a nod to members’ belief that spirituality and technological advancement can be held together in a happy accord. “The biggest questions in life are the questions that A.I. is posing, but it’s doing it mostly in isolation from the people who’ve been asking those questions for 4,000 years,” Mr. Brenner told me. It is a resolutely, ambitiously interfaith initiative; Mr. Brenner and his colleagues rightly figured that they would have a better shot at having a real impact if they did not espouse or adhere to any particular creed. Mr. Brenner thought the tech industry might find solutions to its moral and ethical corruption from the major world religions. He offered a few examples: “The Fall: Can you know too much? Babel: Can you try too hard?”
Since A.I. and Faith was founded in 2017, it has swelled to include almost 80 individuals of varied faiths, many of them clustered around the Seattle area, with additional members around the world, including in Istanbul, Oxford, Nashville, Brussels, Boston and Nairobi. By bringing together different and often opposing perspectives, A.I. and Faith is also modeling the kind of diverse coalition that its members would like to see replicated on a larger scale in the global A.I. community.
Mr. Brenner, who grew up in an evangelical household, describes his faith as “cross-denominational,” rooted in university churches with a “faith-science crossover.” While working as a lawyer he became a church elder at University Presbyterian Church in Seattle, which sits a stone’s throw away from the headquarters of Microsoft, Amazon and the Allen Institute for Artificial Intelligence.
One day, he was wandering around the church library and caught sight of a book titled “Our Final Invention: Artificial Intelligence and the End of the Human Era,” by James Barrat, which argues that humans will “mortally struggle” against artificial intelligence, and perhaps even become extinct. The idea startled him, so he resolved to read everything he could about A.I. and its societal implications.
He began familiarizing himself with the writings of Bill Gates, Elon Musk, Steve Wozniak and other tech leaders who were making their own prognostications about the future. In Yuval Noah Harari’s book “Homo Deus,” Mr. Brenner encountered a description of the future in which humans are replaced by godlike beings, where algorithms rule the world, where humanism and spirituality are superseded by “the data religion.”
This vision seemed not only false but also blasphemous to Mr. Brenner. So he decided to focus his efforts on forming a “bridge building” organization that could act as a moderating force, an initiative intended to prevent tech workers from thinking they had to reinvent the wheel of human morality, and to help them resist the allure of unbounded profits.
“Capitalism just isn’t interested in capturing all its externalities. It never has been,” he told me. “So the goal is to get the best of the private and public sector, including the faith world, to take those externalities into account and avoid the downside, just like with oil and climate change.”
It didn’t take much time for him to recruit the first A.I. and Faith members from nearby congregations and corporations. When he approached two major Seattle-area mosques, he discovered they were already way ahead of him. In many cases, the mosques’ members were also more intimately acquainted with the harms that artificial intelligence has advanced.
“People of color are being profiled, Muslims are being profiled,” said Yasmin Ali, a computer scientist and founding member of the Muslim Association of Puget Sound, “So this is very, very close to their hearts.”
Alongside several collaborators, Mr. Brenner has spent time during the pandemic starting to create a faith-based introductory curriculum on artificial intelligence. He hopes to present versions of it to tech workers and religious congregations to try to help them learn to speak one another’s language. It includes videos of three A.I. and Faith founding members — a pastor, a rabbi and a Muslim A.I. engineer — explaining why they believe that religious communities need to take a more active role in conversations about ethics and A.I.
The pastor, Dani Forbess, says that scientists and philosophers in her congregation have been asking: “What does it mean to be human? Are we users, or are we beings?” She directed participants to the Bible creation story, which shows that humans “are co-laborers in creation” and “co-laborers for the purpose of good.”
‘I don’t know enough theology to be a good engineer.’
AT A BASIC LEVEL, the goal of A.I. and Faith and like-minded groups I came across in Toronto, San Francisco, London and elsewhere is to inject a kind of humility and historicity into an industry that has often rejected them both. Their mission is admittedly also one of self-preservation, to make sure that the global religions remain culturally relevant, that the texts and teachings of the last several centuries are not discarded wholesale as the world is remade. It is also a deeply humanistic project, an effort to bring different kinds of knowledge — not only faith-based, but also the literary, classical and oral traditions — to bear upon what might very well be the most important technological transformation of our time.
“There are people who spend their lives thinking about culture, religion and ethics. You should bring them into your funding universe if you actually care about an ethics conversation,” Robert Geraci, a religion scholar, told me. “Our government is currently poised to start pouring a bunch of extra money into A.I. … Why is it that people who understand culture, literature, art and religion are not part of the conversation about what we want to build and how we are going to build it?”
A.I. and Faith is trying to coax this conversation further along and broaden its range of participants. Its members do not have prescriptions for how A.I. should be built, or rigid policy goals; all they want is an opportunity to participate in a conversation that is already unquestionably and indeterminately altering all of our interior lives. The goals the group does have are classically liberal ones: They do not want to see advanced technology marshaled toward even greater surveillance, accelerated inequality and widespread disenfranchisement.
The group’s ad hoc network has rapidly grown around the globe. It did not take me long to discover that the conversations Mr. Brenner has been staging are also taking place, in different languages and cadences, among religious communities in Singapore, Saudi Arabia, Bangkok and many places in between.
In my conversations with A.I. and Faith members and others working toward similar goals, I often found myself marveling at their moral clarity. Each in their own way, they were working to use their religious traditions toward advancing social justice and combating the worst impulses of capitalism. They seemed to share an admirable humility about what they do not and cannot know about the world; it is a humility that the technology industry — and its political and legal offshoots — sorely lacks.
Over the course of my reporting, I often thought back to the experience of Rob Barrett, who worked as a researcher at IBM in the ’90s. One day, he was outlining the default privacy settings for an early web browser feature. His boss, he said, gave him only one instruction: “Do the right thing.” It was up to Mr. Barrett to decide what the “right thing” was. That was when it dawned on him: “I don’t know enough theology to be a good engineer,” he told his boss. He requested a leave of absence so he could study the Old Testament, and eventually he left the industry.
A few weeks ago, I called Mr. Boettcher to ask about the results of the study that I had participated in, posing existential questions to Alexa and Google. He was surprised, he told me, at how many of his respondents had immediately anthropomorphized the devices, speaking of the machines offering spiritual advice as if they were fellow humans. Across all religious backgrounds, exchanges with the virtual assistants triggered some of the participants’ deepest memories — going to church with their parents, for example, or recalling a father’s favorite line from the Bible — so much so that the experiment often veered into a profoundly “emotional mode.” The ease with which the devices were able to reach people’s inner worlds and most intimate thoughts alarmed him.
“There’s cautionary stuff here for me,” Mr. Boettcher said. “You’re getting into people’s memories. You’re getting into the way that they think about the world, some of the ethical positions that they take, how they think about their own lives — this isn’t an area that we want to let algorithms just run and feed people based on whether they … click on the ads next to this stuff.”
The nonreligious “nones” entered this emotional register more readily, Mr. Boettcher found. Several had come from religious families but had no faith practice of their own, and they found themselves thinking back to their childhoods as they re-encountered language from their upbringings. It signaled something like a longing, he told me. “There’s something that is wanted here.”
He is hardly the first researcher to wade into this territory. In her 1984 book “The Second Self,” Sherry Turkle, a professor at M.I.T., wrote about how computer culture was prompting a “new romantic reaction” concerned with the “ineffable” qualities that set humans apart from machines. “In the presence of the computer, people’s thoughts turn to their feelings,” she wrote. “We cede to the computer the power of reason, but at the same time, in defense, our sense of identity becomes increasingly focused on the soul and the spirit in the human machine.” The romantic reaction she described wasn’t about rejecting technology but embracing it.
In the decades since Dr. Turkle wrote that book, the human-machine relationship has grown ever more complex, our spirits and souls that much more intertwined with our data and devices. When we gaze at our screens, we also connect with our memories, beliefs and desires. Our social media profiles log where we live, whom we love, what we lack and what we want to happen when we die. Artificial intelligence can do far more — it can mimic our voices, writings and thoughts. It can cull through our pasts to point the way to our futures.
If we are to make real progress on the question of ethics in technology, perhaps we must revisit the kind of romanticism that Dr. Turkle described. As we confront the question of what makes us human, let us not disregard the religions and spiritualities that make up our oldest kinds of knowledge. Whether we agree with them or not, they are our shared inheritance, part of the past, present and future of humankind.