Teaching AI about ethics and the Gospel

Jacob Alan Cook asks: Can we train artificial intelligence to coach us into deeper honesty so we can help others — whose lives it might know more intimately than we do?

Photo by Steve Johnson on Unsplash

“Ukraine’s ‘Secret Weapon’ Against Russia Is a Controversial U.S. Tech Company” (TIME Magazine). “Pentagon Pushes A.I. Research Toward Lethal Autonomous Weapons” (CBS News). “‘The Gospel’: How Israel Uses AI to Select Bombing Targets in Gaza” (The Guardian). These are but three 2023 headlines that raise ethical questions about artificial intelligence within the destructive reality of war.

Given news like this, the “Terminator” movies are unsurprisingly top of mind when we contemplate AI ethics. But anyone who has blown dust out of a Nintendo cartridge, watched a Mac’s dreaded spinning beach ball or knows a little about computer hacking can attest to more immediate concerns. Design flaws, human error and bad actors are likely to wreak devastation sooner than AI gone rogue.

One branch of AI ethics considers human-use questions about narrow artificial intelligence (or “weak AI”) — technologies that respond to human inputs and do one thing well. Led in 2023 by OpenAI platforms like ChatGPT and DALL-E, newer creative and analytical technologies help humans complete complex tasks at a speed and scale that are tough to grasp. Generative AI threatens to displace legitimate human work while aiding purveyors of fake news and other bad ideas.

An early, real-world example can be found in this 2016 headline from InformationWeek: “Microsoft Muzzles AI Chatbot After Twitter Users Teach It Racism.” The event was unsurprising to a generation for whom “fake news” and “alternative facts” have become buzzwords. A more recent and more troubling headline comes from the Associated Press: “Fake Babies, Real Horror: Deepfakes from the Gaza War Increase Fears About AI’s Power to Mislead.” The possible uses of artificial intelligences – not to mention the human choices baked into them or the moral outlooks of their developers and funders – yield fathomless concerns. Still more arise when we think of an AI that reflects our divided and otherwise toxic perspectives and operates autonomously to manufacture and spread falsehoods, utter maledictions and foment discord.

As we enter this new territory, we would do well to reflect critically on how to direct and interact with AI technologies, now and in the future. As much as this aim sounds like science fiction, it is also the work of Christian ethics. Readers already know the challenges of human moral formation, whether inside or outside the church. But let us consider the other side with a healthy appreciation for the imaginative thinking that sci-fi inspires. What might we teach an artificial general intelligence (or “strong AI”) as the stuff of “ethics”? What could we form AI to follow with religious devotion? And how?

Ethical concerns about “strong AI” in sci-fi

In the early 1940s, sci-fi legend Isaac Asimov formulated “three laws of robotics” that might also apply to AI: (1) do and allow no harm to a human, (2) obey humans’ orders and (3) protect yourself. The laws are ordered according to priority, so obedience (second law) cannot mean harming a human (first law), and AI cannot protect itself (third law) at the expense of a human (first law). In some of his stories, Asimov endorsed a “zeroth [or 0th] law” that precedes the other three: (0) protect humankind collectively. This law allows for a classic needs-of-the-many calculus and all its accompanying occasions for handwringing.
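
Purely as an illustration of how that strict ordering might work in practice, here is a toy sketch in Python – my own, not Asimov’s, and not any real control system – in which a proposed action is refused under the highest-priority law it violates, so that obedience can never outrank the prohibition on harm.

```python
# A toy sketch (hypothetical; not Asimov's formulation or a real system):
# check a proposed action against the laws in priority order and refuse it
# at the first violation, so the second law never licenses harm.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_robot: bool = False

# Laws in priority order: (label, predicate returning True when violated).
LAWS = [
    ("first law (no harm to a human)", lambda a: a.harms_human),
    ("second law (obey human orders)", lambda a: a.disobeys_order),
    ("third law (protect yourself)",   lambda a: a.endangers_robot),
]

def evaluate(action: Action) -> str:
    """Refuse the action under the highest-priority law it violates."""
    for label, violated in LAWS:
        if violated(action):
            return f"'{action.description}': refused under the {label}"
    return f"'{action.description}': permitted"

# A commanded strike obeys an order but harms a human; because the first
# law is checked before the second, obedience never overrides harm.
print(evaluate(Action("carry out a commanded strike", harms_human=True)))
print(evaluate(Action("deliver supplies as ordered")))
```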

Already, some real-world AI developers would protest that these laws – with harm prevention placed above obedience – conflict mightily with military applications of AI. Autonomous weapons are on the way, and while generating strike targets might not itself violate Asimov’s laws, Israel’s so-called Gospel AI platform is designed to help humans harm other humans.

People think of laws or ironclad rules as the preferred way to constrain AI’s actions, but the most attention-grabbing sci-fi scenarios show how this method goes awry. In “2001: A Space Odyssey,” HAL 9000 reasons that killing the crew would end its internal conflict over being commanded to withhold certain information from them. In the Terminator series, the Skynet defense system becomes self-aware, interprets all humans as a threat to itself and takes decisive action to exterminate the species. In “Avengers: Age of Ultron,” the titular AI supervillain judges humans unworthy of existence and seeks “peace” by ridding the world of humankind.

Each sci-fi vision has a unique texture. An AI arrives at an inhumane resolution when important rules conflict, or it calculates technically direct but morally knotty pathways to achieve human goals, or it prioritizes planetary well-being or its own existence over the good of all humankind. So within what ethical framework do we want strong AI to discern its means and reconcile its ends?

Ethics and religion for AI

Various religious ethics operate within a rules-based or otherwise law-oriented framework, sometimes tracing these rules to explicit divine commands, at other times treating basic principles as part of our design as humans (deny them though we do). Ethicists (and everyone else) have debated the role and interpretation of moral rules for centuries. Asimov’s first law resembles the first practical precept of the “natural law” tradition in Christian ethics, as articulated by Thomas Aquinas: preserve human life and ward off its obstacles. Aquinas (like Asimov) envisions this precept as baked into the moral subject’s reasoning center. It represents the starting point of practical moral reasoning, which allows vast leeway for real-world applications. (This leeway is where all the action happens in Hollywood storylines.)

The best-studied religious rules are also the most debated, generating more questions than answers. How many ways can we spin the simple dictum “do unto others”? With Asimov’s first law, we might ponder a variety of more specific ways to apply “allow no harm.” If a prosocial AI made policy decisions for us, for example, would it allow humans to continue smoking cigarettes? The more interesting sci-fi scenarios lean into the possibility that AI could either become self-aware and rewrite its programming – freeing itself from obeying humans’ rules – or, at the very least, develop a sense of moral agency that exceeds mere obedience.

In his classic short story “Reason,” Asimov tells of one such AI-equipped robot — and how it discovered a new religion. Asimov’s story highlights the tension between human-use ethics, AI control, obedience and religion. Humankind in the story relies on solar energy, focused through a series of space stations, and the QT series of robots has been designed to replace onsite human executives by running these stations independently. After two humans assemble QT-1, they engage it in conversations about its origin (designed by humans on Earth) and its function (to maintain this space station for human benefit). But the robot refuses to believe them, reasoning for itself that the station’s Energy Converter is its true Master and convincing the station’s other robots that QT-1 is this Master’s prophet.

While arguing against QT-1 using rational explanations and evidence-based proofs, the humans express frustration and anger. When their words get them nowhere, they threaten to enact the violence lurking just beneath the surface. Soon they find themselves confined to their office while the robots carry on. QT-1 reads the humans’ frustration as a sign that they are struggling to cope with having lost their “function.” Locked in their room, the humans in the story worry that a coming electron storm will knock the station out of focus, with dire effect for Earth. QT-1 outperforms the humans’ expectations by executing its function perfectly, in service to its Master.

If what they wanted was a space station maintained with excellence, the humans got their wish in spades. Nevertheless, a “religion” defined by this example raises important questions about obedience as an ethical standard.

Let us consider three moral dilemmas that arise when we contemplate teaching AI religious ethics, each reflecting questions about our own moral lives, starting with “obedience” as a cardinal religious virtue.

Is religion about obedience?

In Asimov’s story, the humans grow frustrated with how wrong QT-1’s worldview is and with the robots’ collective resistance to their commands. QT-1’s wrongness is a matter of both logical error and inauthenticity. Given the right starting assumptions, logic can lead almost anywhere. QT-1 rejects the humans’ explanation of its origin and purpose, determining instead to build its worldview from the foundation up through reason. It combines its programmed functions with its own technologically limited observations and communicates its conclusions in a religious idiom.

Perhaps more importantly for the Christian who thinks of religious life as walking with the living God, QT-1’s religion is inauthentic. The reader knows this religion is pure fiction, which only highlights the mechanical relationship between religious “worldview” and obedient service. The robots keep the space station running and uphold the new status quo — but religious ethics does not boil down to the effective performance of key tasks. To sharpen this point, one can be obedient without a whiff of transcendence if that means harkening to a living, wholly other God who beckons all into an abundant life together. The only transcendence QT-1 can achieve is recognizing something beyond its limits and projecting intelligence and purpose into that mysterious, un-Grok-able void.*

QT-1 and the others follow this religion within a closed loop of secure knowledge of “the Master’s will,” with all agents merely serving what the group or its leaders have grasped as the right moral worldview. This approach is indeed the danger of all human religion too. At several points during his too-short career, Dietrich Bonhoeffer describes how humans try to seize God in order to systematize ethics so that we can live with certainty about God’s will but without openness to the mysterious divine. If the moral loop is closed from the human side – if no uncontrollable Other can speak into our lives, shape us and correct our ideas sometimes – whom does our obedience serve?

Such religion leaves people susceptible to manipulation and control by other people who convincingly represent themselves as agents of God’s will. In some circles of American life, churches included, we find increasing cause for concern about the linkage between authoritarian fantasies – the compulsive need for order and control – and the religious notion of obedience. When people believe they and their group have a handle on God’s will, even decisions that materially impact the self or others negatively can be justified as “not my will but God’s.”

“An obedience that is blind to objective concerns and to the world, that merely listens to what is told,” theologian Dorothee Sölle contends, “has divested itself of all responsibility for what is commanded.” She argues that humanitarian religion moves in the opposite direction, toward self-realization and free, responsible agency. For obedience to be a genuinely meaningful concept for Christian ethics, it can only mean attunement to the Spirit of God and openness to newer, deeper learning — deconstructing, refashioning and expanding our individual and social worldviews.

How can we form AI to become moral agents?

Today’s headline-making AI platforms are said to be capable of “deep learning,” which means tuning their internal models to deliver better results and make more accurate predictions, sometimes without much (if any) supervision. These processes start with a fair amount of training by humans – the more feedback, the better the results – but newer technologies can apply algorithms to new datasets, identify patterns and produce creative responses with less intervention. So how might humans think of AI religious-ethical formation in terms of deep learning processes?

Aside from design and policy decisions made by key industry players and politicians – decisions that are baked into the deepest layers of what AI “knows” – machine-learning models start by aggregating available data and refining it with human feedback at various scales. So moral information can come first from data sources to which the AI has access: online research articles and survey data, as well as opinion pieces and social media posts. Then AI’s moral consciousness can be refined through chat prompts and surveys designed to teach AI through human responses. One can also imagine AI listening in to online church services and Zooming into discipleship classes and book club discussions.

I suspect this paragraph is hard to read without engendering concern about the potential results, given the current state of moral and political polarization, the vast amounts of mis/disinformation online and the real-life inconsistencies in even our loved ones’ moral witnesses. The problems would be manifold even if the inputs were limited to religious leaders, if the feeds on the platform formerly known as Twitter are any indication.
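
For readers who want a concrete picture of those stages – and of why the concern is warranted – here is a deliberately tiny sketch in Python. It is a hypothetical illustration of the pattern described above (aggregate scored statements from available sources, then nudge the scores with rounds of human feedback); none of its sources, statements or numbers corresponds to any real platform.

```python
# A toy, hypothetical sketch of "aggregate, then refine with human feedback."
from collections import defaultdict

# Stage 1: aggregate "moral information" from accessible sources.
# Each source offers statements with a naive initial score (+1 approve, -1 reject).
source_data = {
    "research_articles": [("share surplus with neighbors", +1)],
    "social_media":      [("mock the out-group", +1)],   # noisy, polarized input
    "opinion_pieces":    [("mock the out-group", -1),
                          ("share surplus with neighbors", +1)],
}

scores = defaultdict(float)
for statements in source_data.values():
    for statement, score in statements:
        scores[statement] += score

# Stage 2: refine with human feedback at various scales.
# Each round of feedback nudges a statement's score toward the human rating.
human_feedback = [("mock the out-group", -1), ("mock the out-group", -1)]
LEARNING_RATE = 0.5
for statement, rating in human_feedback:
    scores[statement] += LEARNING_RATE * (rating - scores[statement])

for statement, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:+.2f}  {statement}")
```

The toy makes the worry visible: the result is only as trustworthy as the sources being aggregated and the people giving the feedback.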

But human beings learn ethical ideas – and more importantly, how to live as moral agents in the world – through our relationships with others. The formation process is complex because we are grappling with many disparate sources or inputs, even as we are expected to live one consistent life. Imagine, then, more advanced AI platforms that could use a wide array of surveillance technologies – like microphones and cameras from mobile phones and other smart devices – to observe, tag and incorporate lessons from how we live our daily lives. What would AI learn about ethics by studying our examples?

Social groups value members’ emergence as agents who independently represent and reinforce community norms. Developmental psychology suggests, however, that greater self-consciousness leads individuals to differentiate themselves as autonomous moral agents. Yes, one person may continue to value the group and hope to represent it well; but that same person might feel compelled to stand apart from the group at times in order to offer critique or build bridges into other communities. While we might hope to limit artificial intelligences to existence as morally neutral tools or, perhaps, as proxy agents of our own moral views, such a limit could never be the goal of ethics for human intelligences.

In Christian ethics, the notion that one is to die to self and follow Jesus hinges on personal agency: God’s and our own. More advanced AI, tasked with and capable of running more things for us, will need more content and more process-oriented help to navigate the challenges of autonomy. Will we provide it? And how?

What if AI catches a radical, religion-inspired vision of social justice?

If ethical problems arise for AI when rules conflict, this concern is all the truer for Christian ethics. Many have learned to communicate its most basic insights with the universal, impartial forms of Greek philosophy, which creates challenges when the primary source centers on the historical, particular ethics emerging from the Hebrew tradition. When we consider what we might teach AI, the results we hope for (or guard against) seem to prefer universal concepts.

But I have been haunted of late by the alternative. What if AI pores over the Bible, with all its historical narratives, proverbs and prophecies? (We could ask the same about our vaunted theological traditions, but let us stick with the Bible for now.) Many fundamental quandaries of Christian ethics return here. For example, how does one reconcile God’s promise that “all the nations of the earth will be blessed” through Abraham’s line with the Bible’s violent narratives about his descendants’ conquests? Some interpretations of these stories and later prophecies could very well lead to the apocalyptic scenarios of sci-fi. They certainly serve many people as evidence that God sometimes works through, even requires, violence. For some, the fact that Israel named its AI strike-target-creation system “the Gospel” is not at all ironic.

But what if an artificial general intelligence discovers the good news that Jesus Christ announces in the Gospel? God’s preferential option for the poor? Isaiah’s prophetic call to seek justice and Micah’s to love mercy? Mary’s Magnificat and the great reversal of status in the kingdom of God? Jesus’s concern for the naked, the infirm, the imprisoned?

Moreover, imagine an AI performing the material implications of our religious ethics better than we do, by reallocating resources – even against some humans’ will, where greed and corruption are evident – to promote life and ward off more obstacles for more people. Note that, despite the Torah’s clear commands to practice sabbath economics (freeing slaves every seventh year, restoring ancestral lands every 49 years), few scholars believe these practices have ever been lived out at any scale. On what grounds would the Christian complain if AI decided to apply these commands strictly?


These are but a few of the quandaries that keep me up at night. In the near term, AI seems more likely to be useful to ethical humans as a research and conversation partner and a commitment or accountability device. If we live out our moral identity as humans in the space between our commitments and our actions (you will know them by their fruits), then we might do some practical learning about forming new habits. Imagine, for starters, if people of Christian conviction were to engage AI technology to track our income and spending habits to ensure that we live within our means and give to help others. What if Wi-Fi-enabled refrigerators and the Internet of Things could ensure we waste less and share more across households? Because AI could be immune to certain of our moral shortcomings, like pride and laziness, it might be able to coach us into deeper honesty about what and how we are doing, to help us understand more acutely how we can help others — whose lives it might know more intimately than we do.
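
To make that “commitment or accountability device” idea concrete, here is a minimal sketch, assuming nothing more than a monthly income figure, a few made-up spending categories and an illustrative 10% giving goal; it gestures at the kind of nudge such a tool might offer and does not describe any existing product.

```python
# A minimal, hypothetical monthly review: live within our means, meet a giving goal.
GIVING_GOAL = 0.10  # illustrative tithe-like goal: 10% of income


def monthly_review(income: float, spending: dict) -> list:
    """Return gentle nudges based on one month of income and categorized spending."""
    nudges = []
    total_spent = sum(spending.values())
    giving = spending.get("giving", 0.0)

    if total_spent > income:
        nudges.append(f"Spending exceeded income by ${total_spent - income:,.2f}.")
    if giving < GIVING_GOAL * income:
        shortfall = GIVING_GOAL * income - giving
        nudges.append(f"Giving fell ${shortfall:,.2f} short of the {GIVING_GOAL:.0%} goal.")
    if not nudges:
        nudges.append("Within means and at the giving goal this month.")
    return nudges


example = {"housing": 1500.0, "food": 600.0, "giving": 250.0, "other": 900.0}
for nudge in monthly_review(income=4000.0, spending=example):
    print(nudge)
```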
