What is AI?

Artificial intelligence is as common as cell phones, but understanding it – even while using it – is another story, writes Dartinia Hull.

The term “artificial intelligence” (AI) slid into our vernacular in the 1950s. Since then, the term and the tech have become ubiquitous in the U.S., as common as cell phones. But understanding the thinking behind it – even while using it – is another story.

But – what is it?

It’s not “just” a program, much in the same way the internet wasn’t “just” some linked websites. According to a 2007 paper written by computer scientist John McCarthy and posted on the Stanford University website, AI is “the science … of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

Aruna Pattam, a LinkedIn Top Voice and AI expert, put it more simply in 2021: “Artificial intelligence is the science of making machines that can think like humans. It can do things that are considered ‘smart.’”

Where is it?

According to a 2023 study by Pew Research Center, most adults in the U.S. are aware of some of the common ways AI has become a part of everyday life — for instance, product recommendations based on what you’ve previously bought. However, only 30% of adults in the U.S. “are able to correctly identify the six specific uses of AI asked about in the survey, underscoring the developing nature of public understanding,” Pew states.

Examples:

  • A playlist recommendation from your music app
  • Fitness trackers that record hours of sleep and exercise
  • The spam filter in email
  • A chatbot that answers questions immediately for customers
  • Security cameras that send alerts for unrecognized faces at the door
  • Product suggestions based on previous purchases

On a self-reported measure, Pew says 44% of Americans feel they never interact with AI – at all – and that might be true if they never log into email, or venture out anywhere that has security cameras, or take a product recommendation from a chatbot.

Understanding AI requires getting familiar with the accompanying dictionary, conveniently located in a Siri or Alexa near you. What does open source mean? (It’s modifiable code that’s publicly accessible. You can create your own AI, which is a heady thought.) What is ChatGPT? (It’s probably the best-known generative AI chatbot. It can do everything from writing headlines – or a sermon – to planning vacations.) What’s a chatbot? (Say “Hey Siri,” and there’s part of your answer; generative chatbots can also “write.”)

Alexa, is there time to buy bread and milk before the snowstorm hits? 

AI relies on thousands – millions, perhaps – of bits of information from everywhere: books, blogs, newspapers, museums, churches, websites. Horoscopes. Vacation rental postings. That’s how it thinks. That’s how it thinks so quickly. Countless minds and music sites went into determining what should follow Maze featuring Frankie Beverly in my music app. It was the Women of Philly Soul. Beverly is from Philly.

I used to work with a guy who ran a complex stock-market analysis program daily. He’d cross his arms and watch the green code flash on the screen. If anybody asked him to do anything else before the program finished, he’d refuse.

“When my computer’s thinking,” he’d say, “I’m thinking.” Since then, I’ve been in meetings where saying the computer was thinking was close to blasphemy. But – wasn’t it thinking, in the way we’ve designed it to gather information and make logical turns and decisions toward rational outcomes?

AI was perhaps a logical step in the “Star Trek” direction tech is taking us. And it’s not new. Computer-driven art from the 1970s could be considered a precursor to today’s generative apps that construct content, “including text, imagery, audio and synthetic data,” according to TechTarget. Artificial intelligence is embedded now: the “intelligence” answers questions on your hardware store’s website, guides us through onboarding at a new job, helps us write succinct memos. AI is present in programs that make our lives simpler.

Don’t we all yearn for so-called simpler, easier times? And yet …

It is fallible, and not the One With All Answers. AI is complex enough – and intuitive enough – to make us wary.

According to Pew, “Americans are increasingly cautious about the growing role of AI in their lives generally. Today, 52% of Americans are more concerned than excited about AI in daily life, compared with just 10% who say they are more excited than concerned; 36% feel a mix of excitement and concern.” Can it really “think”? Whose thoughts is it thinking? Whose biases is it following? Which information is it choosing? That all determines the outcome, right? And: if an AI process follows a specific logic, what would happen if it decided it understood the collective of humanity better than the humans who gave it its brain?

Computer: Tea. Earl Grey. Hot.

Artificial intelligence doesn’t (yet) have the ability to produce, out of the ether, a cup of piping hot tea for “Star Trek’s” Jean-Luc Picard. Right now, for many of us, AI is mostly good for the mundane or the brain-busting, and the in-between areas of thought.

What if AI assisted with some of our daily mental tasks, or took Einstein-level computations off our plates, giving us more time to be creative? To be with each other and truly see one another rather than looking past or away? What if, during that time, we began to see each other again as children of God? Or what if we discovered new ways of receiving God’s word? Organic possibilities are myriad.

To test some ideas on how this could look in a faith-based situation, I ran an idea for a children’s lesson through ChatGPT. Give me, I asked, a four-minute children’s sermon about the bravery of Queen Esther as she confronted her husband, the king, about the treatment of her people, and what this story says about bravery and God’s support of you as you stand for what’s right. Make this for a Presbyterian congregation.

This is part of what it returned:

Today, let’s embark on an exciting journey into the story of Queen Esther from the Bible. Imagine living in a grand palace as a queen, like Esther did, and discovering that something not very nice was happening to your friends and family. You see, Esther’s husband was King Xerxes, a powerful ruler. One day, Queen Esther learned that a rule was made that could hurt her people, the Jewish people. It made her very sad because she loved her people and didn’t want to see them treated unfairly.

Esther’s bravery comes into play. Instead of staying silent, she decided to do something about it. Can you imagine talking to a king, especially when he didn’t ask you to? It takes a lot of courage, doesn’t it? Esther knew she had to be brave and speak up for what was right.

Esther’s courage teaches us an important lesson about standing up for others, even when it’s difficult. When we see something unfair happening, we can be brave like Esther and find a way to speak up for those who need our help.

Now, let’s talk about faith. Esther had a deep belief in God, and that belief helped her be brave. Faith is like having a special friend in God who is always there for us. When we trust God and know that God loves us, it gives us the courage to do what’s right, just like Esther did.

And guess what? When Esther went to talk to the king, she wasn’t alone. God was right there with her, helping her find the right words and giving her the strength to stand up for her people. This reminds us that, no matter what we face, God is there to help us, too — like a loving friend who supports us.

Not great. And not necessarily what a future pastor seeks when in seminary. Also — not a horrible start for a part-time pastor who has full-time pastoral duties and a full-time family.

But doing a search for God and AI in a visuals app gave me Jesus with the good hair. White Jesus. More than a few of them, which supports what Eric Nolin writes on page 38: people of color in tech are few, so visuals and wording carry an inherent bias. It’s not the fault of the programmed, pre-built app. It only reflects what has been uploaded. For a pastor who has but two minutes before faith formation class begins, what does this mean for the ways we are able to consider and dispense the image of what’s holy and good?

I’ll stop now. The deeper you dig, the more you realize the topic and use of AI are as bottomless as the universe. Which, for most humans, means there’s little chance of avoiding AI. Which, for Christians, especially the ones who embrace thoughtful reform, means learning how to become “Star Trek’s” Uhura, who understood all the things about the tech keeping the Enterprise moving but knew when to consider adjustments.

She kept her eye on the big picture.

Which, for us, is not necessarily the tech, but learning to move forward while serving God as faithfully as possible with the fallible tools we’ve been given.
