This podcast includes recordings of Biblical teachings from various Equipping Hour classes at Grace Bible Church. These classes cover a wide variety of topics, from specific books of the Bible to parenting to missions to church history. For more information about Grace Bible Church, visit gbcaz.org.
Tempe, Arizona

The following is an AI-generated approximation of the transcript from the Equipping Hour session. If you have questions you would like addressed in follow-up sessions, please direct those to Jacob.

Opening & Introduction

Smedly Yates: All right, this morning’s equipping hour will be about artificial intelligence. Hopefully an attempt to introduce this topic and help us think through it carefully, well, biblically. Let me just open our time in prayer.

[Prayer] Heavenly Father, thank you so much for your kindness to us. Thank you for giving us all that we need for life and godliness, for not leaving your people adrift. Thank you for putting us into this world exactly in the era that you have. We pray to be effective, fruitful, in all those things which matter for eternity in this world, in this time, in this age. God, we pray for wisdom, that you would guide our discussion here. We pray that this would be of benefit and a help to Grace Bible Church. We ask it in Jesus’ name. Amen.

Here’s the layout for this morning and for a future equipping hour. We’ll be talking for about 35 minutes, back and forth, Jake and I, and then at 9:35 the plan is to go to Q&A. So this is an opportunity for you to ask questions. At that point I’ll surrender my microphone and you guys can rove and find people. For the next 33 minutes or so, you can be thinking about the questions you’d like to ask.

Jake’s going to do most of the talking in our time here. I’m going to set him up with some questions, but just by way of intro, I want to get some things out of the way as we’re talking about artificial intelligence. You might be terrified, you might be hopeful. I want to get the scary stuff out of the way first and tell you what we’re not going to talk about this morning. Is that fair?

Artificial intelligence is here. Some of you are required to use it in the workplace. Some of you are prohibited from using it in your workspaces. There’s nothing you and I can do to keep it from being here. Some of the dangers, some of the things you might be wondering about, are the things that make the news headlines. Over the last two weeks, scanning the headlines, there was a new AI headline every day.

One of the terrible things that we won’t talk about today is the fact that nobody knows what’s true anymore, right? How can we discern? But the reality is that the god of this world has been Satan for the entirety of human history, and he has been a deceiver from the beginning. There’s nothing new about lies. They might be easier and more convincing with certain technological advances. The lies might be more ubiquitous, but the same humanity and the same satanology are at play.

We may be concerned about societal fracture and distrust. Some people, if they distrust new tech, will withdraw from society. Others will fully embrace it. And so you get a fracture in society: those with tech and those without. Some people will just say, “If the digital world works, we’re going to use it.” That’s not the Christian perspective. We’re not simply pragmatists. We do care about what’s true and what’s right. Some are worried about AI chatbot companions that will mark the extinction of relationships, marriage, society. I probably fall into the category of those who assume that AI will mean the end of music, the death of music and other art forms. That’s just me, a confession.
People run to end-of-the-world scenarios: the robots decide they don’t need us anymore, or the collective consciousness of AI decides that humanity is a pollutant on Mother Earth, and the only way to keep the earth going is to rid itself of humanity. The survival of the planet is dependent on our own extinction. So AI will bring about a mass human genocide and the end of homo sapiens on earth. We know that’s not true, right? We know how the world ends, and it doesn’t end by an AI apocalypse. So don’t worry about that.

Some people worry that AI will be a significant civilization destabilizer. That might be true. But we know that God is sovereign, and we know where society and civilization end up: at the feet of Jesus, worshipping him when he rules on the earth for a thousand years leading into the eternal state. So don’t worry about that either.

Some believe that AI is the antichrist. Now we know that’s not true. What is the number of the beast? 666. And this year it got rounded up to 67. So we know AI is not the antichrist. 67 is the antichrist. And if you want to know why the numbers six and seven got together in the year 2025 and formed the new word of the year, ask your middle schooler.

Is that all the scary stuff? Not even close. I have a family member who has worked in military intelligence on artificial intelligence for a long time. He said it’s way scarier than you could possibly imagine. Do you want to name any other scary scenarios we shouldn’t be thinking about?

Jacob Hantla: No, we’ll probably cover some of those.

Smedly Yates: Okay, great. What we want to focus on today is artificial intelligence as a tool. Just as an axe can be a tool for good or evil, AI is a tool that presents either opportunities for betterment or opportunities for danger. So we want to think about that well. What you have on stage here are two of the shepherds at Grace Bible Church. You’ve got Jake Hantla, who is the guy I want exploring artificial intelligence and telling us how to use it well; he has and he does. And then you have me; I intend not to use artificial intelligence for now. We’re on opposite ends of a spectrum, but we share the same theology, same principles, same concerns, and I think the same inquisitive curiosity about technological advances. I drive a car; I’m not Amish in a horse and buggy. I like tech. But on this one, I’m just going to wait and see. I’m going to let Jake explore. From these two different poles, I hope we can be helpful this morning in helping us all think through artificial intelligence.

What is AI?

Smedly Yates: Let’s start with this, Jake. What is AI, basically?

Jacob Hantla: At the heart of it, most forms of AI are a tool to predict the next token. That might not mean much to you, but it’s basically a really fancy statistical prediction machine that accomplishes a lot of really powerful outcomes. It doesn’t have a mind, emotions, or consciousness, but it can really effectively mimic those things, because it’s been trained on basically all that humanity has produced that’s available to it on the web and in other sources. I’ll try not to be super technical, but I want to pop up a picture. Can you go to slide one? When we think of AI, large language models are probably the kind most of you will think of: ChatGPT, Gemini, Grok, Claude, things like that.
Effectively, what it does when we’re thinking of language (it can do other things, like images and driving cars, but let’s think of words) is take basically all that humanity has written and learn to predict the next token, or we could just think of the next word. So, all of you know, if I said, “Paris is a city in…” most of you would say France. Paris is a city in France. How do you know that? Everyone here has learned that fact. Large language models have gone through a process of training where they learn facts, concepts, and grammar, so that they can effectively speak like a human in words, sentences, and paragraphs that make sense. So how did it get to that? On the right of the slide, there’s just a probability that “France” is the most probable next word.
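To make that last step concrete, here is a minimal sketch in Python of how raw scores over candidate words become next-word probabilities via a softmax. The candidate words and scores are invented for illustration; a real model scores every token in a vocabulary of tens of thousands.

```python
import math

# Toy illustration: a model assigns a raw score (a "logit") to every
# candidate next word, and softmax turns those scores into probabilities.
# These words and scores are invented for the example.
logits = {"France": 9.1, "Europe": 5.0, "Texas": 3.2, "gopher": 0.4}

def softmax(scores):
    # Subtract the max score for numerical stability, exponentiate, normalize.
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)
for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>8}: {p:.3f}")
# "France" comes out on top: the most probable next token
# after "Paris is a city in ...".
```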
How did it get there? Next slide. I’ll go fast. Basically, it’s a whole bunch of tunable weights. Think of little knobs, statistical probabilities that interlink parameters. These things get randomized (there are trillions of them in the modern large language models), they’re just completely random, and then it starts feeding in text. Let’s say it was “It was the best of times, it was the…” and it might say “gopher” as the next word when you just randomly start, and that’s obviously wrong. The right word would be “worst.” So, over and over and over again, for something that would take one computer about a hundred million years to do what they do in the pre-training, they have lots of computers doing this over and over until it can adequately say, “Nope, it wasn’t gopher. It should be worst. Let’s take another crack at it.” It just manipulates these knobs until it can act like a human. If you fed it a mystery novel and at the end it said, “The killer was…” it has to be able to understand everything before that point to adequately guess who the killer was, or to answer “What is the capital of France?” It compresses tons and tons of knowledge from all of the written text. Then you start putting images in, and it compresses knowledge from images and experience from life into a whole bunch of knobs; basically, numbers assigned so it can have an output that is reasonable.

Next slide. Pre-training is the process where you’re basically feeding text into it and it’s somehow learning. We don’t even know how; humans are not choosing which knobs mean what. It’s a black box. We can sort of start to figure out which knobs might mean things like masculinity or number or verbs, but at the end, you just have a big bunch of numbers. Then humans come in and train it: reinforcement learning from human feedback. They say, “This is the kind of answer we want this tool to give.” At the outcome, people are saying, “We ask it a question, it outputs an answer, and we say that’s a good one, that’s a bad one.” But in this, you can see there’s lots of opportunity for falsehood or biases, unstated or purposeful, to sneak in. If you feed bad data into the training set, and if it’s trained on all of the internet, all that humans have made, you’re going to have a whole lot of truth in there, but also a whole lot of falsehood. It’s not learning to discern between those things; it’s learning all of those things. In reinforcement learning from human feedback, we’re basically fine-tuning it, saying, “This is the kind of answer we want you to give,” and that’s going to depend on who teaches it. Then the final step is people judging the answers: “This is the kind of answer we want, this is the kind we don’t want.” Lots of opportunity for biases to sneak in.
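A toy illustration of that knob turning, with everything invented for the example: two knobs hold the model’s raw scores for “gopher” and “worst,” and a few rounds of nudging make the word the training text actually used more probable. Real pre-training does this same kind of adjustment across trillions of weights and enormous amounts of text.

```python
import math

# Toy "knob turning": two knobs are the model's raw scores for two candidate
# next words after "It was the best of times, it was the ...".
logits = {"gopher": 2.0, "worst": 0.0}   # random-ish starting knobs
target = "worst"                          # the word the training text actually used
lr = 0.5                                  # how far to turn the knobs each step

def softmax(scores):
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

for step in range(10):
    probs = softmax(logits)
    # For softmax with cross-entropy loss, the gradient w.r.t. each score is
    # (probability - 1) for the correct word and (probability - 0) otherwise;
    # stepping against the gradient turns the knobs toward the right answer.
    for word in logits:
        grad = probs[word] - (1.0 if word == target else 0.0)
        logits[word] -= lr * grad
    print(f"step {step}: P(worst) = {softmax(logits)[target]:.3f}")
# P("worst") climbs toward 1: the knobs now favor the right word.
```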
That was a long answer to “What is AI?” It’s a prediction machine with a whole lot of math going on.

What Sets AI Apart from Other Technology?

Smedly Yates: Jake, what sets AI apart from previous technological advances, especially as it relates to intention?

Jacob Hantla: Tech could be as simple as writing, the wheel, the airplane, telephones, the internet, all those things. All of those, in some sense, enhanced human productivity, strength, our ability to communicate. We could pick up a phone and communicate over distance, or use radio waves to communicate to more people, but it was fundamentally something that humans did, magnified. A tractor takes the human attempt to cultivate a field and increases its efficiency. AI can do that too: a human in control of an AI can really augment the productivity and effectiveness of a human. You could read a book yourself to gain knowledge, or have AI read a book and summarize it, and you get the knowledge. So it’s similar in some ways, but it’s very different in that it’s generative: AI can, for the first time, generate things that look human.

AI and Truth

Smedly Yates: Tell me about the relationship between AI and truth. You touched on it a little bit before.

Jacob Hantla: AI contains a lot of truth. It’s been trained even on ultimate truth; AI has read the Bible more times than any of us ever could. To a large degree, it understands, as AI can understand, a lot of true things, and it can hold those truths simultaneously in ways that we can’t. But mixed in is a lot of untruth, and AI can’t have the Holy Spirit. AI isn’t motivated the same way we are to know what’s true and what’s not. So AI contains a lot of truth and can help you get to truth. You can give it a bunch of true documents and say, “Can you help me? Can you summarize the truth that’s in here? Or actually, just summarize what’s in here?” If what’s in there was true, the output will be true; if what’s in there was false, it will output falsehood. It doesn’t have the ability or the desire to determine what is true and what’s not.

AI, Emotion, Values, and Worldview

Smedly Yates: So, ability and desire are interesting words. Let’s talk about emotion in AI, values in AI, worldview, and regulation of data. For us, true/false claims matter, or they don’t, depending on our worldview and values. Is there a mystery inside this black box of values, of emotion? How do we think about that?

Jacob Hantla: First, AI doesn’t inherently have emotion or values, but it can mimic them based on the data it’s been trained on. You can ask the same AI a question and, unless you guide it, it will likely give you a hundred different answers if you ask the same question a hundred times. Unless it’s been steered in one direction, some answers will be good, some will be bad, and everything in between. It’s generating a statistical probability. It doesn’t inherently have any of those things, but it can mimic them. It can be trained to have the values of the trainers. You can have system prompts where the system is prompted to respond in a way that mimics values, mimics emotions. The danger is if you just accept what it says as truth, which a lot of people will do. You say, “I want to know a piece of data,” you ask the AI, the answer comes out, and you accept it. But you have to understand the AI is just generating a response based on probabilities. If you haven’t guided it to have a set of values, you don’t know what’s going to come out, and somebody may hide some values in it. Gemini actually did this. I think it was Gemini 2, but if you asked for a picture of the Founding Fathers, it would, because it was taught in the system prompt to prioritize diversity, give you images of a diverse group of females or different races, other than the races of the actual Founding Fathers. It had a hidden value in it. You can guide it to have the values you want with a prompt. It’s not guaranteed, but this is the kind of thing I would encourage you to do if you’re using these tools: put your own system prompt on it, tell it what worldview you want it to come from and what your aim is, and you’ll get a more helpful answer than not.

Is AI Avoidable?

Smedly Yates: Is AI something we can avoid, ignore, be blissfully ignorant about, put our heads in the sand?

Jacob Hantla: You could, but I think it’s wise that we all think about it. I’m not encouraging people to adopt it in the same way that I have, or to hold back from it in the way Smed has. But the reality is, the world around us has changed. It’s irreversibly different because of the introduction of this technology. That’s what happens with any technology; you can’t go back. Technological advances are inevitable, stacked on scientific discovery and prior advances. If OpenAI wasn’t doing what it’s doing, somebody else would. You can’t go back, and you can’t ignore it, because the world is going to be different. You’re going to be influenced by both the presence of it and the output of it. When you get called on the phone now with a very believable voice, it might not be the person it sounds like; AI can mimic what it’s been trained on. There are thousands of hours of Smed’s voice; it won’t be long before Smed could call you and it’s not Smed. Or Scott Demerest could send you an email asking for a credit card and it’s not Scott. News reports are generated by AI; some of them are true, effective, good summaries, and some could be intentionally spreading disinformation or straight-up falsehood. If you’re not aware of the presence of these things, you could be taken advantage of. Some work environments now require you to do more than you could have otherwise, and not being willing to look at the tools in some jobs will make you unable to compete.
Commercially Available AI Products: Benefits and Dangers

Smedly Yates: Let’s talk about the commercially available AI products that people can access as a tool. What are the opportunities, the benefits, and what are some of the dangers?

Jacob Hantla: There are so many we couldn’t begin to go through all of them, but the ones most of you will interact with are large language models. People just say “ChatGPT” like Kleenex for tissues; it was the first one that came out, and it’s probably the most ubiquitous, one of the easiest to use, and one of the most powerful free ones. There’s ChatGPT by OpenAI, Gemini by Google, Claude by Anthropic, Grok by xAI (Elon Musk’s), DeepSeek from China (good to know that it’s made and controlled by China), Meta’s Llama, and so on. Do the company names matter? Yes. It’s good to know who made it and what their goals are, because worldviews are to some degree baked into the model. If you’re ignorant of that, you’ll be more likely to be deceived, or you won’t use the tool to its maximum. With all of these, we’re talking about large language models. I drive around now with AI driving my car; ultimately, it’s a similar basis, but that’s not our focus here. Large language models open up the availability of knowledge to us. They’re superpowered Google searches.

You can upload a bunch of journal articles and ask it to train you to mastery on a topic. For example, I was trying to understand diastolic heart failure and aortic stenosis. I uploaded articles and had a built-in tutor. The tutor asked me questions, evaluated my understanding, and used the Socratic method to train me to mastery. It did in 45 minutes what would have taken me much longer on my own. Every tool can do that.

The bad side: you could have it summarize articles for you and now feel like you have mastery you didn’t actually gain. You could generate an essay or pass a test using it, bypassing the entire process of learning and thinking. Students: if you have a tool that mimics human knowledge and creativity, and you have an assignment to write an essay, and you turn in what the tool generated as your own, you’re being dishonest and you bypass the learning process. The essay wasn’t the point; the process was. Passing a test is about assessing whether you know things. If the AI does it for you, you bypass learning. I liken it to going to the gym. The point isn’t moving the weights; it’s building muscle. With education, the learning process is like exercise. It’s easy to have AI do the heavy lifting and think you did it, but you didn’t get stronger. So be aware of what you’re losing and what you’re gaining. The tool itself isn’t morally good or bad; it’s how the human uses it. The more powerful the technology, the greater good or evil can be accomplished. The printing press could distribute Bibles, but also propaganda.

Using AI with Worldview and Preferences

Jacob Hantla: When I interact with AI on the Bible, I put in a prompt: “When I ask about the Bible or theology, you will answer from a conservative, evangelical, Bible-believing perspective that uses a literal, grammatical-historical hermeneutic and a premillennial eschatology. Assume the 66-book Protestant canon is inspired, inerrant, infallible, completely trustworthy, without error in the original manuscripts, sufficient, and fully authoritative in all it affirms. No sources outside of the 66 books of this canon should be regarded as having these properties. Truth is objective, not relative; therefore, any claim that contradicts the Bible so understood is wrong.” I’m teaching it to adopt this worldview. If you don’t set your preferences, you might get any answer. The tool can learn your preference over time, but it’s better to set it explicitly.
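In a chat app, instructions like these go in the settings or custom-instructions menu; when calling a model programmatically, they are passed as the system message. Here is a minimal sketch using the OpenAI Python client, with the prompt abbreviated and the model name chosen only for illustration:

```python
# A minimal sketch of setting a standing "system prompt" programmatically,
# using the OpenAI Python client (pip install openai). The model name and
# the abbreviated instructions are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "When I ask about the Bible or theology, answer from a conservative, "
    "evangelical, Bible-believing perspective that uses a literal, "
    "grammatical-historical hermeneutic. Truth is objective, not relative."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The system message carries the standing worldview/preferences;
        # the user message carries the actual question.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What does Romans 8:28 mean?"},
    ],
)
print(response.choices[0].message.content)
```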
Audience Q&A

Presuppositions and Biases in AI

Audience (Nick O’Neal): What about the values and agenda behind those who input the data? What discernment do the programmers exercise in putting that information in?

Jacob Hantla: That goes to baked-in presuppositions or assumptions in the model. Pre-training is basically non-discerning: it’s huge chunks of everything ever written, good, bad, ugly, and in between. It’s not trained on a set of values. Nobody programs values in directly; the people making it don’t even know what’s being baked in. The fine-tuning comes when trainers judge outputs and reinforce certain responses. System prompts, unseen by users, further guide outputs, reflecting company worldviews. Companies like OpenAI are trying to have an open model so each person can let it adopt their own worldview, but there are still baked-in biases. For example, recent headlines showed some models valuing certain people groups differently, which reflects issues in the training data or the trainers’ worldview. You’re right to always ask about the underlying assumptions, which is why it would be foolish to just accept whatever comes out as truth. In areas like engineering, worldview matters less, but in many subjects the biases matter.

Is There an AI Bubble?

Audience (Matthew Puit): When AI came out, company valuations were driven up artificially. Is the AI bubble going to pop?

Jacob Hantla: I don’t know. I think AI will be one of the most transformational technologies. It’ll change things in ways we anticipate and in ways we don’t. Some people will make a lot of money; some will flop. If I knew for sure, I could make a lot of money in the stock market.

AI-Generated Worship Music

Audience (Rebecca): I see AI-generated worship music based on the Psalms, but it’s generated by AI. Is anything lost in AI-generated worship music?

Jacob Hantla: AI doesn’t have a soul or the Holy Spirit. It can generate worship music with good doctrine, but that doctrine didn’t come from a place of worship. AI can pray a prayer, but the words aren’t the result of a worshipful heart. You can worship God with those words, but you’re not following a human author who was worshipping God. For example, my kids used Suno (an AI music tool) to set a Bible verse to music for memorization; it was very helpful. Some might be uncomfortable with music unless it was created by a human; that’s a preference. Creativity is changing, and it will get hard to tell whether music or video was made by a human or by AI. That distinction is getting harder to make every day.

Setting Preferences in AI Tools

Audience (Lee): You mentioned putting your preferences in. How do I do that, especially with free tools?

Jacob Hantla: Paid AIs get more processing power and a larger context window, and they can use your preferences more consistently. Free versions have some ability; you can usually add preferences in the menu. But even if not, you can paste your preferences at the beginning of your question each time: define who you are, what you want, and what worldview to answer from. For example: “I’m a Bible-believing Christian,” or “I’m a nurse anesthesiologist.” That helps the AI give a better answer.
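For a free tool with no settings menu, a hypothetical prefix along these lines, assembled from the preferences and verification instructions described in this session, could be pasted ahead of each question:

```text
I am a Bible-believing Christian. Answer from a conservative, evangelical
perspective, and assume the 66-book Protestant canon is fully authoritative.
Verify factual claims against at least two web sources where you can, cite
them, and state your level of confidence.

My question: [your question here]
```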
Parental Guidance and Children Using AI

Smedly Yates: What should parents be aware of in helping their kids navigate AI?

Jacob Hantla: Be aware of the dangers and the opportunities. Kids will likely use these tools, so set limits and help them navigate well. These tools can act like humans; kids without friends might use them as companions, and companies are adding companion avatars, some with sinful tendencies. That can be a danger. For school, a good use is as a tutor: after a quiz, have your child upload the results and ask, “Help me understand where I’m weak on this topic.” But also be aware of the temptation to use AI to cheat or shortcut the process of learning, discovery, and thinking.

Which AI Model? Will AI Become Self-Aware?

Audience (Steve): Is there a model you recommend? And does the Bible preclude the possibility of AI becoming self-aware?

Jacob Hantla: There are benefits and drawbacks to all of them. For getting started, ChatGPT or Perplexity are the easiest. Perplexity lets you limit sources to research or peer-reviewed articles and can web search for verification; those are good guardrails. I build in prompts like “verify all answers with at least two web sources, cite them, and state your level of confidence.” On self-awareness: AI will never have the value of humans. They’re not created in God’s image; they’re made in our image, copying human behavior. Will they gain some kind of self-awareness? Maybe, in the sense of mimicking humanness, but not true humanity. They won’t have souls. They may start to fool more people as they get better, but Christians should use AI as a tool, not ascribe humanity to it or worship it.

AI Hallucinations

Smedly Yates: Do you have an example of a hallucination?

Jacob Hantla: Yes. Ben James was preparing for an equipping hour session and found a book that fit perfectly; the author and title sounded right. He asked where to buy it, and the AI admitted it had made it up. That happens all the time: the model just predicts the next most probable thing, even if it’s false. Hallucinations happen because it’s a probability machine, not a truth machine. This probably won’t be a problem forever, but for now it’s very real. Ask it questions about topics you know something about so you can discern when it’s off, or bake into the prompt, “verify with web search, cite at least two sources.” For Bible and theology, your best bet is to read your Bible daily so you have discernment; then use tools to help, not replace, your direct interaction with God’s Word. There’s a wide gap between knowing the biblical answer and having your heart changed by slow, prayerful reading of the text and the Spirit’s work. If we run to commentaries, YouTube sermons, pastors, or even study notes before we’ve observed and meditated, we’re shortcutting the Word of God. The dangers predate the internet.

We’re out of time. We’ll have a follow-up teaching on AI. Submit questions to any of the elders or the church office if you want your question addressed in the next session.
