I was watching a video on the YouTube channel “Gospel Simplicity,” where the question “Is ChatGPT a Theologian?” was addressed. There was some good discussion of that, with two major points. Real theology comes from a place of (1) faith, and (2) a faith community. Theological reflection without faith is simply Biblical Studies or Religious Studies. Theological reflection done only in academia, disconnected from the church community, tends toward novelty and emptiness. At one time I might have disagreed with these points, but I have come to see them as all too true. Since ChatGPT and other AI programs cannot have faith (they draw from the words of others to predict appropriate responses) and are not embedded in the church, they cannot do “real theology.”
I agree with this, but it does raise a question:
If one were doing an online chat with a theologian, can one be 100% certain whether it is a human being or an AI?
I think the answer is NO. According to the Turing Test, if a person (a human being) cannot reliably distinguish a computer from a human in conversation, that is good evidence that the computer “thinks” or has (human-like) intelligence.
While Alan Turing was, by pretty much any standard, a genius, the Turing Test has inherent flaws. The Chinese Room thought experiment can be used to point out that while intelligence is necessary to pass the Turing Test, the intelligence might exist outside the confines of the computer itself (in this case, the intelligence is in the programmer). You can look up the Chinese Room thought experiment elsewhere. I have spoken of it before, as have many others.
But I also like a different thought experiment… I will call it “The Island of Pipes.” Imagine a large island that is completely covered with water pipes and all sorts of valves (gate valves, check valves, throttle valves, pressure relief valves… all sorts of valves). The system is integrated in such a way that there is a near infinite number of permutations to the setup. Perhaps on the island is a large dam full of water in the mountains, and it provides the water pressure to drive water through this vast network of pipes. Now, following systems analysis, electrical circuits can be mimicked by mechanical and hydraulic systems. So let’s model a computer’s immensely complicated circuitry with this immensely complicated water system. We can even have water flow to points that would flip over colored panels to act like an absolutely immense display. So if the valves are aligned correctly (programmed), water flowing in will follow this programming and, at the bottom, give results that can be read off of that gigantic display. Such a system would be many orders of magnitude larger than a modern computer, and many orders of magnitude slower. However, because computers are digital (on-off), a water system actually has some added subtlety, because it can potentially deal with subtle analog measurements in pressure and flow rate that digital cannot. (Or one can ignore those and simply focus on Open versus Shut.) If someone were able to program the valve arrangements in the right manner, then when water flows into the system from the dam, the output might be coherent, or even seemingly profound. The patient person who waits for a response from this vast network of pipes may be impressed that it gives answers so much like what a human would give. Now if that same person were asked whether he believes, then, that this vast network of pipes and valves thinks or has human-like intelligence, I am pretty sure that person would say No.
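The substrate-independence behind the Island of Pipes can be made concrete with a toy sketch (my own illustration, not from the original post): treat each valve arrangement as a logic gate, where True means “water flowing” and False means “no flow.” The names below (`and_valve`, `half_adder`, etc.) are hypothetical, but the circuit is the same one a silicon computer uses to add two bits.

```python
def and_valve(a: bool, b: bool) -> bool:
    """Two gate valves in series: water flows only if both are open."""
    return a and b

def or_valve(a: bool, b: bool) -> bool:
    """Two parallel pipes merging: water flows if either valve is open."""
    return a or b

def not_valve(a: bool) -> bool:
    """A pressure-operated diverter: output flows only when the input does not."""
    return not a

def half_adder(a: bool, b: bool):
    """Add two one-bit numbers using nothing but 'plumbing' operations.

    Returns (sum_bit, carry_bit).
    """
    # XOR built from OR, AND, and NOT: flow if either input flows, but not both.
    sum_bit = and_valve(or_valve(a, b), not_valve(and_valve(a, b)))
    carry_bit = and_valve(a, b)
    return sum_bit, carry_bit

# 1 + 1 = binary 10: the sum panel stays dark, the carry panel flips over.
print(half_adder(True, True))   # (False, True)
```

The point of the sketch is exactly the essay’s: whether these operations are realized in electrons or in water pressure changes nothing about the computation performed, only our intuitions about it.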
Why might that be, when we are more open to thinking that a computer “thinks”? My thought is that the speed of electronic computers, along with the microscopic complexity of the circuits and the mysterious, nearly magical qualities of electrons, seems to lead us into a place where we are left with “human intelligence” or “pure magic.” When the same computation is spread out over an island of pipes and valves, it may amaze, but it does not lead to similar conclusions.
If the paragraph above does not make sense to you, or does not impress, don’t worry about it. I am trying to discuss a problem with the Turing Test. The bigger problem is simply the limits of mediation. I realize that I may be using that term improperly. Let me try saying it another way. When one person has a thought and expresses that thought to another person, the expression of thought is mediated through words. In other words, we cannot experience another’s thought directly; we can only experience words (and other forms of communication through the senses). AI mimics the words of humans by drawing from vast pools of people who have shared their thoughts through words. So let’s suppose that a generative AI program was “schooled” on the theological words expressed by 10,000 theologians. When that program is asked to respond to a question, it will coalesce the human responses into a single voice. That voice sounds human because (1) its responses are designed to follow human communication rules, and (2) the voices it is drawing from are actually human.
Actually, let’s hit that first point. AI is designed to follow human communication rules. A few decades ago, computers were much more “primitive.” Somewhere in the late 70s or early 80s I saw a feature on TV regarding “fuzzy logic.” Classic logic follows very simple rules that allow, in effect, for three options: True, False, or Cannot Determine. These are not necessarily very useful. Relatively few things are completely and unambiguously True, or completely False. Also, Cannot Determine is of little use, because most statements will have a weight of evidence that points toward likely or partial truth, or likely or partial falsehood. Fuzzy logic attempts to deal with this problem by working with degrees, or probabilities, of truth. In the feature, a computer running a fuzzy logic program was instructed to identify an arch based on a certain design structure, such as two vertical structures with a curved lintel on top joining the two vertical members together. Then the computer/program was queried whether a different structure (two vertical members joined by a horizontal lintel) was an arch. The program said that it “thought” it probably was also an arch. The TV feature asked whether this demonstration proved that computers think. Of course, a question one might ask is: if we think the program is indeed thinking, why do we come to that conclusion? Perhaps it is because the program was instructed to give answers using words like “think” and “probably.” On the other hand, what if the program gave the response this way: “The two vertical members align perfectly with the structure designated as an arch. The one horizontal member has a different shape, but it joins the vertical members in the same way and transfers forces in a similar manner. Therefore, there is a 72% commonality between the two structures.” Although that response actually demonstrates more analysis, revealing something akin to thought, it sounds less human and therefore more suspect.
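The arch example can be sketched in a few lines. This is a minimal illustration of the fuzzy-logic idea of degrees of truth in [0, 1]; the features, weights, and the resulting score below are made-up numbers of mine, not the figures from the original TV demonstration.

```python
def fuzzy_and(a: float, b: float) -> float:
    """Standard fuzzy conjunction: the minimum of the two truth degrees."""
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    """Standard fuzzy disjunction: the maximum of the two truth degrees."""
    return max(a, b)

# Degree (0.0 to 1.0) to which each feature of the new structure
# matches the prototype arch. These values are purely illustrative.
feature_match = {
    "two_vertical_members": 1.0,   # identical to the prototype
    "lintel_joins_members": 1.0,   # joined in the same way
    "lintel_is_curved":     0.0,   # horizontal, not curved
    "transfers_load":       0.9,   # similar, but not identical, force paths
}

# How much each feature counts toward "being an arch" (assumed weights).
weights = {
    "two_vertical_members": 0.3,
    "lintel_joins_members": 0.3,
    "lintel_is_curved":     0.2,
    "transfers_load":       0.2,
}

# Overall truth degree of "this structure is an arch":
# a weighted average of the per-feature matches.
is_arch = sum(weights[f] * feature_match[f] for f in weights)
print(f"Degree of arch-ness: {is_arch:.2f}")  # 0.78 under these made-up numbers
```

Reporting the raw number (“78% commonality”) is the analytical voice; wrapping the same number in “I think it probably is an arch” is what made the program sound human.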
So what am I saying? I am saying that AI does not do theology. It has not, it does not, and in all likelihood, as we understand the world we live in, it cannot. However, because theology is mediated by language, an AI is likely to be able to successfully sound like a theologian. If an AI cannot yet pass a “Theologian Turing Test,” it soon will.
However, the Turing Test is not really a test of intelligence. It is a test of our understanding of what makes us human— essentially a test of our theological anthropology.