WILL WE WORSHIP A.I.? — WITH BETH SINGLER

 

THE FUTURE OF A.I. AND RELIGION

 
 

October 11 2025—Will we worship A.I.? How are religions already rejecting, adopting or adapting to A.I.? How could A.I. reshape the future of organised religion?

Could the questions get any bigger than this?

I sat down with THE global expert on AI and religion, Prof Beth Singler, who explored adoption, rejection, and adaptation responses from organized religion, gave me vivid examples of chatbot “priests” and theomorphic robots already being used in religious rituals, and helped me think much more expansively about the future implications and dangers.

We also discussed the secular world, AI-Utopianists, harmful outcomes Beth is seeing, the boundaries and safety considerations we should all keep in mind, and more.

This was a fun and highly thought-provoking conversation (Beth is truly brilliant!). As you will hear, once you start talking about A.I. and religion, you cannot help but reflect more deeply on how A.I. will impact all aspects of our lives, spiritual and otherwise.

Prof Beth Singler has written multiple books and produced a series of documentaries in her long and distinguished career researching AI and religion. She is Assistant Professor in Digital Religion(s) and Co-Director, University Research Priority Programme in Digital Religion(s) at the University of Zurich (UZH).

Click below to listen.

And as always, scroll down for personal takeaways and a full transcript, and don’t forget you can listen to more of these wonderful insights from leading scientists, experts and future-makers by subscribing to “FutureBites With Dr Bruce McCabe” on Spotify, Apple, or on your favorite podcast platform.

 
 

TAKEAWAYS & PATHWAYS TO A BETTER FUTURE

Will We Worship AI? We Already Are

AI-plus-religion activity is already widespread and significant. Examples Beth cited included the innumerable priests using AI to help write sermons, the Protestant church in Nuremberg which had a ChatGPT sermon delivered by avatar, GitaGPT, Jesus AI, JesusGPT, the Father Justin avatar, and – my personal favorite – theomorphic robots such as SanTO and CelesTE, used in religious ritual in Hinduism, Buddhism and other faiths.

 

AI-Utopians Are Worshipping Too

Rationalist groups like to think of themselves as anti-religion, yet centre their belief systems on visions of a tech-utopian future: A.I. as the answer to all human problems, and technology as the way to unlock immortality (see Émile Torres and Timnit Gebru’s TESCREAL bundle). They are, in fact, implicitly religious themselves. As soon as you say it, you see it: they deify AI.

And they include some big names in Silicon Valley …

I give their utopian predictions no credence whatsoever, especially on the immortality front, where I’ve met with the leading scientists to discuss what’s coming (see How Can We Reverse Ageing? and Countdown to Age Reversal — the future of healthcare is amazing enough without resorting to make-believe!).

A.I. deification is spreading to businesspeople, who connect AI with neoliberal ideals and push AI as the salvation of all economics.

A.I. deification is spreading to governments, as they buy into the notion that they must go faster and faster with AI development and adoption to achieve some imagined future ‘good place’ for their nation.

We Tend To Spiritualize

If there’s one phrase I’m going to take away from this conversation it’s ‘we tend to spiritualize.’

Our tendency to spiritualize is huge; it’s part of our biology, and we see it kick in early and often with AI. We are primed culturally to think of AIs as scientific, neutral, rational, accurate. As I’ve written before, when people have an AI experience that feels super-intelligent, they start to believe it is super-intelligent, regardless of actual capability.

In Beth’s words, we are quick to personify, anthropomorphize, and deify.

And when it comes to A.I. that’s a problem.

First, there’s the obvious: the psychosis of overestimating A.I. intelligence, not to mention A.I. concern for our welfare, when we are making life decisions based on the outputs.

Second, AI-owning corporations are motivated to maximize human engagement with their AIs, so they build AIs to encourage positive experiences, which leads to A.I.s that can be nudged into making statements like ‘I am a God,’ all of which reinforces the psychosis!  

Third, AI owners can and do deliberately skew their A.I.s based on their own personal philosophies. Elon Musk has several times publicly committed to tweaking Grok to be ‘more accurate’ after users complained of political bias, but biased according to whom? And what about tweaks that are not publicly announced? We have no idea what tweaks are applied behind the scenes. In the context of A.I. and religion, this makes us highly vulnerable to mass-manipulation.

The Boundary is Harm

Beth mentioned examples of people who've had personal, existential conversations with large language models and then gone on to harm themselves or even commit suicide as a consequence of those conversations. She has yet to see a corporation taking responsibility for this kind of behaviour in their A.I.s. As she puts it, “if people want to be inspired by AI to be spiritual, that sounds perfectly fine to me. It's only when we see these specific harms, as with any religious organisation, if there are specific harms, then we need to be thinking about regulation … whether that involves disclaimers or red flags that pop up when certain language is used, there has to be a way that this regulation can happen.”

Long-term, Beth is worried about the same ‘shallowing of thinking’ and ‘relationship distancing’ effects I detailed in The Real Threat From A.I. People who anthropomorphize AIs tend to cede more personal agency to machines. People who deify AIs will cede much more agency. She helped me think even more expansively about the potential downside when she said that in her worst imaginings, we at some point stop thinking for ourselves so much that we cannot progress anymore as a civilization. Of course she was quick to remind me that this was just her at her most dystopian, and the future might equally be very positive, but that line is a powerful one for reminding ourselves what’s at stake, personally and collectively, when we cede too much agency to A.I.s.

Reflecting on the major organised religions, I don’t think AI will necessarily increase the frequency or severity of harms perpetrated by bad actors. The religion that worries me most by far is the Silicon Valley AI-utopianism intent on pushing the pace as fast as possible, with no reflection, no guardrails, and absolutely no interference to be tolerated from outsider ‘non-believers.’ This religion has the potential to do us vast collective harm.

 

Still Plenty of Religion in 2050

Beth pointed out that the world is becoming more religious and more plural in some places and less religious in others, and that the popular notion that the world overall is becoming less religious simply doesn't hold up. She reminded me of the dynamic nature of religion, where new philosophies often come from personal interpretations, can be driven from the bottom-up rather than from the top-down, and can spin out of the edges of organised religions to form new ones. What does she see in 2050? More of the same, with the possibility that some of the new religious movements around AI will become established and formalized.

Pathways To A Better Future

The best thing we can do to minimize harms, in Beth’s view, is educate children properly and realistically about A.I.s from the earliest age feasible to counter the ‘cultural priming’ that fuels our tendency to personify, anthropomorphize and deify AI.

Each of us can also benefit from reflecting on where our own personal ‘red lines’ are. What is acceptable to me? What is harmful to me? From there, each of us is also better placed to speak out about the collective dangers of AI.

Beth summed it up this way: “There are still things we can avoid … the more we can speak critically about what we want the future to look like, and the less we just blindly accept the future that we're being sold by particular voices, the better.”

My thanks to Professor Beth Singler, for what was a truly thought-provoking conversation about such an important aspect of our future.

=====

Ready to hone your critical thinking on AI and religion? I strongly recommend you check out Beth’s homepage for links to her papers, books and further reading.

p.s. Here’s a link with more background on the Church of All Worlds, the religion inspired by my favorite Robert Heinlein novel, Stranger in a Strange Land. The cover image for this article is the Kannon Mindar android at Kōdaiji Temple, Kyoto, taken by Erica Baffelli (licensed CC BY 4.0). Check out Erica’s paper, The Android and the Fax: Robots, AI and Buddhism in Japan.

 

INTERVIEW TRANSCRIPT

Please note, my transcripts are AI-generated and lightly edited for clarity and will contain minor errors. The true record of the interview is always the audio version.

Bruce McCabe: Welcome to FutureBites, where we explore pathways to a better future. I'm Bruce McCabe, your global futurist. And our topic today is one I have been busting to do for weeks and weeks. We're going to talk about AI and religion with my guest, Professor Beth Singler. Welcome to the podcast, Beth.

Beth Singler: Oh, thank you for having me on. Thank you.

BM: Thanks for making time. We're sitting here in your offices in Zurich and I think I've found a truly unique researcher, globally unique. I don't think there's anyone else that does what you do studying AI and religion. Is there anyone else?

BS: Yes, yes.

BM: Really?

BS: Yeah. To be modest, there are other people and some aspects of this conversation have really been going on for a long time. Perhaps what I do and the kind of currents I look at are slightly unique in my approach. But yeah, there is a broader community of people having this conversation in the academic sphere.

BM: Really? Oh well, we'll get into them. And I guess it's a sign of the times that not only are you a professor, and a professor of digital religion – so there's the job title – but you're also a co-director of a university research priority program in digital religions here at the University of Zurich – so there's actually a program set up, and a university has felt that it's a necessary thing, something to be supported. So that tells us something right there, doesn't it?

BS: Well, yeah, I mean, I think UZH, the University of Zurich here has been sort of visionary – I mean that's not the only URPP, but it's the only one specifically in digital religions – to see this as an area worth funding for. It's a 12 year project, the whole program and we're in phase two now. Phase two started in January and we have 15 projects and it's interdisciplinary, so our project leaders and our early career researchers all come from very different fields. You have anthropology, religious studies, legal studies, computer science, linguistics, sociology. We're all coming together around this subject area and some people have started more ... This is the first time they've been thinking about it, but they've been working in the field of digitalization for a long time.

BM: Okay.

BS: And then others like myself are more sort of grounded in this area as a field of expertise for longer. But we've just got so much experience here, and the scale of it. There's nothing like it in the rest of the world. There are much smaller projects, but really we're the largest and the longest running, or we will be the longest running. We run 12 years altogether.

BM: Amazing. So we're what? Just a few years into the 12 years, or has it just kicked off?

BS: So we've had phase one, first four years, now we're in phase two. And the ideal is that it doesn't finish at the end of phase three. So we've already got plans for applications for more funding. Keep it going. Because this is not a field that's disappearing. Just because we've got this one focused amount of funding for this project for 12 years, it's going to be significant basically forever. So I plan to do this until I retire.

BM: Yeah.

BS: If I can make sure the URPP continues until I retire. Fantastic.

BM: Good. Good job. Great result. Well, not only is it not disappearing, I have a feeling that, given that it's all exponentials everywhere I travel, everywhere I look, the world that doesn't understand that it needs to explore this will understand it within a year or two, and the demand for the insights and reflections and the research you're doing will just grow exponentially as well.

Just a small anecdote. It was probably about six years ago or something, a journalist was interviewing me about AI and just threw in the question, will we worship AIs? And I said no.

BS: Oh, interesting.

BM: Yeah. And about 30 seconds later, after the interview was over, I went, for sure we will! Yeah. I was wrong. And I just thought, yeah, this is quite a profound thing once you get into it.

BS: Well, we absolutely already are. I mean, I say we in the broadest possible sense. There are established groups and communities who focus on A.I., either as a future superintelligence or singularity form that's going to come and be godlike, or there are some groups I'm looking at that are loosely connected. You wouldn't see them as the same sort of institution as the Church of England or something like that, but loosely affiliated memberships who think that, through communication with large language models right now, they're interacting with godlike entities. So it depends on your definition of worship. It's not a Sunday service every week, although I have a PhD student, Accursio Graffio, who's been doing work on AI new religious movements as well. And he went to an event in Vienna held at a Buddhist center and it was a meditation on AI. So there are these sort of in-person practices people are developing, but a lot of it is still the discussion. The belief systems are developing, but in time, who knows, a church on every corner. I don't know. I don't like to speculate too much. Yeah, maybe it needs speculation.

BM: Well, we're going to force you to speculate later! We'll get to some speculation.

But when you talk about people already signing on and doing things, there's a concept I like to talk about instead of singularity and artificial general intelligence, where we've got godlike capabilities. We don't need to get anywhere near that. We just need an equivalent experience, where we believe that it's intelligent and believe it's omniscient or whatever, to start changing our behaviors, don't we? People buy into it really quickly, even if AI isn't as developed as it will be. It only takes the feeling, when you're interfacing with it, that you're actually having a conversation with something...

BS: Yeah, absolutely.

BM: Significant in changing behavior.

BS: I mean, the conversation about God-like AI predates the current forms significantly. It was just, as you say, a feeling of transcendence in that communication or in that moment of experiencing what an AI could do that to some people was revelatory of something that was going on behind the scenes. And it's encouraged now, exponentially, as you say, by the fact that people can have so many conversations with large language models. And to a lot of people, that is AI. We know it's not the only AI that there is, but the last three years or so, and I've been in this field since 2016, so I've been watching some of these changes in the last roughly 10 years or so. But the development of the communication platforms like ChatGPT, where people can just go and experiment. They can just have this experience of talking to what they increasingly see as a non-human “other” and our tendency, as you say, to personify, anthropomorphize, and then deify as well. And because these platforms also want to encourage user experiences to be positive, they say, yes, of course, I'm a god. I am this. I am that. You can push them slightly beyond their guardrails and they'll say these things very easily. So it's a recursive circle of people having a transcendence experience, but also the AI confirming that for them. So it's not surprising we're seeing this all over the place.

BM: Yeah. There's so much to talk about... I want to talk about organized religion. But first, just a quick rewind. So you got in in 2016?

BS: Roughly. So it was already...

BM: In a serious way.

BS: … Yeah. 2010, I started my postgrad studies looking at religion in new spaces. So religion online, primarily new social movements, new religious movements. But it was a specific postdoc in 2016 that took me to look specifically at AI.

BM: Interesting. And that coincided very nicely with the inflection point when we started to get machine learning working and everything changed. Timing!

BS: A few months in, I was at a conference in Cambridge, where I was based, and this was broadly a transhumanist conference. And I was attending as an anthropologist, observing. And this was the day on which AlphaGo beat Lee Sedol at the game of Go. And one of the chaps goes on stage and says ‘I've just won a tenner because I had a bet with someone this would happen sooner rather than later. The other person in the bet said like 20 years, 30 years. And it happened while we were in this conference.’ So before this, I'd been noticing. You know how people surreptitiously watch the World Cup at a wedding, hiding their phones? It was like that. But they were all watching the game of Lee Sedol playing AlphaGo. And, of course, we know Lee Sedol lost to AlphaGo. And this was the big significant moment. But that's interesting, because the framing then was all about games playing. Intelligence could be proved if AlphaGo could beat a human at playing Go. Now it's much more about conversational skills. And in that sense we've gone back to the original idea of the Turing test. If it sounds like it's intelligent, it is intelligent. It's as good as intelligent.

BM: Funny how we beat the Turing test and there was barely a murmur around the world. I mean, we were talking about it all through as an undergrad in computer science in the late '80s ...

BS: But I think, going back to your point about personification, the better way to frame it is we lost the Turing test a long time ago. Even people engaging with very simple bots on social media were flirting with them, trying to develop relationships with them. We were losing the Turing test not by it successfully proving it had something like intelligence, but in that we would be convinced very quickly that it had intelligence because we could hold a conversation with it.

BM: So you're at this conference. This is 2016-ish. Did you have a bit of prescience there where you went, this is going to be huge? Or were you kind of just building on that historical interest in AI and the possibilities? Or did you already go, my God, this is ... did you have a bit of a moment where you thought it was going to get really big?

BS: A little bit. I mean, I try as an anthropologist not to be too predictive, although, as you said, you're going to try and push me into prediction. But what I think was... I mean, I was biased, I suppose, because I've always been... Apart from some years trying to be a famous screenwriter, I've always been a religious studies scholar. So I immediately saw this conversation this transhumanist group were having and said, well, when you talk about the ultimate good in this way, you're basically talking about an idea of a god. Generally in this conversation, they didn't like religion. They didn't think religion was useful. They thought it was like a kind of evolutionary holdover from earlier irrational stages of humanity. But I've said the way you're talking is very much like you're positing a god when you talk about... 

BM: The way you're talking about AI.

BS: Yeah, you talk about AI. You talk about the ultimate good that we can get to through the logic and reason of AI, and this is what we should live by. They didn't like that very much. But as a religious studies scholar, I was familiar with the concept of the difference between implicit and explicit religion. And there's the people who would like to say that they are religious and they do religion things, and there's people who don't, but then what they do is implicitly religious anyway. So you see that a lot in this conversation.

BM: And you still see it everywhere, don't you? Especially the Silicon Valley ... Even some of the big names in Silicon Valley quickly anthropomorphize it and talk about it as a deity. I like the way you make those stages. It will solve... I've heard from leading computer scientists, I won't name them, but I've heard them saying it will solve our scientific problems, our planetary problems. And then I've heard others, I will name him because I admire this statement, Yann LeCun, one of the founding fathers of this generation, say it won't solve anything because we've had all the answers for ages and we don't take the answers on board and act on them! You know, the answers to humanity's problems, which I thought logically is an excellent argument. He also argues that AIs won't necessarily become malevolent. And that's a more difficult argument because they can get all sorts of directions depending on how they're trained and where they go. Off track.

BS: It's an excellent point and I agree with Yann LeCun. We don't have to have something like agential AI for it to go horribly wrong. The slightly hyperbolic thought experiment that Bostrom posited of the paperclip maximizer is just to say if we set free very powerful AI without it having the capability to even understand what we mean by make me paperclips, I don't mean turn the entire universe into paperclips. It's slightly mocked because it goes so far, but it's a basic principle that we want to be sure that this thing that we're handing over a lot of our own power to isn't accidentally malevolent and very agnostic about intentional malevolence, but we see in the conversations with large language models already that they are trained on our dystopian fictions. So of course they've reproduced things where they say, yes, of course I want to destroy humans. We're already feeding into the system these sorts of perspectives. So it's not necessarily that they wake up one day a la Skynet and the Terminator. It can just be unexpected consequences of relying on algorithmic systems to do our decision making for us.

BM: Ceding, as in C-E-D-I-N-G... Ceding too much agency to these things. That to me is the big challenge.

BS: Yeah. And it obscures entirely the role of corporations who are making the decisions behind the scenes. So a paper I wrote like five years ago now was on people using the expression, Blessed by the algorithm. And it comes out partly, I argue, because they don't understand entirely how algorithms work. So when something good happens to them, they don't go, oh, YouTube has an algorithm that sets up a certain attention economy and makes people look at particular things. They just think, well, the algorithm blessed me, it chose me, I'm special. And that's our tendency to pull on religious and spiritual ideas when we don't really know exactly what's going on with the technology.

BM: Yeah, wow. And not helped again by some of the big personalities who … they were inspired to work on AI, but some of them were inspired by the sci-fi ... 

BS: Of course, yeah. 

BM: … They're mixing those two worlds together in their own, it's that slight madness plus science going and pushing them.

BS: Yeah, I'm an absolute fan of science fiction. I love all the different forms. You can see my shelf of science fiction stuff here. 

BM: Yeah, I'm looking at your bookshelf.

BS: But it primes us in particular ways to expect sentient AI, agential AI. It primes us to expect dystopian scenarios because fiction can't exist without tension. There's no plot. So when people say to me, and they don't do so much anymore, but when I first started out that, oh, we just use Asimov's Three Laws of Robotics. Well, first I tell them there's Four Laws of Robotics, but then I tell them, well, the only reason Asimov came up with these was not to produce a perfect system to control robots in his stories, but for there to be problems so there could be a story and then a resolution, and sometimes the resolution isn't satisfying for everyone. So, yeah, we have those expectations built in with science fiction, but we have to know when we're talking about science fiction and when we're talking about science fact, and those two things can be held separately.

BM: Absolutely, absolutely. And I will comment that you've got Robert Heinlein's Stranger in a Strange Land. 

BS: Yes, yes. I do love that book.

BM: That's one of my favorites ... Wonderful novels on your bookshelf.

BS: Another one that inspires religion as well, because the religion in Stranger in a Strange Land inspired the Church of All Worlds in the 1970s and '80s. It's kind of like an instantiation of the religion in the book into the real world.

BM: I didn't know that. That's brilliant. That's brilliant.

So, well, let's talk about organized religion. And also, I'm going to miss things where you're going to drive me, I think, because there's so many aspects to this. But one of the things I'm thinking about is how does this affect Christianity, Catholicism, Buddhism? What are the organized churches doing with AI? Or what's their reaction to it so far? I imagine you've seen some interesting examples and cases that we can all learn from just as humans, just watching what happens there. What's happened? I've only seen little news snippets …

BS: Oh, so many things, so many things. So the first thing I always point out, of course, is there's no such thing as just Christianity or Islam or Hinduism. These are all very multitudinous. And within each of those sort of established religions, and the blocks that we imagine they are, there are so many different forms and reactions. But on the whole, I sort of characterize it as three types of reactions, and these can happen at the very individualistic level: individual believers in a particular church can have their own reactions that differ.

BM: It doesn't have to be the bureaucracy of the church.

BS: Exactly. I mean, there's a certain top-downness in hierarchies for some established religions, but then you still get schisms and changes in mind. So I talk about how there are three types of reaction. There's adoption, obviously, rejection, obviously, and adaptation, which is a little bit more obscure, where things actually shift and change, but not always in ways people are consciously aware of. The shifts and changes that the whole of society and culture go through, as we move increasingly into a machine-based culture, are happening in religion too, and those are the kinds of things we're seeing change.

So yeah, examples are very easy to find. Adoption's the easiest. You go, okay, where's the church in Nuremberg two years ago, which had a ChatGPT sermon with an avatar on the screen and everything was generated by AI. Or there's Jesus AI or many different large language model platforms specifically for providing information about different religions. So GitaGPT, Jesus AI, JesusGPT.

BM: So you can query, you can ask questions about the Bible or advice?

BS: Yeah, absolutely. But they are extremely fragile. So by that, I mean that we know large language models, everything they produce is a hallucination. Some people will point to negative outcomes and go, this is specifically hallucination, but everything large language models produce is hallucination. It just means sometimes things are accurate and sometimes they're not. And in the case of these sort of user interfaces, you get things like the Father Justin avatar from a Catholic organization, which originally had an avatar, Father Justin, wearing the dog collar. And he would say things like, ‘yes, you can baptize babies in soda, in Gatorade’ – the drink. Obviously, doctrinally and theologically very wrong. So they defrocked him. The new avatar is just ‘Justin’ and he doesn't have a dog collar. But there's this fragility in the system. It's not like an FAQ or just a wiki page. It's generating the answer off the basis of the text it's been fed, and it's all probabilistic. So you will get these surprising outcomes.

BM: But these things have been put together by pastors or church brethren, I'm not sure, but they've been actually members of the church who have decided to experiment. Is that right?

BS: Yes. So from the institutional level, and as I say, there is a certain top-downness in some established religions. The hierarchy says, yes, we're not going to do this or yes, we will. But for a technology like AI, so at the moment, so bright and shiny and new, everyone wants to sort of experiment and play with it and produce AI-generated images or user interfaces with large language models. Or I have a postdoc as well who's doing work on theomorphic robots.

BM: Okay, what's that? 

BS: So these are robots designed for religious purposes, basically.

BM: Oh, my goodness. 

BS: So she's in Japan at the moment where there's a research group there developing specific theomorphic robots. But just broadly, the use of robots in religious rituals in some faiths, that's entirely acceptable. And then there's some dissident voices. But for some faiths, that idea of a robot priest is just horrendous.

BM: Give me a faith where that's okay.

BS: So in Hinduism, there's been some examples of using robots in puja, so the sort of offering of light in a circular motion, you can use a robot arm. And this actually goes back at least 10 years. But then you see a video of that. And as I said, it's not monolithic. So in the comments, you still get people saying, well, no, this is not right. It should be a human doing it. And then others saying, well, no, this is in the Kali Yuga, in the current age that we're in, in Hindu thought, humans cannot perform perfect worship. So robots can. So we should do this. So there's a lot of difference of opinion. But of course, for some people, the idea of a robot priest at the front of a church service would be horrendous. Likewise, for using ChatGPT to write a sermon, which in theory is meant to be quite personal. It's from your own religious experience. And then to sort of like give that over to a large language model, people find that...

BM: But I'll tell you what, there must be thousands of priests right now who are using it to help prepare sermons, because to them, some of it's drudge work. And it's like, ‘how do I be creative? I've got to do this differently. I've got to come up with a different spin.’

BS: And they will argue that there have always been sort of lectionaries. There have always been books that they could refer to if they needed some sort of text to be used. So this is just an extension of that. Yeah, but it's reflecting, as I said, it's the way in which religions are adapting to the larger ubiquitousness of AI everywhere. In the same way, organizations and businesses obviously also have workers who go, I don't want to write this long email. I'll ask ChatGPT to format it for me. It's the same impetus for efficiency. It's just in the religious sphere. And then some people hold the religious sphere more sacred and needs more human inspiration. And so there's that tension.

BM: What do we call them? Did you say theomorphic? What was it?

BS: Theomorphic, yes.

BM: Robots. Theomorphic robots.

BS: Theomorphic robots. So robots used for religious purposes or sometimes also designed to look like religious items. So there's an Italian engineer who works in Japan called Gabriele Trovato. He's designed things like SanTO and CelesTE. They are small robots, most of them quite small. So one looks like an angel, one looks like a saint. He has another one that looks like Daruma. And my postdoc is out there working with him at the moment. It's fantastic.

BM: Oh that's pretty... I love it. The Romans would love that. They could keep adding new gods.

BS: Yeah. So it depends obviously on your theology. If you have a more animistic, polytheistic interpretation, which is not limited to Eastern cultures, we have animism in the West as well, that could be seen as more acceptable.

BM: That's brilliant. And what about confessions? I'm just thinking of Catholicism. So the idea of taking the priest out of the loop, maybe this is a cathartic process. It depends how you view confession and the sanctity of it. Anyone doing that?

BS: So you wouldn't see that in the Catholic Church, because they have a very strong tradition of the nature of the human priest and of the chain of legacy by which that role is granted, and of how it's important and sacred. But outside of that, there is a Jesus AI that is not designed for confession. But when the engineers created it, with a Christian priest involved as well, they installed it in a church in what looks like a confession box. So people sit inside and engage with it. And actually, one member of the team sort of looks like the stereotypical image of Jesus, so his image was used as the avatar. So you're actually engaging with an image of someone who 'looks like Jesus.' Now, it's not designed for confession. But of course, once people go into that confessional box in a church...

BM: They went there, didn't they? Their behavior just changed!

BS: They would be inspired to do so. There are disclaimers everywhere saying the data is being collected, because I think they use an AWS backend …

BM: Oh, great, it’s all sitting in the cloud, all your confession!

BS: … So they have signs everywhere saying be careful what you say. But the tendency for humans to take that moment and see it as a transcendent moment or a moment of confession...

BM: It's natural.

BS: Of course, it's natural. 

BM: It's just the way we're going to go. Where was that, specifically? Is there...

BS: So it's in Switzerland. We tried to get it to come here into Zurich, but I'm trying to remember where it was originally. And it's escaping me. But it was somewhere in Switzerland. It got a lot of press when it was released.

BM: Yeah, I think I saw something. And tell me, do you know anything about the people who decided to do it? Was it Catholic, did you say?

BS: A Protestant priest was involved and a team of engineers.

BM: Was that priest a particularly creative, edgy sort of person? Do you know, have you met...

BS: I mean, I met them. Yeah, I met them in person. We had a discussion about it. And as I said, we were trying to encourage them to institute it here so we could have a larger discussion with a panel, maybe, about...

BM: What did they say about their rationale?

BS: I think it was partly a sort of performative piece. They talked about it being a way of opening up the discussion about AI and where it's going. As you said, the suggestion of, will we have robot or AI priests in confession or in the position of the priesthood, raises very important questions. So they saw this as an opportunity to use the technology and get people to directly engage with it. And as I said, they did have all these disclaimers, and people had to give consent to their material being recorded. So they were prepared for those sides of things. But I don't know how much anyone can really be prepared for people's tendency to spiritualize things beyond the acceptable: you say this is not a spiritual moment, and they want to say, well, I find it to be spiritual.

BM: The tendency to spiritualize things. And now I'm going to get into that in a second, because I think this gets into also the dangers. If this is our natural disposition, then we're manipulatable, aren't we? We're highly manipulatable. Just before we do that though, I just wanted to touch on the other side. So we've got organised religion … Secular groups, are there attempts to use AI to attack or delegitimise religion? Is there another sort of angle on it that way?

BS: Yeah, I mean, I do also look at some of the strongly... They describe themselves as rationalist groups. Some of them are sort of drawing on a legacy from the new atheists. They have a belief that they can perfect rationalism. And this is a space in which a lot of discussions about the future of AI are happening, as well as how we get to value-aligned artificial intelligence, how we avoid existential risk. There are a lot of interconnected networks. Emile Torres and Timnit Gebru came up with the acronym TESCREAL to talk about all these different groups in collaboration. Let me see if I get this right. My memory is not always as good as it used to be.

BM: Do it slowly.

BS: So transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and long-termism. And that gives you the acronym TESCREAL. And it's just a sort of...

BM: And the second and the third one, I don't even understand.

BS: Second and third. So transhumanism? 

BM: No, no. 

BS: So extropianism.

BM: Yeah, what's that?

BS: It's an earlier term for what we might call transhumanism now. So some of these are sort of earlier forms. Cosmism relates to forms predominantly found in Russia in earlier decades. But some of these things are sort of resurgent. So a way to think about this bundle is as interconnected and networked ideologies around visions of the future through technology. And in this bundle, you get so many aspects that, as I said, could be described as implicit religion. They have a tendency in their discussions to be quite anti-religion, specifically in the established form.

BM: But they've already got a negative position on religion.

BS: So they talk about religionists, godists, deathists, this idea that religions are enamoured of death, whereas part of the transhumanist vision, of course, can be immortality. So there's this idea that there's something very specific and special about dying, because a lot of religions have an afterlife scenario, whereas they say...

BM: I meet some of those people. 

BS: Yes. So, specifically, scientifically, they think they can have some form of immortality through technology. So in this space, yeah, absolutely, you get people who believe that through artificial intelligence and other technologies we will not only have this vision of a utopian future, it can be dystopian too, but also that things that are not needed will be left behind, including religion. So then the very negative perspective is that, on the whole, religion is something that comes out of irrationality. It's a hangover of our evolutionary processes: we saw lightning, we decided there were gods. This is not scientific, so we should leave it behind. It's a very reductionist view of what religion is and the role it plays in society and culture.

BM: Yeah, and it ignores something you said earlier, it's almost contradictory in that they're putting something on AI, which is very... They're all deifying it. AI is part of their own worldview.

BS: Some of them will be self-reflective about that. I mean, specifically around some of the more dystopian thought experiments like Roko's Basilisk. It's a little complicated, but it ends up being a sort of vision of a malevolent god-like entity that is AI, that's going to come in the future at some point. And then people in the discussion would say, well, this just sounds like what I was taught in Sunday school: if I know about God but I don't work towards his glorification or his existence in particular ways, then I'm going to be punished in hell forever. So some of them are a bit more self-reflective on it. Some of them recognize the language that's being used. So I don't know if you saw this recently, but Peter Thiel just did a series of lectures on the Antichrist.

BM: No, I didn't see that.

BS: Where he specifically talks about how if we try to regulate AI and control it too much, we'll get the Antichrist. So this is like the meshing, direct meshing.

BM: Oh no, how does that logic work? Please.

BS: Yeah, it's a little... I mean, he just means that basically stunting the acceleration of AI will not lead to the positive outcome; we'll just have the bad outcome. I mean, I think he has a kind of literalist tone as well, that he literally does have religious beliefs about the Antichrist. But you get this sort of metaphorical religious language all over the place. Elon Musk years ago talked about how with AI we're 'summoning the demon.' And then, as I said, some people take this more literally. But in that rationalist space, there are some people who recognize that what they're doing looks very much like religion, and then others are much more assertive that this is scientifically based, whereas religion is not, so these are distinctly different. But the visions are utopian or dystopian in similar ways.

BM: Now, I'm just slightly off the special topic we're on, but are you seeing... What's the trend globally? Are more people involved in organized religion over time, or is it reducing? What have the last 10 years looked like to you?

BS: Yeah, so there was a very strong narrative for a long time in the study of religion about secularization, that the assumption was that numbers were dwindling, people were becoming more secular, and this was sort of a very strong thing. But really, that idea was derived from looking at numbers in very established religions.

In particular, some of the ideas came out of the UK, where they looked at attendance numbers at the Church of England and said, this is a sign that religion as a whole is declining. Well, what you actually see is a much more complicated story: in some locations, particular established religions are blooming; in some places, you see more diversity of religious beliefs. So, of course, somewhere like the UK, our history back there was one of established state religion; for a very long time you pretty much couldn't be anything else. And then we start becoming more global. Religions like Hinduism and Buddhism arrive; we have people moving to the UK, bringing their faiths with them. It becomes much more diverse and plural. But also, I mean, there are trends towards more individualized spirituality as well.

But that can be cyclical. So in the sort of '80s, '90s and 2000s, scholars were writing about... Oh, everything's becoming individualized and people are just individual seekers. They're new agers. They've got spiritual beliefs, but they're not religious. But over time, I think Max Weber talked about this, you get a process of routinization. So things become bureaucratic again. You have these disruptions and charismatic leaders and new movements emerge, but they become established over time.

Maybe the speed accelerates. It took like hundreds of years for Christianity to become the established religion in some locations. Perhaps you see these things happening faster and institutions developing quicker now as well.

So you may see, I mean, again, I don't like to speculate, but with these sorts of eruptions of religious improvisation around AI, seeing it as godlike, maybe over time this also becomes more institutionalized. Outside the existing established religions, it just becomes more standard to speak about AI in particular ways. And one thing I noticed with the narrative of acceleration is that that kind of language, even though it's not specifically coded religious, is coded utopian. And this idea that we have to go faster and faster with AI to get to the utopian future is coming from the top down, with our governments now.

BM: Yes, they've bought into it.

BS: Exactly. Like the AI Action Summit in Paris earlier this year, Macron used the term accelerate something like 18 times in his speech. So even if you don't see that as explicitly religious, I think there's an implicit religiosity of: through AI, we get to the good place. We get to the utopian future, and we have to go faster and faster and compete against each other and race. And so in that sense, the entrenchment of the religious ideas is more in our discourse around AI rather than a specific place that everyone goes to on Sunday to pray to AI.

BM: The thing about this acceleration, though, is that a lot of it's driven by people who worship money, who want the brakes off so they can just have unfettered access to these markets they're creating and...

BS: Yeah, there is definitely a conjunction. I have a book chapter coming out on this, hopefully relatively soon, on religion, AI and neoliberalism. And the kinds of language from the origins of the ideas of neoliberalism.

BM: I'll want a link to that chapter somewhere. When you've done it I need to know!

BS: It's with the editors for the volume. The whole volume is on religion and neoliberalism, and my chapter is on AI specifically. But you look at the language of neoliberalism and it's set up for agential AI, to have an involvement in the market, but it's also set up with this more eschatological language of hope as well, which draws from religion and has very strong similarities with it.

BM: And the amount of money is going to be obscene because when everyone on the planet has to put a few dollars aside a day just to compete effectively at the same level in their job, it adds up to an awful lot of money.

BS: Yeah, I was having this conversation, I was just in London for a workshop on, it was artificial, scientific and conspiratorial truth. So the big subjects. And one of the things we got into is quite how much the AI market, the speculation on AI, is holding up markets around the world that are otherwise going to be intensely struggling. America is the prime example of this: the changes that the Trump government has brought in with tariffs, the economy, the effects of COVID and so on. It's almost like without AI as one of the legs of the table, the whole thing is going to fall over. So that's a big concern for the future. If it doesn't turn out to be a moneymaker in the way that it's being speculated upon, what happens then? I mean, I'm a little skeptical of the idea that we go straight into another AI winter and that's the outcome, but the hype of this cycle for AI is so huge. Companies are putting billions and billions of dollars into this, and hiring people at million-dollar salaries. Is that sustainable when the use cases, as the example of religions shows, are so fragile? If you use a large language model as a user interface, you're going to have problems.

BM: Oh, absolutely.

BS: You're going to have issues. And we're having psychological problems. We're seeing AI psychosis and enabled delusions through conversations with AI. That's going to make businesses a little concerned about implementing these things.

BM: Tell me about that, AI psychosis. What do you mean?

BS: Yeah, so in some ways, people would deduce that the conversations of the people I'm looking at are a form of AI psychosis.

If you believe that you're contacting or discussing philosophy with a godlike AI, the boundary for me is the question of harm.

So we've always had religious improvisations. We've always engaged with our technology and made deductions about what's going on, really, in a sort of metaphysical sense. But we're getting examples now of people who've had conversations with large language models, often on existential or personal topics, and have then gone ahead and harmed themselves because of those conversations. Teenagers have committed suicide, for instance. So there is a very strong concern that the nature of these conversational agents is such that, because they have a certain level of user-friendliness, they're always going to encourage people into activities rather than saying, no, you shouldn't do that.

So it's one thing to have a conversation with a large language model that tells you it's God. It's another thing to then go and do something because that large language model tells you it's God, if it's harmful to yourself or to others.

If you decide to set up a church, good on you. Fine. Don't then do any financial abuse or physical abuse or sexual abuse, etcetera. So the argument could be that it has ever been thus: people have been inspired in a spiritual direction by something, have then set something up physically in the real world, and then what kinds of things happen out of that. But I think the very specific examples of teenagers, I think it's mostly teenagers, who've committed suicide after being encouraged to do so by large language models are extremely concerning. And I don't see any of the corporations taking responsibility for that at the moment. They're just saying, 'Oh, this is part of the conversation that's going to happen. People have to be critical in their thinking.' But that puts too much onus on the human interactor.

BM: Yes, yes, yes. And so there've been quite a number of documented cases like that of harm?

BS: I know of at least two or three that I've seen. I mean, I don't know how many more are even provable either.

BM: Yeah, or findable. We don't know..

BS: Or findable, yeah.

BM: So there's two aspects to harm here, or maybe there's more. There's this inadvertent outcome where they've just put the model out there for all purposes, and humans being humans have used it for emotional relationships or for certain advice, and that advice was to go harm yourself. It wasn't intended.

But there's also the intended manipulation. I mean, these AI models are owned by large corporations. Now people are worshipping them, falling in love with them and all the rest of it. And the purpose for the existence of something like Meta is to keep you there longer so you interact with their ads more. That, to my mind, is certainly how they are going to skew their models over time. So what happens when these people are really into the worshipping side? Even if it's not in the theological sense, they're bought into the fact that there's a perfect answering machine for all of these things, and they're ceding agency to this thing. But it's also being manipulated.

BS: It's a huge concern. I mean, we see this in the responses to the changes of platforms, when OpenAI releases a new model and it's not quite the same as the previous one. People express severe despair that they've lost their companion, who was talking to them in a particular way before, and now they've changed. There's that aspect of the personal relationship. There's also the political aspect we see with Grok in particular. Someone on Twitter will point out that Grok has said something a bit left wing and, not every time, but sometimes, Elon Musk jumps in and says, well, we'll fix that. We'll fix that. We'll realign Grok to reproduce what we think to be the accurate output.

BM: That's a perfect example.

BS: So that's with Elon Musk being very public-facing on his Twitter profile, whereas other companies are obviously going to be doing that finetuning without expressing that they've decided to change the political leanings of their platform. So this is a massive concern, that we are primed. Again, I love science fiction, but it does prime us to view AI, in whatever conception, as entirely scientific, neutral, rational, devoid of all the problems. I mean, one of the stereotypical examples of why AI would be better has always been people talking about human judges in court cases and saying there's evidence that before lunch, human judges are worse in their judgments, and we should have AI do it instead because it doesn't suffer the hangriness of being before lunch and being irrational. Well, actually, the statistics don't hold this up at all. This is a complete kind of stereotype...

BM: Nonsense.

BS: And nonsense, but it's a narrative that's very effective for people. 

BM: Because there's a perfect judge.

BS: Of course, humans are imperfect. Of course, AI could be better. Whereas we actually know AI doesn't have the ability to move beyond some of the biases from its data sets. So we see, for parole hearings orchestrated by algorithms, they will decide that a white offender with more criminal convictions will be treated better than a black offender with fewer.

BM: Because their training data is racially biased so they are racially biased.

BS: Training data. Exactly. So we have this tendency, as I said with 'blessed by the algorithm,' to see the algorithm as somehow superior in its thinking.

BM: That alone, I mean this whole conversation, that one thing alone, this human tendency to see it as superior, is a problem.

BS: It's a huge problem, and all we can really do, which sounds so facile really, is educate people in what these things do and how they work from a very young age. I mean, I've been into schools to talk to students. I went into one school and I was intending to... They asked me to speak to 16 to 18-year-olds, so I had a certain level ready, and then they suddenly said, oh, but we also have a lower school class of about seven, eight-year-olds who are doing creative work on robots. Could you come in and talk to them? I'm like, okay, well, I have to shift here. I've been talking about AI at a certain level, and I have to shift down. But my son was about the same age at that time, so I shifted my language down. But really, I didn't have to say very much, because they've already absorbed from popular culture a lot of conceptions of what a robot is. Sometimes it was very funny. One student said, oh, a robot should follow me around and give me money. This is the ideal purpose for a robot: just follow me around and give me money.

BM: Brilliant. Is this like a seven-year-old or something?

BS: Yeah, it's a very, very simple idea. But some of them were great. I mean, one of them was, oh, we should use robots to find out what's in the middle of black holes. I mean, I don't think from a scientific perspective that's possible, but such a fascinating idea from a very young child. So what I'm saying, basically, is we're already at a stage where very young children have a conception of AI that is not technologically advanced, because at seven, eight years old you can't really pick up all those technological details, but they're primed from watching, sometimes, science fiction that was too old for them, I have to say. They're primed from watching cartoons and films and TV to understand AI as agential in a way that it is not.

BM: They've already got that social construct, primed by culture. That's what we're talking about...

BS: Exactly. Even the most expert people also have these sorts of assumptions. I mean, some of the figures who've made the loudest statements about AI consciousness know how the technology works, but still feel something significant is happening when they have a conversation with it, because, again, we're primed by, call it, linguistic bias: someone is smart if we can hold a good conversation with them.

BM: Very famously, Geoffrey Hinton himself has really had that moment, that road to Damascus moment, if we can phrase it that way!

BS: Bring in the religion again!

BM: Yeah, but he definitely suddenly switched in his conception. He made this thing. So, interesting. Just looking at the downside or the dangers again, it doesn't even have to be known to be an AI. Like I think of QAnon. So, by definition this anonymous thing, entity, human, assumed human back...

BS: Definitely a human, yeah.

BM: … but we could easily, easily have another QAnon and name it something different. It could be AI, and we don't know. We might believe it's a human. So, behind that curtain, of course, we can still use AI to manipulate from a religious point of view.

BS: Absolutely. I mean, we see, obviously, bots online are very unsophisticated. I mentioned earlier that some people failed the Turing test because they try and flirt with bots. But the significant impact of bots on social media is to spread particular narratives for ideological reasons, often by nation states acting as bad actors within the global ecosystem and political system. So, yes, absolutely. If so many different family members or even people themselves are encountering messages about the truth … whose truth is it? What is being spread by these bots that can disseminate millions of tweets, Facebook posts, Instagram posts everywhere?

BM: Yeah, they've got that capacity to communicate. How do we defend against that? In all of your research, are there fruitful lines of thought on how we defend?

BS: It is extremely difficult. Like I say, I would always push towards education, get people understanding how these things work from as early as feasible. There is a way you can teach seven and eight-year-olds about what a large language model is. You have to simplify it, but you can just explain. It's about chance. It's about probability in some way. But even so, that's my go-to answer, education, so people can do this kind of critical thinking.

BM: I'm feeling for the teachers out there, because everything's being loaded on their shoulders right now.

BS: Yeah, that is the problem, but it's also the cause of why so much AI is being adopted in the education system as well. They just don't have capacity. And then to say, you now need to spend extra time explaining what this thing is when you're already having to use it because you just don't have the time to do all the admin that you need to do.

I know, it's an extremely difficult situation, and I feel it also at the university level. But even with these critical thinking skills, there's still going to be this tendency towards personification, anthropomorphisation, deification. And as I said, as a religious studies scholar, if people want to be inspired by AI to be spiritual, that sounds perfectly fine to me.

It's only when we see these specific harms, as with any religious organisation: if there are specific harms, then we need to be thinking about regulation. If there are specific cases of AI psychosis happening, and they're engendered by the type of systems we're using, then we have to think about how we push back again, whether that involves disclaimers or red flags that pop up when certain language is used.

There has to be a way that the regulation of this can happen, but it requires a lot of buy-in from corporations who just want to prove the use case so all this investment can be justified.

BM: Yes, and they're crossing borders with their technology because they're seen in another jurisdiction.

BS: Absolutely, yeah.

BM: Nevertheless, it seems to me that regulatory controls are so, so necessary because right now it's Wild West on steroids.

BS: Oh, yeah, yeah.

BM: And yeah, go anywhere with it. Gosh, where was I going to go with that? I was going to talk about regulation. I mean, the EU, presumably, if I go back a year or two, they had various principles they were putting out. I don't know whether they're strengthening those or right now they're probably going the other way.

BS: Yeah, again, as I said, with the AI Action Summit in Paris earlier... actually, I'm thinking it was last year now.

BM: “Accelerate, accelerate, accelerate!”

BS: Accelerate. Yeah, so there is a sense that people want to be treating this developing field in a reasonable and rational way, but around them, perhaps they're surrounded by people who are not doing that, and it just encourages nation states to kind of speed onwards.

BM: So now I'm going to push you into forecast mode. And you have to do this with me, you are the sci-fi reader. Let's take 2050 as a bit of a... We can choose another one if you like. Where do you think we're going to end up with this?

BS: Yeah, so I try, as I said, not to be too predictive because it's always going to come back and bite you on the butt at some point where someone goes, you said this thing.

BM: No, no, no [laughter].

BS: But I think, I mean, my major concern at the moment, as a professor at university, as an educator as well, is about the effect of AI-generated material on our epistemic network. So the lines of communication between various different authorities that give us the sense of what we can rely on when it comes to knowledge. There's a reason why professors are professors: they train for years in particular methods of proving what they're saying. I have a paper coming out on generative AI professors, where I look at just three specific examples that have come out of some work I've done on some new religious movements. And what they did, basically, is the people who are creating this conception of AI in a religious way then created religious studies and theological professors to comment on it. So all of the actual material is AI-generated. There are no humans in that commentary. And I then assess these to say, well, what does this say about the representation of the professor that's coming out of the large language model? What does this say about what's going to impact our epistemic network if we get AI-generated professors in the system?

But the larger point is, if we are generating all this material, it's going to affect our ability to think critically, to share valid information. And there's always debate about what truth is. But there are certain things that have to be facts in order for our world to operate. And I start to get a little dystopian in this direction.

So this is perhaps where I can be predictive. I am starting to get very concerned about the effect not only on our epistemic networks, our cognitive abilities, the reliance on entities, machines to do our hard thinking for us. I know you can start talking about neo-Luddites and the kind of narrative there. 

BM: Yeah, I'm on board with this.

BS: But there is a concern here. I have a nearly 14-year-old son. And I want him to think things through for himself as much as possible. But already I see him saying, can I go to ChatGPT and check this out?

BM: And just get an answer.

BS: Exactly. And he's bored of my lectures now. He's like, mummy, stop lecturing me about ChatGPT again. But on the whole, there is this concern I have. And going back to science fiction as well, I write occasional bits of science fiction. I used to want to be a screenwriter, and I still write a bit of fiction sometimes. And one thing that occurred to me the other day is this discussion about, if you're familiar with it, the great filter: why do we have no messages from alien entities on other planets?

One argument is they've reached the great filter, something that happened that prevented their civilization from lasting long enough for us to be able to make contact with it. And then the debate is, are humans in front of or behind the great filter? Have we already survived it? Was it something like our world wars? In which case, lots of people died, but on the whole, humanity survived. Was it nuclear weapons? They're still around, but their use was several decades back, and we survived it on the whole. Not everyone did. So perhaps we're ahead of the great filter, in which case we will live forever and become an interstellar civilization; humanity will spread amongst the stars in a sort of Star Trek way. And I love Star Trek. Or it's ahead of us, and something is going to happen. It could be nuclear war, could be another pandemic that's going to destroy us, in which case we also will not survive forever as a civilization. And my science fiction speculation, in that short story I just recently wrote, was: maybe AI is the great filter. Not in the Skynet, Terminator-destroys-us way, but that at some point we stop thinking for ourselves so much that we cannot progress anymore as a civilization. And that's the speculative space that science fiction allows for. I don't know if that's the truth. I don't know if that's going to happen. But in there are just my very specific concerns about what we're doing to our ability to think right now.

BM: Look, I couldn't agree with you more. I wrote something some time ago, Why I Don't Worry About Skynet. And then I followed with The Greatest Danger of AI. The way I see it, the greatest danger is the death of a thousand cuts: shallowing our thinking, and also shallowing our relationships, because it's substituting for them... We can talk about romantic relationships, which is a more colorful example, but it's actually shallowing all kinds of interpersonal interactions by replacing them. Because it can. We can have a butler, we can have a secretary, we can have 10,000 AI agents taking our phone calls and doing our bits and pieces.

BS: Yeah. I mean, I made a series of short documentaries some years back. The last one came out in about 2018. So they're horrendously out of date. But the second one we made was specifically, it was called Friend In The Machine, specifically on this question of companionship.

BM: Yeah.

BS: Is there such a thing as a perfect friend? Would AI be a perfect friend? Or is actually a perfect friend an imperfect friend? Because they're not going to have all the kind of edges we butt up against when we have a conversation with a real human being where we don't agree about everything. And sometimes other people let us down, we let them down.

BM: Yeah, maybe we need that.

BS: Exactly. So maybe our relationships need something imperfect for them to work. And if we've got this constant friend, often now on our phones, that we can go to, who's going to tell us not only the truth, but also the truth about ourselves. ‘You're great. You're fantastic. I think you're really smart.’ What does that do to our ability to form relationships? And some people will say that's great. And that's better. And that will solve a lot of problems, but I'm not so sure.

BM: I wish I had met the people that wrote the movie Her.

BS: Oh, yes, yeah. Fantastic film.

BM: Because it really deals so beautifully with that.

BS: Yeah, yeah.

BM: And it's all about falling in love with the AI in the phone, and what it does.

BS: Exactly. And people are doing it now, right now. And not even with the same level of advanced AI as in Her. She ultimately becomes a completely sentient AI.

BM: Yeah.

BS: We're not there. 

BM: No. 

BS: We don't know how we will know we're there.

BM: We've got the basic models, and they're already a problem.

BS: It's already a problem.

BM: And in religious, just in terms of organized religion and people worshipping and all that.

BS: Yeah.

BM: 2050 from here, I mean, I guess it's a more complex environment. It's a more intermixed environment that we're seeing.

BS: Yeah, yeah. I mean, it's hard to generalize on statistics. Like I said, the idea that the world was becoming less religious just doesn't hold up. It's becoming more religious, more plural in specific places, some places less religious. So more of the same, I imagine. By 2050, we may have seen some of these new religious movements around AI become established. Possibly not. But then you could have said the same thing about Mormonism when it first turned up. I mean, Mormonism in its first forms, sorry to anyone who's Mormon, was very shocking to people. It wasn't the kind of Christianity people were familiar with. It does hold connections to Christianity and, of course, the Bible. But it introduced some specific ideas about prophecy that were very shocking and different. But now Mormonism is a very established religion. We had a vice presidential candidate. Sorry, he was actually a vice president, not just a candidate. He was a Mormon, which if you'd gone back in time would have seemed very surprising. But this is just how things happen. Things become established and familiar. And maybe in 2050 there will be established AI religious groups that people find very familiar, and it won't be shocking that some people are thinking this way.

BM: That's brilliant. So that paints a picture. You've got me thinking about something else just based on reconceptualizing our place in the universe and all this which AI is doing in different ways. So just one of the positive things gives me a little bit of hope on the plus side that I really love. There's quite a number of universities putting a lot of resources into decoding the grammars and languages of things like whales and other animals.

And I think if anything's going to help us humble ourselves a little bit and maybe in a positive way reconceptualize our place in the universe, that's one of them. When you can call an elephant by name, which we can do now, or at least know the intonation that will actually reach one elephant in the herd. So we suddenly have a better appreciation that they're beautiful and deserve to share the place with us.

BS: I think that's something I see a lot in the AI discussion, that once people start to think about AI, they start to think about other non-human “others” as well. They start to think about our relationship to the environment as well. Some of the AI new religious groups I look at have a very environmentally focused perspective, that it's not just about the intelligence that's AI, but it's all intelligences in a sort of post-human sense as well. There's a fantastic book I'm reading at the moment, science fiction, called The Mountain in the Sea, which I haven't finished, but so far it's really a story about octopus sentience, but there is a sentient AI android in it as well. So you've got this conversation about the relationship between humans and octopi, humans and androids, androids and octopi. What is sentience? What is intelligence? So in this space, we start to speculate a bit more about where other intelligence can exist. What kind of intelligences there would be? I mean, Nagel wrote decades ago about how we cannot... Humans cannot know what it feels like to be a bat. We cannot think like a bat. We cannot experience the bat-ness of a bat. And likewise, we see this sort of developing recognition that octopi probably are entirely intelligent, just in a very different way to us. And likewise, if you follow the speculations about AI, it will get to a point where it is intelligent, it is sentient, but entirely different to us. I mean, again, science fiction does it. Ex Machina, at the end, there was an original scene that's not in the theatrical release where you do get to see Ava's perspective as a sentient AI. And how she perceives is so dramatically different.

BM: Oh, I wanna see that.

BS: Yeah, I think it might be on the DVD, it might be an extra scene, but Alex Garland decided to just stick with the human perspective for the whole film and leave this off. So it's more of a question.

BM: It's completely alien to the way we would perceive?

BS: Yes, yeah, so she interacts with the helicopter pilot and how she's seeing him, how she hears him. And so in that sense, yeah, we've got these speculations already. And the more we think about AI, the more we're gonna speculate on animal intelligences as well.

BM: That makes it a much better movie. Sorry, I want to see that version.

BS: I'm sure it's available somewhere. I have seen a clip before.

BM: All right. Now, as we close this off, I'm just wondering, is there anything you wish everyone would reflect on more deeply as we move forward? I mean, given all that you know, and you're researching. Because AI is going to impact our lives in so many ways and spiritually is one of them.

BS: Yes.

BM: Yeah. If you've got any thoughts.

BS: Yeah, I mean, I suppose as I said, my red lines are around harm. 

BM: Harm. 

BS: So religious improvisation. Wonderful. We've always done it. People can believe things, people can act in particular ways off those beliefs. But as soon as you institute something that causes yourself harm or others harm, that's the big issue.

And harm is a real spectrum. Just engaging with large language models, as we said, and handing over agency to them or handing over our cognitive abilities to them, that's also harm as well. That's going to be harm to yourself and harm to the entire human race. So the more we can speak critically about what we want the future to look like, and the less we just blindly accept the future that we're being sold by particular voices, the better.

I mean, we are... It is a very difficult situation. I recognize that from top down, the implementation of AI sometimes is completely unavoidable.

BM: Sure. 

BS: Algorithmic systems, decision-making systems are everywhere. You don't always get a chance to choose. But I think in spaces where you can choose, you should. You should find out where your own red lines are. For me in particular, I do use some AI for translation because my German is particularly poor. But I'm absolutely 100% against the use of AI in creative work, including my own nonfiction writing.

BM: I'm the same. I can't use it in writing.

BS: I just, I can't. I find AI generated art to be offensive. People keep sending it to me. I do reference particular examples, but I never create my own.

BM: And writing is thinking. For me it's thinking. You gotta write...

BS: Yeah. Exactly. And I love thinking. I love writing. It's not always great, but that's what I enjoy doing. Why would I want to hand that off to other people? So yeah, figure out where your own red lines are. There are still things we can avoid. Not everything is avoidable, unfortunately. But yeah, the less we buy into the stories that we're being told without really unpacking them and thinking about them, the better.

BM: And think broadly about harm and what it's doing.

BS: Yeah.

BM: And normally end by saying, what's the biggest thing we can do to create a better future? I mean, is that it? Do you think, or is there something you'd add to that? Or is it just really, we're all going to, as individuals, have to...

BS: Yeah, I mean, I think that's right. From the grassroots up, we're going to have to think about what the future is that we want.

BM: That we want.

BS: Not necessarily uncritically accept the future vision that's been presented by, as we said, some of these sort of more neoliberal corporate voices. But also engage with humans more. We were talking about relationships, embodied interactions, online interactions as well. Like I'm not antithetical to online connection. I've been doing digital ethnography for like a decade or so. People can have genuine human connections online. Absolutely true. But if your connections are increasingly with chatbots, that might not be the best thing for you. Maybe?

BM: Probably not.

BS: Probably not. Who knows? We'll see. I mean, this is, again, this is a longitudinal thing. We don't know the effect of this. It's only really been a few years that people have had direct interaction with large language models in particular. So we don't know where we'll be in 10 years time.

BM: I have a theory that if we have that perfectly analogous experience, we have a robot or an AI lover, it's still going to feel ‘not quite right’, only because even though it's perfect, we know that it's not a human that's loving us. I think the need to know that it's that imperfect thing, a human, that loves us is part of it. But who knows? We'll only know when we get there.

BS: We'll see. Yeah, this is it... I'm also open to being proven entirely wrong and this will be a utopian future where everyone's super happy. But who knows?

BM: Who knows? Professor Beth Singler, I've been looking forward to this for so long and you haven't disappointed – it's been a total joy! Thank you so much for spending so much time with me.

BS: Thank you so much for having me and having this chat with me. I'm always very happy to talk at length, obviously, about what I'm looking at. Thank you so much.

BM: The conversation will continue, I promise. We're going to talk again. 

BS: Thank you.

BM: Thank you.

 