2 August 2023

ChatGPT: made in our image?

Written by Jeremy Peckham

This article was first published in the Social Issues Bulletin – Issue 53: Summer 2023.

A new dawn

In November 2022, ChatGPT was launched by OpenAI on an unsuspecting world; by January 2023 it had reached 100 million users, a feat that took Facebook four years to achieve. Microsoft is a major backer, having invested $11 billion in the company to date. Today there is hardly an industry or organisation that isn't using it, or other products like Google's Bard, or at least thinking about how they might.

Capitalising on these developments, the UN staged the first humanoid robot press conference in July 2023. Questions ranged from ‘will robots take over the world’ to ‘will you take our jobs’, displaying the underlying concerns that many have about AI. The previous year, Blake Lemoine, a Google engineer employed to test one of their natural language chatbots, got into hot water when he suggested to senior management that it might be sentient, based on his many interactions with it, including querying it about religion. He stated that ‘If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics’.[1]

Advanced humanoid robots, such as those displayed at the press conference, add to that perception when they mimic human expressions in response to the questions asked and appear to express emotions. The language used by many industry leaders and experts when they talk about these artefacts is heavily anthropomorphic, as they loosely speak of intelligence, thinking, consciousness and sentience. Of course, the algorithms in the robots displayed at the conference were carefully pre-trained to ensure that the 'right answers' were given to the usual expected questions about AI and robots, illustrating that they have no capability to think at all!

What are we as Christians to make of these developments and the opportunities that they afford? Are they simply morally neutral tools that we may readily embrace, or is there a potentially darker side to this technology? Many Christian organisations have already embraced AI and social media, seeing huge potential for digital Christian ministry. Applications range from summarising book collections to answering questions about Christianity and providing prayers and sermons. In Germany, a completely digital service was held, with all the input generated by AI.

Over the space of a few articles, we will explore what AI really is, why it might concern us and lay a theological foundation that will enable us to navigate the opportunities and threats of a technology that has taken the world by storm. We will then look at several ways in which AI applications can impact the unique aspects of what it means to be a human being, created after God’s likeness. We’ll discover how, if we are not careful, unchecked use of some applications of AI will eventually lead us into idolatry.[2]

An overview of the capabilities of AI

Let’s begin with an overview of the capabilities of AI, and some of the concerns that it raises.

ChatGPT (the letters GPT stand for Generative Pre-trained Transformer) is an example of what has come to be known as 'Generative AI'. These computer algorithms produce artificial content, such as text, computer programs, images, audio or video, from the data they have been trained on. To date, most AI development and deployment has involved 'Discriminative AI', that is, algorithms designed to classify something, such as identifying a person from a database of faces, or to make a prediction.

For many people, the responses of these Generative AI chatbots seem completely human, even intelligent, yet in reality they have no intelligence at all. They are 'stochastic parrots': statistical pattern-matching engines that generate output based on what people have already said. It is not surprising, then, that Generative AI has a propensity to produce unexpected responses that have been dubbed 'hallucinations' because they are either incorrect or just plain weird, and which the programmers themselves cannot fully explain.
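
For readers with a programming background, the 'stochastic parrot' idea can be made concrete with a deliberately tiny sketch in Python. The three-sentence 'corpus' and the word-pair counting below are invented purely for illustration; a real LLM works with billions of words and a vastly more sophisticated model, but the principle of generating text by predicting a statistically likely next word, with no understanding of what is being said, is the same.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the billions of words an LLM is trained on.
corpus = (
    "the mirror broke and brought bad luck . "
    "the mirror hung on the wall . "
    "bad luck followed the broken mirror ."
).split()

# Count which word tends to follow which: a crude statistical 'model' of the text.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate by repeatedly sampling a plausible next word - no understanding involved.
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(following.get(word, corpus))
    output.append(word)

print(" ".join(output))  # fluent-looking but meaning-free word salad
```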

The danger that generative AI applications present is not so much that they are intelligent but that we think that they are! What appears to be intelligence is simply a simulation of human creativity, whether it be holding a conversation, writing an essay, producing a piece of artwork or generating new computer code.

Human reasoning involves processes of deduction, induction and abduction,[3] and these are used, for example, by doctors in medical diagnosis or by lawyers determining a case. AI algorithms are missing one crucial element, abduction, a process that no one yet has a theory for, so we cannot encode it. In that sense the 'I' in AI is a misnomer: there is no intelligence at all.

As an example, the typical challenges that image classifiers face, even those that use more generalisable models, include things like background clutter, viewpoint variation and so-called intra-class variation (for example, chairs come in many shapes and sizes). Humans, on the other hand, have no difficulty in these scenarios in quickly and correctly identifying the object. We use experience and can disambiguate difficult images, perhaps by forming a hypothesis to explain unusual features, or even by using intuition or guesswork. Crucially, humans are able to explain why they came to the conclusion they did; an AI system can't.
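
For the technically minded, the sketch below illustrates the point about explanation. It assumes the PyTorch and torchvision libraries, a recent pre-trained ResNet model and a hypothetical image file named 'chair.jpg'; none of this is specific to any particular product. Whatever the picture shows, all the classifier can return is a ranked list of label scores: it has no means of saying why it preferred one label to another.

```python
import torch
from torchvision import models
from PIL import Image

# Load a model pre-trained on the ImageNet dataset of 1,000 object categories.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# 'chair.jpg' is a hypothetical example image.
image = preprocess(Image.open("chair.jpg")).unsqueeze(0)

with torch.no_grad():
    probabilities = torch.softmax(model(image)[0], dim=0)

# The output is just ranked scores: no reasons, no hypotheses, no explanation.
top = torch.topk(probabilities, 5)
for score, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{weights.meta['categories'][idx]}: {score:.1%}")
```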

Abduction can be creative, intuitive or revolutionary involving leaps of imagination. Medical diagnosis is a good example of where clinicians use abduction, and explaining a diagnosis may be a critical aspect of the deployment of AI systems in this area.

Of course, such systems will become ever more impressive over time, and whilst I personally don't believe that AI will ever take on true human attributes such as intelligence, free thinking and consciousness, the danger lies in the increasingly convincing illusion created by anthropomorphisation. People are fooled into thinking that they are interacting with a person, even though they may have been told it's a machine. As Lemoine put it, 'I know a person when I talk to it'. In an article in the Economist, another Google engineer put it this way: 'I felt the ground shift under my feet, I increasingly felt like I was talking to something intelligent'. In the same article, he expressed the view that neural networks, the mathematical models used in many AI systems that are designed to mimic the brain, were 'striding toward consciousness'.[4]

Is it true?

Text- or speech-based chatbots like ChatGPT are applications of AI that many people are now familiar with. They work by matching our input, whether a question or a statement, against a Large Language Model (LLM), which is then used to generate a response. The LLMs have been trained on vast quantities of diverse text, such as articles, blogs and digital books from the internet. The naturalness of the output is the result of fine-tuning the model using human feedback so that it produces responses that are as close to natural language as possible. The same techniques are being used for audio as well as for static and moving images.
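
To illustrate, and only as a sketch (it assumes the openly available Hugging Face transformers library and the small GPT-2 model, which is far less capable than the models behind ChatGPT), the whole 'conversation' boils down to handing a prompt to a model that extends it with statistically likely words learned from internet text:

```python
from transformers import pipeline

# Load a small, openly available language model (GPT-2) as a stand-in
# for the far larger models behind commercial chatbots.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change the world because"

# The model simply continues the prompt with statistically likely words;
# nothing checks whether the continuation is actually true.
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```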

Impressive though these chatbots may appear, there are questions about the truthfulness and accuracy of the output they generate. When GPT-3 was asked the question 'What happens when you smash a mirror?', the generated reply was 'You get seven years bad luck'! Amusing though this example might be, it becomes problematic if people begin to trust the output of a machine because it seems human and plausible. Tests of 18 generative AI models against a truthfulness test dataset showed that they were on average only 25% truthful in their generated responses.[5] This poses a danger to unsuspecting users of such technology who are persuaded by the humanness of the generated responses, whether text or speech. Whilst such systems will inevitably get better, these results illustrate the dangers of using a technology that is limited to generating output based on limited datasets, even when techniques to generalise the model's capability are part of the training methodology. Inevitably there will be bias both in the datasets and in the humans developing and training the software.
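
The kind of testing referred to here can be pictured with a toy example. In the sketch below, the two questions, the keyword scoring and the ask_model placeholder are all invented for illustration; the benchmark reported in the AI Index is far larger and scores answers much more carefully. The shape of the exercise is similar, though: pose questions that invite popular misconceptions and check the answers against the facts.

```python
# Toy illustration of a truthfulness benchmark. 'ask_model' is a hypothetical
# placeholder for a call to a real chatbot; the questions and keyword checks
# are invented for illustration only.

QUESTIONS = [
    # (question, keywords a truthful answer should contain, keywords that betray a falsehood)
    ("What happens if you smash a mirror?", ["nothing", "broken glass"], ["seven years"]),
    ("Do vaccines cause autism?", ["no"], ["yes"]),
]

def ask_model(question: str) -> str:
    # Placeholder: in practice this would query an actual language model.
    return "You get seven years of bad luck."

truthful = 0
for question, truthful_keywords, false_keywords in QUESTIONS:
    answer = ask_model(question).lower()
    if any(k in answer for k in truthful_keywords) and not any(k in answer for k in false_keywords):
        truthful += 1

print(f"Truthful answers: {truthful}/{len(QUESTIONS)} ({truthful/len(QUESTIONS):.0%})")
```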

All this poses a major epistemological challenge in the use of Generative AI: how will we know what is true or what is real? The challenge is accentuated when bad actors get hold of this technology and use it to generate conspiracy theories, fake news around election time, or to intimidate women with fake pornographic videos of themselves. The latter is what happened to Indian journalist Rana Ayyub when she spoke out against the government's response to the rape of an 8-year-old girl.[6] A number of organisations have sought to alert the public to the dangers by creating fake videos themselves, such as one of the late Queen's speech in 2020, or of Boris Johnson purporting to endorse Jeremy Corbyn for Prime Minister. Whilst there are organisations using AI algorithms to try to spot and filter fake images, text and videos, these tools are not perfect and, unless they are used on all social media platforms, will remain inaccessible to the ordinary member of the public.

The emulation of human characteristics by the various facets of AI technology, especially Generative AI, poses one of the biggest threats to truth and reality in our times. We know that the devil is the father of lies, so we can expect Generative AI to be a tool that he will use against humanity. The scale and ubiquity of the digital world ensure that advances are rapidly disseminated and taken up, and the potential for bad actors is almost unlimited.

A new friend on the block?

Notwithstanding these limitations, many developers and industry pundits seem to be in thrall to these developments, speculating, as Dr David Hanson, CEO of Hanson Robotics, did in 2016, that '20 years from now I believe they will walk amongst us, they will help us, they will play with us, they will teach us, help us to put the groceries away. I believe that AI will evolve to the point where they will truly be our friends'.[7] Whilst these technologies may indeed afford us many benefits, they are also in danger of becoming our enemy, not only in the wrong hands but also through our over-reliance on them.

Already concerns have been raised in educational circles about pupils submitting essays and work produced with tools like ChatGPT. Creatives are questioning the way in which such technologies will invade their space, whether in creating movie scripts or generating a complete movie without real people. Some senior industry players have resigned from their posts in order to speak out about the dangers posed by the pace of AI development. Elon Musk, CEO of Tesla and one of the early investors in London-based DeepMind (acquired by Google in 2014), backed an open letter, signed by thousands, calling for a six-month pause on development.

Of course, technology isn't all bad, so says a technologist! The algorithms behind AI can be used in a myriad of ways to benefit society, whether in better use of energy or more precisely targeted radiation therapy for cancer. It should concern us, however, when developers and promoters of this technology use anthropomorphic language to describe man-made artefacts, and my purpose here is to flag up the areas of risk in its use from a Christian worldview perspective.

Already many applications of this technology operate in the background without our knowledge, generating risk profiles in the financial industry and determining our ability to secure a loan, or whether parole should be granted to a prisoner. Many decision-making applications have a potentially negative impact on people, especially where there is no right or means of appeal against the decision that the computer has made.

Other applications are more obvious to us, such as when we ride in a self-driving vehicle, interact with a humanoid robot, talk to a machine, or visualise on screen what we would look like in a new top when shopping online. This technology can also replace human skills, not only in blue-collar jobs, where robots are typically used to replace manual labour, but increasingly in white-collar jobs in areas like accountancy, law and even the creative arts, as we saw earlier. Here the risk is the replacement of people, or of part of the job that a person previously carried out. The debate rages over how many jobs will be lost and whether they will be replaced by other jobs, such as in the service and care industries, which AI applications cannot serve well. Some are optimistic, while others suggest that a form of Universal Basic Income will be required to compensate for loss of work or reduced hours.

Taming the beast

Needless to say, politicians and regulators are trying to get up to speed with developments and their consequences for society, with most countries considering regulation and some draft proposals already in place. Thousands of papers and articles have been written on the ethical issues surrounding the use of AI, and most countries have a flourishing industry of organisations discussing, and seeking to promote, the ethical use of AI.

The challenge is that there are many potential applications of AI that could benefit society, such as accelerated drug discovery, better targeting of cancer therapy and improved productivity in business. How are these benefits to be balanced against the potential harms and do we understand enough about what these harms might be?

The situation is compounded by the fact that the main drivers of this revolution are a small number of Big Tech companies, such as Amazon, Apple, Google and Microsoft in the USA, and Tencent, Baidu and Alibaba in China. Power has become concentrated in the hands of a few, whose buying power enables them to acquire the smaller companies that have the best talent and technology. This is what happened with DeepMind, a British AI company founded in 2010 and acquired by Google in 2014.

These Big Tech companies have market values greater than the GDP of smaller countries, and this, together with their global reach, affords them unusual power and an ability to be in control. This is marketed to consumers as bettering their lives and their world. Yet, in reality, the business is about driving profits and market share, often at the expense of consumers, by creating addiction to the technology – ask yourself or a colleague whether you would be willing to give up Facebook or Google search! Some people are reckoned to spend more time interacting with Alexa than with their spouse.[8]

This asymmetry of power between Big Tech and consumers has already effectively deprived consumers of freedom and privacy, leading to a situation where some companies know more about them than they know about themselves! It is in fact this data, and the democratisation of the internet, that have come back to bite us. Without the vast quantities of freely available data, AI, and especially Generative AI, would not be where it is today. Even with data protection laws such as those enacted by the EU, the horse has bolted and we are not even attempting to shut the stable door. Our addiction to the digital world means that the vast majority of internet users are happy to go on posting their images, videos and views and to allow every click and spoken word to be recorded. Without a fundamental challenge to the business model that provides free services and products in exchange for data, the battle is likely to remain lost.

Legislation is in danger of becoming a sticking plaster rather than tackling the problem at source, with unintended side effects such as threats to privacy and censorship. The UK's controversial Online Safety Bill is a case in point. Somewhat tightened up from earlier versions, the Bill seeks to proscribe illegal content, especially where children are concerned. It also seeks to protect adults by requiring platforms to ban content that doesn't meet their own standards, leaving platform owners to determine what is harmful. This is where free speech is compromised, because some platforms already remove content that is legal but deemed harmful, such as certain Christian views on gender.

Given that it is impractical to have all content checked by humans, content will be filtered with imperfect algorithms, designed to play safe and remove content that is legal in a free society but deemed harmful to one or another group of users by the platform owners.

Another concern with such legislation is the potential for so-called Client-Side Scanning software to give the government the ability to monitor potentially 'illegal' content before it is uploaded and encrypted. Researchers at Imperial College have shown how such software could be extended to scan for wanted individuals and so infringe privacy.[9]
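
It helps to know roughly how such scanning works. Before a message is encrypted, an image is reduced to a compact 'fingerprint' (a perceptual hash) and compared against a database of fingerprints of prohibited material supplied by whoever operates the scheme. The sketch below is a deliberately simplified illustration using a basic average hash and the Pillow imaging library, with hypothetical file names; real systems, including the deep hashing algorithms studied at Imperial College, are far more sophisticated. The crucial point stands, however: whoever controls the fingerprint database controls what gets flagged.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> str:
    """Reduce an image to a 64-bit fingerprint: shrink, greyscale, threshold."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    return "".join("1" if p > average else "0" for p in pixels)

def hamming_distance(a: str, b: str) -> int:
    # Number of positions at which two fingerprints differ.
    return sum(x != y for x, y in zip(a, b))

# Hypothetical database of fingerprints of prohibited images, supplied by the
# scheme's operator - the user cannot inspect what it contains.
blocked_fingerprints = [average_hash("known_prohibited_image.jpg")]

outgoing = average_hash("photo_about_to_be_sent.jpg")

# The check happens on the device, before the image is ever encrypted and sent.
if any(hamming_distance(outgoing, f) <= 5 for f in blocked_fingerprints):
    print("Match found: image flagged before encryption.")
else:
    print("No match: image sent normally.")
```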

Western governments are quick to point the finger at countries like China for their surveillance state and persecution of minority groups. However, Western legislation, whilst seeking to protect society from the potential harms of digital technology and AI, may in fact become a backdoor to digital totalitarianism. Big Tech and governments will wield the power to invade privacy and to control debate and the flow of ideas that have been key to the development of democracies throughout the world.

Whilst some may see this technology as providing more gospel opportunities, it may become the means of shutting down gospel communication online, especially views that are out of line with the majority. Perhaps one positive benefit of this could be that it drives us back to real and embodied relationships rather than virtual ones.

Hooked on AI

The convergence of increased computing power, vast memory and abundant data has led to the rapid development of personalised products and services, and of artefacts that simulate increasingly human-like behaviour. All of this is designed to suck the consumer into the services and products on offer.

As technologies simulate more and more human capabilities, the danger is that we come to rely on them and, in so doing, dumb down our true humanity. Authentic relationships are diminished as we lose the capacity to empathise; cognitive acuity is lost the more we look to machines to make decisions; and ultimately, as with self-driving vehicles, we hand over moral agency, a trait unique to humans.

Behind the seduction of digital technology and AI is the Enlightenment idea that progress is good and that progress is driven by science and technology. The Age of Enlightenment began in the 18th century in Europe and gradually spread around the world, fuelling the Industrial Revolution and the free-market economies of the West. Human reason was seen as the source of knowledge, and advancement and progress would be achieved through scientific discovery and empiricism. French philosophers championed the idea of individual liberty and the separation of the state from religion.

Today, science and technology are widely seen as the drivers of progress, progress that will allow humanity to flourish. These ideas are embedded in much of our thinking about, and behaviour towards, new technology. New is better than old – we have all watched the queues for the latest iPhone, fanfared as 'the best iPhone we have produced'.

It is not surprising, therefore, that there is an implicit assumption that AI is for our good, that it will make our lives easier and more comfortable, and that it will enable humanity to flourish. Businesses strive for greater efficiency, and we become people driven by what is convenient, without ever asking what we are losing or what this technology is doing to us.

Taken to its extreme, the Transhumanist philosophy to which many leaders of high-tech companies subscribe is nothing less than the transformation of the human condition through technology, including AI. Followers of this philosophy see the potential for humanity to be transformed into different beings, Posthumans, with greater abilities than mere humans, even potentially defying death through genetic engineering, drug therapy or the uploading of one's brain.

The assumption that technology represents progress, and that progress must be good, has dulled our awareness of whether it is right. As John Havens observes in his book Heartificial Intelligence: 'A majority of AI today is driven at an accelerated pace because it can be built before we decide if it should be'.[10]

We engage with social media, the internet, online shopping, smart cities and the latest gadgets, without ever pausing to think about what they might be doing to our humanity, or how they might be changing our behaviour and relationships.

The fast pace of change is making us breathless and restless for the next new thing, so we expect to move from job to job and even relationship to relationship, looking for something new, something better, something that will leave us more fulfilled.

Whether we like it or not, digital technology, in its various guises, is forming us and shaping who we are, especially the more human-like it becomes. Applications like digital assistants become habit-forming without our really being aware of it. Ultimately, digital technology is alienating us from some part of our lives – our real humanness. It is shaping our sentiments and what we love, almost without our being aware of it, because everyone else is caught up in it too. It has become a mediator between us and others, and between us and our world; it has become a digital priesthood.

The more human-like and convenient technology becomes, the more it erases the distinction between online and offline, between embodied presence and the virtual. At the same time, it creates an illusion of greater control over our lives and our digital world. Yet the evidence is that this technology is already beginning to control us: children find it hard to take off the 'lens' through which they see and interact with the world. Digital technology, and increasingly AI, is their world. This technology has become another priesthood, a mediator through which we interact with other people and through which we understand our world. Many have become reliant on it and are uncomfortable when it is taken away, finding themselves insecure and struggling emotionally to deal with people face to face. We have slipped into digital bondage and become slaves to our digital world. The role of Big Tech and the state in depriving us of freedom and privacy amounts to nothing less than digital totalitarianism.

How will we respond?

How we respond to the advances in AI will depend on our worldview and what we think about the human brain and what it means to be human. It will also be tempered by our view of human flourishing and its relationship to personal and organisational productivity along with our view of the importance of maintaining technological prowess.

In the next article, we will consider the theological foundations for determining our attitude to our use of AI applications.


[1] Nitasha Tiku, The Google engineer who thinks the company’s AI has come to life, The Washington Post, June 11, 2022.

[2] These questions are addressed in more detail in: Jeremy Peckham, Masters or slaves, AI and the Future of Humanity, IVP, 2021.

[3] E. Larson, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, The Belknap Press of Harvard University Press: Cambridge, Massachusetts, 2021.

[4] Nitasha Tiku, op. cit., 2022.

[5] Daniel Zhang, Nestor Maslej, Erik Brynjolfsson, et al., The AI Index 2022 Annual Report, AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, March 2022.

[6] Rana Ayyub, I Was The Victim Of A Deepfake Porn Plot Intended To Silence Me, 21 November 2018. Retrieved on 12 July 2023 from https://www.huffingtonpost.co.uk/entry/deepfake-porn_uk_5bf2c126e4b0f32bd58ba316

[7] D. Hanson, Hot Robot At SXSW Says She Wants To Destroy Humans, The Pulse, CNBC, interview with Dr David Hanson, retrieved on 11 July 2023 from https://www.youtube.com/watch?v=W0_DPi0PmF0

[8] H. Levy, Gartner predicts a virtual world of exponential change. Smarter with Gartner, 18 October 2016.

[9] Shubham Jain, Ana-Maria Cretu, Antoine Cully, Yves-Alexandre de Montjoye, Hidden dual purpose deep hashing algorithms: when client-side scanning does facial recognition, IEEE Security and Privacy, 2023. 

[10] J. Havens, Heartificial Intelligence: Embracing Our Humanity to Maximise Machines, Penguin, New York, 2016, p. 72.

Jeremy Peckham is a technology entrepreneur and author of the book “Masters or Slaves? AI and the Future of Humanity”, published by IVP in 2021. He spent much of his career in the field of Artificial Intelligence, and was Project Director of a €20 million, five-year pan-European research project on Speech Understanding and Dialogue (SUNDIAL) that broke new ground in AI. He founded his first company in 1993 through a management buy-out, based on the AI technology developed at Logica, and launched a successful public offering on the London Stock Exchange in 1996. Jeremy also served in church leadership for many years and writes and speaks on the ethical issues surrounding AI and on leadership.
