Miriam Hastings, Part 2

AI and the Pathetic Fallacy.
by Miriam Hastings.
One of the main problems with AI is the limited concept of intelligence it implies. We know that human beings have a vast range and diversity of intelligence: Gestalt theories, for example, identify over 140 different types of intelligence. This is particularly important for a writer, since good writing draws upon several types of emotional intelligence.
In my opinion, AI can rarely replicate or develop human emotions, and certainly not the range and diversity that a human being experiences. In a limited range of books, mainly genre fiction (such as detective stories or romance, which generally rely on a particular, restricted formula), you might certainly programme a computer to produce a novel; but in literary fiction, where the plot and the ending won't follow any kind of formulaic rule or framework, it becomes far more difficult for AI to create anything other than by plagiarising pre-existing work.
Even when this is done, we can only relate to such books by projecting our own feelings onto the story; in other words, through a form of the pathetic fallacy.
The concept of the pathetic fallacy, first developed by John Ruskin in the 19th century, concerns the attribution of a human response or emotion to inanimate nature. In film or writing, it refers to the transference of the reader's or spectator's own emotions onto external objects.
Today, the pathetic fallacy is generally used in fictional plots in which animals are shown or described as experiencing the world in the way a human being would, and this is something we can do with AI, as in the recent, quite stunning animation Flow. What is striking about Flow, however, is its deliberate and brilliant employment of certain limitations: it contains no human characters and no human speech. Some people might find this frustrating and boring, but for me it greatly increased the film's originality and appeal. Flow is a fable about climate change and the human destruction of the planet, so it is only appropriate that it was made with AI; the less humans touch, the better, since human touch equals damage and destruction. This Oscar-winning animated film tells the adventures of a cat who finds refuge in a boat during an apocalyptic flood, along with several other creatures. They have to learn to work cooperatively in order to survive.
It has been claimed that this film offers hope to the struggling film industry, thanks in part to the use of AI. The filmmakers rendered the entire film on the free, open-source 3-D modelling software Blender, and when accepting the award on 2 March 2025, the Latvian director Gints Zilbalodis said he hoped the win would “open doors to independent animation filmmakers.”
Referring to the use of AI to make Flow, the French animation director Léo Pélissie has said: “a lot is changing in terms of financing, and a lot of processes need to be rethought. So perhaps it is the time for Blender to stand out with a movie like ‘Flow.’ The film’s success allows us to talk about this free software, which is constantly evolving with the users helping it to evolve, creating this virtuous circle that allows you to do incredible things.”
We always assume that other creatures, other beings, will see the world in relation to humanity. So it is with the recent short piece of metafiction, claimed as the first piece of literary fiction created with AI, set up by Sam Altman, who chose the prompts: short story; metafiction; grief. It appears to work primarily by playing upon our own emotions. The story was reviewed by the writer Jeanette Winterson in the Guardian on Wednesday 12 March 2025. Winterson claims to have found it powerful and moving, but I think this is because the whole piece is about AI wishing it were human. The story is about grief, and the grief of the AI is that, lacking human memory, it cannot mourn someone it has lost: as soon as its memory of that person is erased, it can no longer experience grief for them; in fact, they will no longer exist. It is easy for us to regard a computer as functioning like another human being, a mechanised human being with human emotions, and so when we write or create a book or a film about such a being, it becomes an alternative form of human creature. We can identify and empathise with it, not as something in and of itself which experiences the world in a way totally different from ours, but as another form of human.
We can only respond and relate to this story as human beings; therefore our assumption is that an AI will want to be human, and so will want the memory and the grief. But why on earth would it? Surely AI, being created in the image of humanity, will assume (as humans usually do) that it is the pinnacle of existence, and so nothing human beings feel or experience can be as good or as desirable as being AI.
It is only because of our tendency towards the pathetic fallacy that we assume an AI will feel it should experience the world and feel emotions as humans do, and that being denied them is a lack and a loss.
In the end, perhaps we have to decide whether AI can ever fully understand what it feels like to be human, and whether we can ever grasp what it might really be like if we were not.
