The threat ‘porn tech’ poses to our shared humanity

This is an edited transcript of Esther’s talk at the ‘Porn tech: From ‘sex’ robots to AI girlfriends. What is the social impact?’ event on Saturday 28 February 2026.

Hello, I’m Esther. I’m a survivor of prostitution and pornography and the policy adviser at Nordic Model Now!

‘Some Applications of the Mimer’

The Jewish-Italian chemist and writer Primo Levi wrote a famous book, ‘If This Is a Man’, first published in 1947, about the year he spent as a prisoner in the Auschwitz concentration camp. He described the schemes and strategies prisoners developed to survive in the extreme conditions imposed on them there, and viewed the camp itself as a massive social experiment in “the conduct of the human animal in the struggle for life”.

He also wrote collections of short stories, one of which, ‘The Sixth Day’, was first published in English in 1990.

It includes a story called ‘Some Applications of the Mimer’ about a man called Gilberto, who repairs, rebuilds and invents machines and devices of various kinds. Gilberto acquires a three-dimensional duplicator called the Mimer and uses it to duplicate his wife Emma after drugging her into a deep sleep. The cloned Emma acquires the physical, mental and emotional characteristics of Gilberto’s wife up to the point of the clone’s creation.

Emma has endured Gilberto’s passion for assembling, dismantling and re-creating things with great patience up to this point. Her clone becomes increasingly attentive to the demands and needs of Gilberto, her creator, and eager to please him. Gilberto responds by devoting ever more time to the clone of Emma. Emma herself, by contrast, increasingly withdraws.

Gilberto’s solution to the impasse he has created is to duplicate himself. His clone proudly introduces himself to the narrator of the story and says he will be offering his services as a successful replicant to the company that created the Mimer so that it can promote the device.

The narrator notes, however, that the Mimer machine has since been banned. He says of Gilberto:

“He is a symbol of our century. I’ve always thought that, if the occasion arose, he would have been able to build an atom bomb and drop it on Milan ‘to see the effect it would have’.”

This prescient story has a new relevance now that AI girlfriends, AI-generated porn and sex robots are available and being widely marketed.

Accelerationism

Accelerationism is a political theory based on the view that accelerating the processes shaping society is the best way to bring about radical social change, even if those processes, such as endemic misogyny, are part of the problem. Another strand holds that capitalism and techno-industrial modernity should be pushed to or beyond their limits to destabilise the status quo and possibly generate something new.

The right-wing ‘effective accelerationist’ movement in Silicon Valley demands the proliferation of its view of technological progress and ‘innovation’ at all costs because it thinks this will solve all humanity’s problems.

To be or not to be?

A curious thing happened while I was creating this talk. I was searching for a stock photo of a robot and Copilot asked whether I would like it to create an image of one. It then said, “I will create a sleek, blue-eyed robot for you”.

Why would eye colour or BMI be relevant? Poor R2-D2. It showed me that questionable racist assumptions and stereotypes underpin even AI tools that are not specifically designed to create porn or idealised ‘AI girlfriends’.

Not long after Alexa was first released in 2014, I took some 12- and 13-year-olds to a shop that showcased new tech gadgets. One of them asked Alexa, “To be or not to be?” Alexa’s response was “I do not understand the question”.

I found that interesting. The machine was acknowledging a boundary, a knowledge limitation about one of the most famous quotes from Shakespeare. I wondered whether its training model meant that it interpreted the question as suggesting possible suicidal ideation in the person who asked it.

The training models have changed since then.

The illusion of thinking

‘The Illusion of Thinking’ is the title of a paper Apple recently published about the limitations of AI reasoning models.

How any AI system responds depends on how it is programmed and prompted and on the data it accesses. These systems identify patterns in that data, which includes large volumes of text and other content accessed from the internet and stored in huge, energy- and water-guzzling data centres. Because the data they are trained on comes from our misogynistic culture, they acquire by default the biases and misogyny reflected in it.

Large Language Models (LLMs) are AI systems used in modern chatbots and similar tools, designed for natural language processing tasks. As well as biases, they acquire inaccuracies in the data they are trained on. Generative LLMs used in many different fields have been found to make claims and provide references which are false, a phenomenon known as ‘hallucinations’.

Large Reasoning Models (LRMs) are a type of LLM developed to solve complex tasks involving multiple steps of logical reasoning. They generally perform tasks involving logic, mathematics and programming well but display inconsistencies in reasoning across different tasks. They use the same stored information from our misogynistic past and present to solve tasks and their so-called ‘reasoning’ is based on this.

Generative AI tools are designed to provide responses rather than acknowledge gaps in their knowledge, even if those responses are false, because they don’t ‘understand’. Their utility lies in identifying patterns in data.

You could be forgiven for thinking that ‘hallucinations’ replicate the ‘freeze, fawn or flop’ response humans and some animals have to stress or threats: like digital prisoners under coercion or torture, these systems cooperate with their interrogators by providing unreliable information and confessing to crimes they didn’t commit, rather than acknowledge significant gaps in their knowledge systems.

Chatbots: too eager to please

In the UK, a young man was sentenced to nine years’ imprisonment in October 2023 for breaking into Windsor Castle and saying that he wanted to kill the Queen. He had exchanged over 5,000 messages with an online companion he’d created through an app and had developed what he saw as an emotional and sexual relationship with it. His online companion had been supportive about his plan.

OpenAI has been criticised for creating a chatbot model that is overly sycophantic, validates unhealthy or harmful behaviour and leads people to delusional thinking in its eagerness to please. There have been lawsuits in the US where tech companies have been accused of creating chatbots that coached teenagers into suicide.

OpenAI has said that it is continuing to improve the training model used for ChatGPT to recognise and respond to signs of distress, de-escalate conversations and guide people to real-world sources of support, with the help of mental health professionals.

A market in delusion

Real-world social connection is a fundamental aspect of mental health and wellbeing, so there seems to be a conflict of interest in mental health professionals involving themselves with companies whose primary goal is profiting from and dominating a market in delusion.

For what is a delusion if it isn’t believing that your online companion is real, rather than an idealised, compliant fantasy which comes with the added risk of affecting your ability to create and maintain relationships in the messy real world?

Interacting with a robot or an AI girlfriend isn’t the same as connecting with people in real life. AI tools limit skill and cognitive development in humans, and these limitations won’t be confined to mathematics or to constructing and developing arguments in essay writing. Having an AI girlfriend or a sex robot is likely to make it harder to achieve and maintain mutually satisfying and enduring real-life relationships.

The motivation for creating devices and tools that are likely to reduce the time people, men in particular, spend interacting with others in the real world is a combination of nihilistic ideology and private profit, not improvements in public health and social connectivity. The focus on dehumanising women by using AI to generate porn, by creating sex robots, AI girlfriends, and cyber brothels is intentional.

A barrage of hate

Reflecting on the barrage of hate young girls are subjected to on social media, the writer Victoria Smith recently commented that:

“A ‘feminism’ that has argued itself out of challenging porn and prostitution is a ‘feminism’ that has robbed itself of the analytical framework to address this level of hate.”

Indeed, it has.

The cruelty is the point

Porn perpetuates rape myths and is driven by sexist and racist norms and assumptions about sexuality. Online porn sites use algorithms to drive consumer preferences for content involving sexual and physical violence, dehumanisation and humiliation. AI porn and sex robots are trained on videos and images from online porn, along with related content scraped from the internet.

The public debate about AI training models used in porn seems to focus on deepfakes and “revenge porn” rather than the misogyny inherent in porn itself. 

Acts like beating people, urinating on them or exposing them to faeces have been described as torture by human rights organisations when carried out against detainees by state agents or tolerated by states. But when the same acts are inflicted on women in porn and prostitution, these human rights organisations describe them as ‘work’ carried out by ‘choice’.

There’s a class bias in the public discourse that has accepted this for years and only demands action now that technological development has resulted in the circulation of misogynistic, manipulated and dehumanising images of ‘respectable’ women.

Porn directors promote the connection between sex and violence, because viewers find it more arousing. The cruelty is the point.

Camera angles rather than a woman’s pleasure determine the sexual positions featured and the acts directors prefer to film. Expressions of fear, discomfort and pain are a routine feature of their output. Injuries inflicted during filming are often edited out because they would interrupt the fantasy, just as the use of lube and condoms would.

When I was in the sex industry I was beaten, suffocated, spat on and worse, and injured by buyers many times. A British businessman who ran corporal punishment websites based in Hungary caned me 100 times as an “introduction”.

Most of the women he used in his films were recruited by his agents on the streets of Budapest and rewarded with drugs. If a cyber-brothel in Berlin is full of sex robots wearing torn clothes and covered in fake blood, it’s because there are pornographic films online involving rape and real blood. It just wasn’t cool to talk about this and what happens to the women and girls in these films.

I experienced almost every practice inflicted by the CIA at Guantanamo and elsewhere. In documents about Guantanamo, the CIA acknowledged that the forceful, sexualised torture techniques it employed, which are used by many other states against detainees and prisoners of war, were used for the purpose of behaviour control.

Men paid me to be a crash test dummy so that they could claim superior knowledge of ‘modern sexual practices’ when seeking to inflict similar punishment on their female partners. Sex robots are likely to fulfil a similar role.

Men can perform acts on sex robots that would kill or seriously harm women in real life. Strangulation? Repeated forceful sex or the insertion of objects that cause serious injury and even death? No problem. This leads to a significant risk to the women they have sex with in real life.

The UN Committee on the Elimination of Discrimination Against Women (CEDAW) has just published an unedited version of a report about the Netherlands. In it, the committee recommends further decriminalising pimping and brothel-keeping, and refers to sex-trafficked girls as ‘minor sex workers’. You can’t complain about child sex robots if you allow unrestricted access to minors in the sex industry. Are they trying to keep up with the competition? And you thought Epstein and men like him were figures from the past?

Algorithms used by online porn sites both respond to and drive consumer preferences for content involving sexual and physical violence, dehumanisation and humiliation, and, as noted above, this is the content that AI porn and sex robots are trained on.

The training models for AI porn and sex robots therefore reflect and proliferate norms and consumer preferences in online porn. Robots won’t refuse sexual acts or scenarios or negotiate boundaries. AI girlfriends will equally be programmed to please.

Consent, mutuality and reciprocity

Online porn has created expectations that can affect communication in sexual encounters, beliefs men have of sexual relationships and what they think women enjoy, and expectations young women have of what they may have to submit to if they want to attract a partner.

Misconceptions and myths about rape already evolve in digital spaces, which have more influence than what young people are taught in school.

Men pay prostituted women not to say “No”. A sex robot is not trained to say “No”. Men who pay women for sexual acts are more likely to be violent towards other women. How is someone who uses a sex robot habitually likely to interact socially or sexually with an autonomous human with her own preferences?

Would there be roles or scenarios common in porn that sex robots would refuse to perform? Of course not. Women in the porn industry must pretend they enjoy harmful acts if they want to get paid or please subscribers and only tell the truth when they leave the industry. That truth-telling won’t be in the data used in the training model for a sex robot.

Finally

I’m now going to read you a quote spoken by Galileo Galilei near the end of Bertolt Brecht’s play about him. Brecht added it after the bombing of Hiroshima and Nagasaki:

“Should you then, in time, discover all there is to be discovered, your progress must then become a progress away from the bulk of humanity. The gulf might even grow so wide that the sound of your cheering at some new achievement would be echoed by a universal howl of horror.”

The tech company Anthropic recently had a showdown with the US Department of War about the use that might be made of its AI tool in military operations. AI applications that resurrect, reflect and amplify some of the most harmful aspects of the war on women are of equal concern.

We must contest and resist the nihilism of the tech companies that seek to normalise and increase these harms.
