Image copyright Mariann Hardey, 2025

On November 11th, I gave the opening keynote at the Deepfake and Society Symposium at the University of Otago. My colleague, Dr. Wasim Ahmed, and I were invited to set the stage for a day of critical humanities research. My talk was designed as a "zine-note" to explore the human, cultural, and political stakes of our new reality. Here is the script from my presentation.

MY ROOFER, THE DISINFO-ARCHITECT (A TRUE STORY)
Before we talk about AI, algorithms, and global networks, I want to tell you a very analogue story about a roofer. On January 1st this year, I had a serious leak in my roof, which I then had repaired. A few months later, a man knocked on my door. He didn't try to sell me a new roof; that would have been too obvious. He did something much smarter. He introduced himself as an 'expert tradesman'. He pointed up at my chimney and said, with a deeply concerned frown, "I've noticed... you've got a missed bit."

He then spun an incredibly detailed story. This tiny, specific flaw, he explained, was going to let water run down between my house and my neighbour's, leading to 'significant, costly, hidden damage'. But he had his tools. He could solve my problem, right there and then, for £450.

Now... I didn't let him 'repair' my roof. What stayed with me was the algorithm. Not a digital one, but a human one: a simple, three-step script for hacking trust. He sold me a narrative of 'urgent, hidden danger'. He sold me fear. And, most importantly, he sold me 'privileged access to a truth I couldn't see for myself'. He had identified his target: a woman, living alone, who he perhaps assumed was 'easy to manipulate'. My roofer was a disinfo-architect in analogue. He proved that a compelling fiction is more powerful than a boring truth.

This analogue con is the exact logic of modern disinformation. It's not the bald-faced lie. It always starts with the 'missed bit'. It's the cherry-picked statistic. It's the 10-second video clip cut from a 2-hour speech. It's the 'leaked' email. It's a tiny, specific 'flaw' presented as the key to a much larger, hidden danger. It is a 'performance of authenticity'.

We, especially as researchers, have been trained to believe that truth will out. That facts will win. But the roofer proves that's not true. The antidote to a bad story isn't a fact. It's a better story.

WHAT IS DISINFORMATION?
(Inspired by a Roofer and Warhammer)

This is the core of our problem. Disinformation isn't just a lie. It's the 'institutionalisation of deception': a social script, a feigned concern, a performance of expertise. A deliberate, engineered assault on our shared reality. It is the roofer's tactic, scaled up by technology. The 'missed bit' is now weaponised to turn a safe home, or a safe society, into a source of fear. The result? 'Truth itself becomes a malleable commodity'. My new roof, successfully reframed as flawed. A fair election, reframed as stolen.

And this creates the battlefield we all now live in: the 'Disinfopocalypse'. A present where the very concept of objective truth is under relentless siege. An environment where it is 'difficult, if not impossible' for individuals to distinguish between actual facts and manipulated falsehoods. Where we are 'drowning in data' (and warnings), but trust... trust becomes our most precious, and most endangered, resource.

THE "GOLDEN AGE" OF FAKES

It wasn't always this way. When I started my academic career in the late 1990s and early 2000s, the internet was in its 'Golden Age'. The most dangerous 'fake' I investigated was on an internet dating profile. My early work was on digital behaviour, etiquette, and identity. The "fakes" were catfish. The "lies" were 10-year-old photos. The stakes were personal: a bad date, heartbreak. My research question was: 'How do people perform a "true" self online?'
As researchers, we were observers: digital anthropologists studying a new tribe with a certain academic distance. The "truth" was still a knowable thing we could uncover.

Now the stakes have shifted. They have moved from personal deception to societal manipulation. The lie is no longer a 10-year-old photo; it's a deepfake video... a coordinated, AI-driven campaign. This was the end of our academic innocence. We went from being observers to being participants in what my colleague Wasim Ahmed and I call the "Disinfopocalypse": a state where we are "drowning in data" and have zero clarity on an item's source, legacy, or manipulation.

HUMAN MACHINERY OF LIES

My colleague, Dr. Wasim Ahmed, who you'll hear from next, will show you the 'battle maps': the social network analysis graphs of how lies spread. I'm going to talk about the people on that map. This is the Human Machinery of Lies. It's a simple, two-part recipe.

The Seeders: the architects of the story. These are the modern "snake oil salespeople". They craft the initial narrative. But their motives are complex. Profit: the "lucrative business" of falsehoods, where clicks equal ad revenue. Conviction: the "true believer" who genuinely thinks they've found a "universal truth" that the mainstream is hiding. They are 'authentic in their inauthenticity'.

The Amplifiers: the unwitting (and witting) chorus. This is us. The people on Wasim's network maps. We don't amplify because we're malicious. We amplify for the most human reason of all: social capital. To belong. To signal our identity. Humans are storytelling animals. We occupy this planet by creating and sharing fictions. We call them gods, nations, and money. Disinformation works because it leverages our deepest evolutionary drive: the desire to understand and belong.

THE AI ENGINE AND THE ARENDTIAN NIGHTMARE

So, we have this ancient, human machinery. This brings us to the critical question: what happens when you connect this human machinery to a new, non-human algorithmic engine?
You get the great accelerator. The algorithm builds the 'echo chambers' that trap us, feeding us more of what enrages us. It's the 'YouTube rabbit hole' on a societal scale. This technology isn't creating a new problem; it's perfecting an old one. It creates a public that changes behaviour based on 'emotional reaction, not reasoned analysis', because that analysis has been manipulated or is invisible.

Deep fakes. Who built this engine? This algorithm, this AI, is the 'great accelerant' of our times. But it wasn't built in a vacuum. It's the product of a 'tech-bro' culture that lionises disruption and scale over nuance and safety. We don't have to guess its values. Long before generative AI, the academic Safiya U. Noble, in her foundational book Algorithms of Oppression, diagnosed the harms of this culture. She showed us how a simple Google search for 'Black girls' returned almost exclusively pornography. The AI engine is built on a coded logic of gendered and racial humiliation.

It should come as no surprise that the very term 'deepfake' wasn't coined by a university lab. It was the username of a Reddit poster in 2017, promoting his 'killer app': a tool specifically designed to 'paste the faces of female celebrities onto pornographic videos'. An industrial-scale production of non-consensual, gendered humiliation.

My point is, this engine isn't neutral. Its goal is not truth. Its goal is engagement. The algorithm doesn't care why you're angry. It just knows you stayed.

OUR FIELD GUIDE FOR THE NIGHTMARE

A tech dystopia is the nightmare we have been warned about for over a century. Increasingly, I turn to popular fiction to frame these cultural narratives; these texts are the diagnosticians of our current state.

The Diagnostician: Ray Bradbury

My first diagnostician is Ray Bradbury. We all remember Fahrenheit 451 for the fire.
We remember the woman who immolates herself and her home, a terrifying, final act to protect her 'thoughts, her very life', and her 'fundamental right to share knowledge'. She embodies the human drive to protect truth from an overt, raging fire. But Bradbury's deeper warning, the one for our time, was the 'subtler, perhaps even more terrifying, form of censorship'. What if the books are never burned? What if they are 'simply rewritten'? What if their facts are 'expertly distorted, until public understanding itself becomes malleable'? This is the world AI perfects. The censorship we face is that 'quiet, constant hum within our minds' of the algorithm, endlessly rewriting reality.

The Method: George Orwell

If Bradbury diagnosed the environment, George Orwell diagnosed the method. In 1984, the Ministry of Truth mandates that 'two plus two equals five'. How? Because the authority and the echo chamber reinforce it. The AI-driven echo chamber is this method, perfected. It algorithmically reinforces the lie until it becomes the only fact you see.

The Political Goal: Hannah Arendt

However, it is Hannah Arendt who provides the most terrifying and accurate diagnosis of the political goal. This is the sharpest point I can make today: the real horror of the deepfake is not to make you believe a lie. It is to make you believe nothing. It is the 'systematic destruction of a fact-based reality'. The goal is to create a populace so exhausted, so cynical, so disoriented that it 'believes everything and nothing'. A populace that has lost its shared, fact-based world also loses the ability to govern itself. It can 'only react, not reason'. The endgame is to erode our shared epistemology so that democratic argument itself becomes impossible.

PROVOCATIONS FOR TODAY'S SPEAKERS

So, this is the lens for today.
As the opening keynote speaker, I want to offer a provocation for the incredible ideas I see on the programme.

When you hear talks on 'digital harm' or 'copyrighting the self', I want you to ask: how can we 'copyright' a self that is infinitely reproducible? How do we define 'harm' when the goal is to destroy the concept of truth itself?

When you hear talks on 'critical literacy', ask: how do we teach students to critique a text that is designed to bypass the brain and hit the gut?

When you hear talks on 'bioethics' and the 'colonisation of reality', this is the heart of it. A deepfake is the ultimate 'colonisation of the self'. What 'relational ethics' can we possibly have with a 'synthetic self'?

I've shown you the 'why': the humanist elements in crisis. My colleague, Dr. Wasim Ahmed, is next. He will show you the 'how' and the 'where'. He will show you the maps of this new reality.

BE THE HUMAN FIREWALL

The solution to this humanist crisis will not be an algorithm. It is us. Our academic process of verification, critique, and rigorous doubt is the antidote. My final provocation is this: our job is no longer just to study this; it is to act on it. Our job is to be the 'Human Firewall', in our teaching, in our research, and in our public life.

Thank you.