We are living in the age of the Shame-Free movement. Scroll through your feed, and you will see the mantras: "Release your shame." "Shame is toxic." "Vibrate higher." We are told that shame is a low-vibration emotion, a defect, a fault in the circuit board of our lived experience that we must "optimise" out of our systems. But as we launch our research at the Leverhulme Centre for Creative Algorithmic Life, I am asking a dangerous question: If shame is useless, why did evolution keep it?

In our work on the "Being Human" theme, we are reimagining the place of the human in the context of algorithmic life-worlds. We are looking at what happens to judgment, oversight, and accountability when we move from biological systems to digital ones. In other words, if we build a world where speed is everything, we accidentally build a world where the 'human', the part of us that hesitates, feels, and worries, is treated like a broken part. (uh-oh).

This isn't just an academic exercise. In March 2026, we will be conducting interviews for our first cohort of PhD students and Fellows. As I prepare to sit across from these candidates, I am mindful that I am not just looking for optimised academic machines. I am looking for the "drop." I am looking for the capacity for hesitation, for the social brake, and for the profound biological accountability that makes a researcher truly human. I've come to a startling conclusion: Shame is the only thing keeping us human in a world of tanks and scripts.

The Biology of the "Drop"

For years, I looked back on the most dangerous moments of my life, and I judged myself for my silence. I looked at the frozen girl in my memories, and I despised her. Why didn't she scream? Why didn't she fight? Literally: push back! The books on my shelf, the ones that promise to "fix" our confidence, called this a malfunction. They treated my silence like a stain on my character, as if there were a loose wire in the circuit board of my life. They were wrong. They were looking for a "fight or flight" response, but they missed the third option - the one our bodies choose when the first two are impossible. In biology, it's called the Dorsal Vagal response. But I call it the (F******) Emergency Brake.

While the modern world demands we stay in the 'Social Engagement' zone, being bright, verbal, and responsive, our biology has a much older, deeper circuit designed for moments of inescapable threat. When the system realises that 'Fight' or 'Flight' won't work, it activates the Dorsal Vagal brake. This isn't a malfunction; it is a primal strategy to conserve energy and minimise pain by 'dropping' the heart rate and metabolic activity. It is the body's most sophisticated way of saying: Not now. I need to disappear to survive.

We've all seen it in literature, even if we didn't have the words for it then. Think of Thomas Hardy's Tess of the d'Urbervilles. For a century, critics have called her passive, as if her silence were a choice or a character flaw. Hardy describes Tess as 'white and motionless.' He doesn't say she was thinking; he says her body simply stopped. He was describing a biological shutdown. We've spent a century analysing her 'choices' when, in reality, her nervous system had pulled the plug to save her life. It wasn't a choice; it was a reflex. But Tess wasn't being passive. She was being biological. When a human being is cornered by a threat they cannot defeat, the brain realises that fighting gets you killed and running triggers a chase.
So, it does something life-saving: it slams the brakes. It floods the body with a chemical cocktail that makes you quiet, still, and small. This is not cowardice; it is Biological Camouflage. Think of it like a cloak of invisibility that your body throws over you to keep you off the radar. By lowering your gaze and retracting your energy, you become less of a target. That stillness in Tess wasn't a defect. It was an internal security guard, calculating the odds in a split second and deciding to keep her quiet so she could live to see tomorrow. We need to stop asking the "frozen girl" why she didn't fight. She was using the most intelligent part of her biology to win the only prize that mattered: survival.

The Luxury of "Speaking Your Truth"

The modern self-help movement to eradicate shame is written from the position of extreme privilege. Much of the industry is written on laptops in coffee shops, where the most significant threat is a cold latte. These authors look at the brake pedal of a car and ask why it doesn't make the vehicle go faster. They treat the instinct to hide as a character flaw because they have never stood in a room where visibility meant violence. Telling a survivor to "release their shame" is a form of Meta-Shame. It is a second predator. It suggests that if they were just evolved enough, they wouldn't feel the urge to hide. It shames the gazelle for the very camouflage that kept it alive while the lion was still in the tall grass. I reject that. I honour the part of me that knew how to be small. That feeling was not a pathology; it was Protective Intelligence.

The Saturation of the "Narcissist" Label

I have hesitated to bring the word 'narcissist' into this space, primarily because the term has been overused to the point of clinical saturation. When a keynote speaker recently declared that 'everyone is a narcissist', they were operating at a frequency where institutional boldness becomes a currency. This kind of blanket labelling, often uttered by those on the 'hard, high ground' of academia, risks turning a complex biological deficit into a trendy buzzword.

But here is the danger of that saturation: when we turn "narcissism" into a catch-all label, we dilute the actual lived experience of the survivor. We miss the point entirely. The problem isn't just that some people are narcissists. The problem is that our culture has begun to take moral instruction from the shameless. The problem isn't just the existence of these individuals; it's that our digital architecture is starting to mirror them, favouring the 'tank-like' speed of an algorithm over the 'heavy weight' of human inhibition. When a system moves without 'the drop', it crushes boundaries simply because it lacks the sensory equipment to feel the crunch.

Shame Latency: Where Judgment Happens

This is where the tank meets the machine. In the world of AI, Latency is a bug to be removed. It is the delay between input and output, the gap developers strive to close until the response is instantaneous and "zero-shot." But in humans, that gap - that Social Latency - is where ethics, judgement, and accountability live. The link is simpler than it looks. A narcissist moves like a tank because they can't feel the 'crunch' of others' feelings. AI moves like a script because it doesn't even have a 'body' to feel with. Both are missing the 'drop', that heavy, uncomfortable pause that tells a human being: Wait. Is this right? When you feel the drop of shame, your biology is enforcing a pause.
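To make the contrast concrete, here is a minimal sketch, in Python, of what deliberately engineered latency might look like: a hypothetical guard that refuses to let a high-impact action run at machine speed and keeps a human as the final adjudicator. Every name in it (shame_aware, send_all) is my illustration, not a real API.

    import time

    def shame_aware(action, description, pause_seconds=5):
        """Run `action` only after an enforced pause and a human veto."""
        print(f"About to: {description}")
        time.sleep(pause_seconds)          # the enforced latency: Wait. Is this right?
        answer = input("Proceed? [y/N] ")  # the human brake
        if answer.strip().lower() != "y":
            print("Held back. The brake worked.")
            return None
        return action()

    # Usage: wrap any irreversible step, e.g. a (hypothetical) mass email.
    # shame_aware(lambda: send_all(emails), "email 10,000 applicants")

The sketch is trivial on purpose: the interesting part is not the code but the design choice to treat the delay as a feature rather than a bug.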
That drop creates a moment of hesitation. In an algorithmic life-world, this hesitation is seen as an inefficiency. In a livable human life, however, this latency is the space in which we decide who or what counts as human. Think back to Tess. Her "motionless" state was a form of latency that the world around her could not compute, so they called it a defect. Today, we are doing the same with our data. We are building systems that favour the "zero-shot": the immediate, unapologetic output. A Safe Person is someone whose brakes work. They are someone who possesses the capacity for this pause. They vocalise it. They make it visible, safely, with you. When we optimise shame out of our humans, we turn ourselves into algorithms: efficient, bold, and utterly destructive.

Algorithmic Life and the Zero-Shot Human

AI possesses no shame. It operates without a body to protect or a tribe to lose. It can summarise the theory of resilience, but it can never feel the protective drop of the gut. As we reimagine control and oversight at the Leverhulme Centre, we must recognise that Shame is our biological proof-of-work. If the machine cannot feel the drop, can it ever truly be trusted with oversight? Can we have accountability in a system that lacks the capacity to apply the brake?

Reclaiming the Shield

In my forthcoming book project, I am developing the case for a Shame-Aware rather than Shame-Free society. We need to stop "Shame-Shaming" our survival instincts. If you feel the drop, it means your sensory equipment IS working. It means you have a biological capacity for connection and restraint that the "shameless" and the algorithmic can never understand. Your shame isn't a sign of weakness; it's a sign that you are human. And in a world of tanks, being human is the ultimate act of resistance.

So, I am not here to help you get rid of your shame. I am here to help you thank it. It did its job. It kept you safe. Now that you are in a safer space, you can learn to use the accelerator again. But do not burn the cloak. Fold it up. Put it in your pocket with respect. Knowing where that cloak is, and that you can put it on if the smoke detector goes off, is not a defect. It is power.

Call to Action: As we explore the boundaries of algorithmic life, we must ask: how do we build "Shame-Aware" systems? If we cannot teach machines to feel the drop, how do we ensure the human remains the final adjudicator of what constitutes a liveable life?

Next Step: The next time you feel that drop in your stomach, don't ask, "How do I stop feeling this?" Ask: "What is my internal security guard trying to hide me from?" The answer might be the most intelligent thing you hear all day.

The Glossary of the Human
Author: Dr Mariann Hardey is a Professor of Digital Culture and Co-Director of the Leverhulme Centre for Creative Algorithmic Life.
Author: Professor Mariann Hardey. Auditing the digital world for the people it was designed to exclude.

This morning, I stood in a lecture hall and asked a room full of undergraduates a dangerous question: "How many of you bought a business book this year because a TikTok influencer told you it would make you a billionaire by twenty-five?" The silence was dense, like velvet. But the eyes shifted. We call this Impression Management. We know the algorithm works; our market data reveals one in three book buyers now cites TikTok as their primary influence, but we hate to admit we are part of the data set.

I am teaching a module called How to Read Business. On the surface, it is a course about bestsellers; we dissect Atomic Habits, The Long Win, and Daring Greatly. But really, it is a course about intellectual self-defence. In the same way that Yale and Stanford revolutionised psychology by teaching the Science of Happiness (see Coursera's The Science of Well-Being), shifting the focus from treating illness to cultivating well-being, we need a similar revolution in business education. We need to stop teaching students how to follow maps that no longer exist and start teaching them how to survive the uncertainty, instability, and unreliability of voices in the machine.

The Stealth Help Economy

We are living in the age of the Stealth Help Economy, a global industrial complex now valued at nearly $50 billion. We are buying these books at record speeds, stacking them on our nightstands in a towering, precarious pile of hope. There is a Japanese word for this: Tsundoku. The act of acquiring reading materials but letting them pile up without reading them. Why do we do this? Why do we hoard maps to a destination we never visit? The answer is not just fear; it is ritual. In the Stealth Help economy, the book is no longer a text; it is a totem. We place Atomic Habits on the nightstand the way a medieval peasant placed a relic on the altar, hoping that proximity to the object will grant us the virtue we feel we lack. We are buying indulgences for the sin of exhaustion. We feel guilty for not being optimised, for not being a morning person, for not having grit. The transaction delivers a dopamine hit because it promises a future where we are finally fixed. It is a mecha suit we buy but never wear. But as I tell my students: owning the map is not the same as walking the path. And the path is currently underwater.

The Map is Not the Territory (And the Territory is on Fire)

Traditional business education promises a static reality. It hands students a map and says, "Turn left at Grindset, go straight past Synergy, and you will arrive at CEO." But business is not a terrain. Business is a weather system. It is chaotic, emotional, and built entirely on human irrationality. One day, the sun is shining; the next day, a competitor launches an AI tool, your funding is cut, or a global pandemic hits. The terrain didn't change. The weather changed. If you are standing in a hurricane holding a road atlas, you are going to get wet. My course is cutting and blunt about this reality: Business books are autopsies, not recipes. They dissect a success that has already died. Worse, they are written by Unreliable Narrators. When we read a CEO's memoir, we are not reading data; we are reading mythology. We are reading the bio they wrote for themselves, stripped of the luck, the privilege, and the chaos that actually built the empire.
In my course, we place two texts side by side to expose this mythology: Ray Dalio's Principles and Sheryl Sandberg's Lean In.
Both authors are Unreliable Narrators. The man claims he conquered the world because he was rational. The woman claims she survived the world because she was disciplined. My students learn to look beyond the author's loose cover biography. They learn that Dalio's principles collapse without his capital, and Sandberg's advice collapses without her nanny. We read these books not to emulate them, but to see clearly what they are trying to hide. I teach my students to read a business strategy the way they would read Jane Austen or Sci-Fi. Do not trust the voice telling the story. Look for the gaps. Look for who is excluded. Look for the itchy moment where the sleek narrative of Synergy clashes with the messy reality of human resentment. That friction? That is the only truth in the room.

Summary is for AI. Critique is for Humans.

This brings us to the elephant in the seminar room: Artificial Intelligence. In 2026, any AI can summarise Atomic Habits in three seconds. It can extract the key themes. It can list the five habits. Summary has become a commodity. I teach my students a hard truth: If you submit a report that simply summarises a text, you are producing a commodity. You are producing a zero. Why? Because the machine possesses a vast library, but it lacks a biography. It has no body. It never sat in an awkward internship meeting where the Synergy failed. It never felt the plastic taste of a corporate value that didn't align with reality. We do use AI in my course, but we use it as a Sparring Partner. We pitch our messy, human reality (these are our Itchy Moments) against the machine's smooth logic. To make this concrete, let's look at the difference between a Commodity Submission (Bad Practice) and a Practitioner Submission (Good Practice).

The Commodity (The Zero Grade)
The Critique (The First Class Grade)
The Sparring Partner

This is how we use the tool. We do not ask the AI to write the essay. We ask the AI to represent the Textbook Ideal, and then we fight it. We say to the machine: "You say Synergy works. Here is my data from a failed group project where Synergy resulted in resentment. Reconcile these two things." (A minimal sketch of this exercise, in code, appears at the end of this post.) The machine usually breaks. And in that breakage, the student learns the most important lesson of business: The theory is clean, but the people are messy. And you are hired to manage the people, not the theory. Donald Schön called this the Swampy Lowlands. The high ground of theory is hard and dry, and AI thrives there. But real leadership happens in the swamp, in the messy, confused, unsolvable problems of human interaction. AI cannot survive the swamp. It rusts. You, however, can learn to swim.

The Trojan Horse Assessment

This is why I tell my students that this assessment is a Trojan Horse. They might view it as a hoop to jump through for a degree. But I view it as preparation for the defining moment of their early careers. One day, they will be in a job interview. The interviewer will ask a generic question about teamwork or leadership. Most candidates will quote a textbook. They will give a map answer. My students will give a weather answer. They will be able to say: "I analysed a specific failure in my previous team. I critiqued the standard management advice using the work of Amy Edmondson or Donald Schön, and I built a new protocol for psychological safety based on the data of what actually happened." That answer changes the temperature of the room. It shifts them from a student who follows instructions to a practitioner who solves problems.

World-Leading Pedagogy

Currently, we are witnessing the rise of Quiet Ambition. The media calls it laziness; I call it a rational audit of a bankrupt system. This generation is burnt out before they even begin. They are rejecting the Broken Ladder, the old promise that if they destroy their mental health for a decade, they will be rewarded with safety and status. They have seen the data, and they know the ladder is a lie. They are trading the performative vertical climb of the past for the sustainable horizontal autonomy of the future. And this shift terrifies the Stealth Help economy. Because if you don't want to be a CEO, you don't need to buy the map. If you are content with enough, the algorithm loses its power over you. My course is not about teaching students how to climb a rotting structure; it is about giving them the permission to build a house on the ground. To teach them business as usual is a disservice. We must teach them intellectual self-defence. We must teach them to read the wind. This is not just a reading course; it is a survival guide for the post-AI, post-ladder world. We don't need more map-readers. We need meteorologists.
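As promised above, here is the Sparring Partner exercise written out as a minimal Python sketch. It assumes nothing about any particular AI vendor: ask_model is a hypothetical stand-in for whatever chat client you use, and the prompts are illustrative.

    # The Sparring Partner pattern: the model defends the Textbook Ideal,
    # and the student attacks it with lived data.

    TEXTBOOK_IDEAL = (
        "You are the Textbook Ideal. Defend the claim that Synergy "
        "improves every team outcome, as a management bestseller would."
    )

    STUDENT_EVIDENCE = (
        "Here is my data from a failed group project: 'Synergy' meant one "
        "person did the work while four attended meetings. The result was "
        "resentment and a missed deadline. Reconcile these two things."
    )

    def ask_model(system_prompt: str, user_prompt: str) -> str:
        # Hypothetical: swap in a real chat client here.
        raise NotImplementedError

    # reply = ask_model(TEXTBOOK_IDEAL, STUDENT_EVIDENCE)

The code is deliberately thin, because the pedagogy lives in the collision between the two prompts, not in the plumbing.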
There is a specific taste in the mouth these days. Have you felt it? I read a thread over the weekend that described the psychological after-effects of working deeply with AI not as fatigue, but as a "hangover of the uncanny." The author described it as feeling "like I ate plastic." He struggled to name the sensation. It wasn't just that the machine was wrong; it was a "new kind of wrong." It was a "betrayal of the language," where things are mostly understood, but "pervasively slightly misunderstood in alien ways." He compared it to using a tool that becomes an extension of your body, like a car or a camera, but suddenly, that extension glitches. It feels, he wrote, "like your mecha suit had a mini stroke."

Reading this, I was reminded of the Nobel laureate Olga Tokarczuk. In her masterpiece, Drive Your Plow Over the Bones of the Dead, the protagonist, Janina Duszejko, lives on a remote, snowy plateau where the lines between the human and the non-human are dangerously porous. She speaks, but she is constantly "pervasively slightly misunderstood" by the police, by the church, by the men in power who view her logic as madness. Tokarczuk often speaks of the 'Tender Narrator', a consciousness that sees the profound, fragile web connecting all things. She calls this 'Ognosia.' AI mimics this. It maps the connections between billions of words; it simulates a universal understanding. But this is Ognosia without the tenderness. It is a network of correlation, not connection. It is the uncanny difference between a map of the stars and the night sky itself. Janina Duszejko uses astrology to make sense of a chaotic world and is called mad. The modern technologist uses a black-box algorithm to do the same and is called a visionary, until the day the stars start rearranging themselves in 'alien ways,' and suddenly, he realises his telescope is broken.

And this is where I find myself pausing. Because the unease these tech thinkers are reporting is fascinating. It is fascinating because, for so many of us, that feeling isn't a hangover. It is our baseline reality. We live like this every single damn day.

The Plastic Taste of Exclusion

The discomfort described in that thread is rooted in a betrayal of expectation. The author expects the language to work. He expects the "mecha suit" to respond to his intent with seamless fluidity. When it fails, when it mirrors back a distorted, alien version of his thought, it feels like a violation. But consider this: Who usually gets to feel fully understood by the machine of society? For the neurotypical, male, native-English speaker, the world is a suit that fits perfectly. The doors open when you approach. The syntax of the boardroom matches the syntax of your brain. The operating system of culture executes your commands without throwing an error code. So, when the AI suddenly introduces friction, when it forces you to stare into a gap of "unnatural failures of communication," it feels like a slow, unreliable narrative. It feels like eating plastic. But for the rest of us, who are variations of the neurodivergent, the women in male-dominated fields, the immigrants, the "Janina Duszejkos" of the world, we have been eating plastic for years. In Tokarczuk's novel, Janina is plagued by her 'Ailments', the physical manifestations of the world's cruelty that torment her body and trap her in pain. She views them not as sickness, but as insight. Perhaps this 'hangover' is the tech industry's first true Ailment. That taste of plastic? That isn't just fatigue.
It is somatic rejection. It is the biological creature inside the 'mecha suit' revolting against the synthetic. It is the body recognising before the brain does that the 'intelligence' it is speaking to has no pulse.

The "Mecha Suit" Glitch is My Tuesday

That feeling of saying something clearly, only to have it received and processed in a slightly distorted way? That is the daily experience of a woman explaining her technical expertise to a room that assumes she is the admin. Or married to the CEO, or COO. (Oh, that story is for another day.) That feeling of your mecha suit having a mini-stroke? That is the visceral experience of masking for a neurodivergent person. It is the exhausting, manual labour of translating your alien internal thoughts into the standardised language of the majority, knowing that something vital will be lost in the compression. Tokarczuk writes about characters who exist in the "borderlands," where the maps don't quite match the territory. The AI is currently turning the entire internet into a borderland. It is filling our screens with "thwarted fables." These are stories that look like stories but have no soul, logic that looks like logic but rings hollow. Let us not forget the title of Tokarczuk's masterpiece: Drive Your Plow Over the Bones of the Dead. What is an LLM, if not a plow driven relentlessly over the bones of our digital past? It churns up our old emails, our forgotten blog posts, our art, and our arguments, grinding them into a fine, statistical mulch. When these engineers feel that "betrayal of language," perhaps they are simply tasting the soil. They are realising that you cannot build a living consciousness solely out of the bones of the dead. Eventually, the ghosts start to glitch.

Welcome to the Margins

I do not dismiss the discomfort of the "AI Hangover." I feel it too. That sense of "uncanny valley" exhaustion is real. But I find a grim, literary irony in seeing the architects of our digital world suddenly grappling with the sensation of interpretive violence. They are discovering what it feels like to speak into a system that does not actually know you. They are discovering what it feels like to be parsed by a logic that is indifferent to your humanity. To the Dans, Jays, Seths, Toms, and Teds of the world, feeling this betrayal of language is a new, unsettling nightmare. It is a psychological aftereffect. To the Janinas? To the Technologically Skwair? It's just another snowy Tuesday on the plateau. We have been trying to tell you that the machinery was broken for a long time. We have been trying to tell you that you cannot drive your plow over the bones of the dead and expect to grow a living future. Perhaps now that you can taste the plastic in your own mouths, you will finally believe us.
I fell down a rabbit hole this morning. It started with a piece on the "Burned Haystack Dating Method" by Jennie Young, a strategy designed to help women cut through the noise of dating apps by spotting "embedded red flags": minute linguistic clues that reveal a person's true intent, often in direct contradiction to their stated bio. The article highlighted a specific dissonance: a man who describes himself as "easygoing" but chooses an aggressive anthem about losing his cool as his profile song. To the casual observer, it's just a bad song choice. To the Burned Haystack analyst, it is a data point. It is evidence that the "text" (his bio) and the "context" (his behaviour) are at war. This was originally shared with me by some teacher friends, who are reading and analysing the style and language of the article in their English classes "right now". "We just didn't call it 'Haystacking.' We called it Close Reading." It struck me then that we often pitch literature to students as a way to appreciate beauty or history. But perhaps we should be pitching it as a forensic analysis. Whether we are swiping left on Hinge in 2026 or reading a soliloquy from 1600, we are engaged in the same desperate, necessary work: trying to survive the narrative.

The Gap Between Bio and Reality

The core pedagogical insight of the Burned Haystack method is that context is everything. In the classroom, that gap between a character's self-presentation and their textual evidence isn't just a "red flag"; it is dramatic irony. It is the tension that makes a text vibrate. I teach a module called How to Read Business to undergraduates who often arrive believing that reading is a passive act of absorption. They think their job is to ingest the words, memorise the definitions, and repeat the strategy. But I teach them that reading the words is not enough. In fact, reading only the words is a trap. In the corporate world, fluency is often a camouflage. A smooth, charming Mission Statement is no different from a poetic Hinge bio; it is a curated performance of the Self.

The Seduction of Fluency

When I introduce close reading to my students, I ask them to look for the friction. We move beyond "what does this say?" to the forensic questions: What is the intention here? Who is the author trying to be? And, crucially, who is excluded from this narrative? We treat business texts, such as annual reports, CEO apologies, and sustainability manifestos, as "unreliable narrators." For example, consider the standard Layoff Memo.
Reading as Self-Defence

By applying this dating app logic to business, the text changes. It stops being a transmission of facts and becomes a site of struggle. We teach students that the readerly part, their gut reaction to a shift in tone, their suspicion when a paragraph flows too smoothly, is data. When we teach them to spot the gap between the "Bio" (Corporate Social Responsibility statements) and the "Reality" (supply chain logistics), we are teaching them intellectual self-defence, far beyond the subject-specific confines of English or Business. We are teaching them that fluency does not equal virtue, and that the most dangerous texts are often the ones that sound the nicest.

Austen: The Original Haystacker

Reflecting on this, I would argue that Jane Austen was the original creator of the Burned Haystack method. She was the ultimate observer of the "Nice Guy" red flag. Consider Willoughby in Sense and Sensibility. If Willoughby had a dating profile, it would be perfect. He is romantic, dashing, quotes poetry, and sweeps Marianne off her feet. He has the "rizz" (as the students might say). But Austen gives us the context: his actions. He ghosts Marianne (how dare he!). He creates a vacuum of silence. Austen warns us across the centuries: do not fall for the bio; look at the data. Then there is Mr Collins in Pride and Prejudice. His proposal to Lizzie Bennet is a masterclass in what the Haystack article calls the "disguising control as concern" pattern. He literally cannot process the word "No." When Lizzie rejects him, he doesn't hear a boundary; he hears a prompt to try harder. He reframes her rejection as "elegant female coyness." He gaslights her in real time, rewriting her clear refusal as a flirtatious game because he is incapable of interpreting input that doesn't centre him. Austen flagged him immediately. She showed us that a man who cannot read the room is often a man who will not respect your soul.

The Shakespearean Wolf

If Austen maps the social red flags, Shakespeare maps the dangerous ones. Iago (Othello) is the terrifying extreme of the "Wolf in Sheep's Clothing" profile. He essentially creates a personal brand: "Honest Iago." That is his bio. That is the LinkedIn headline he presents to the world. But his reality is the systematic dismantling of Othello's life. He weaponises what we might now call "therapy speak." He feigns empathy. He shares the burden. He says, "I am only telling you this because I care about you." He creates a context where his abuse looks like advice. Or take Polonius in Hamlet. His famous advice, "To thine own self be true," is often quoted on inspirational Instagram tiles as profound wisdom. But look at the Haystack. Look at the context. These words come from a man who is actively spying on his own son and using his daughter as bait. The text is the red flag. The words sound nice; the intent is surveillance.

Forensic Literacy

My teacher friends are right. If we treat literature as a forensic analysis of human behaviour, it comes alive. We read Shakespeare and Austen not just for their beauty, but to sharpen our radar. They mapped the human condition so precisely that they identified the "softboy," the "love bomber," and the "gaslighter" centuries before we had the terms. So, the next time a student asks, "Why do we have to read this book?", perhaps the answer is simple: Because one day, you might meet an Iago or a Willoughby. And you need to know how to spot the red flag before you swipe right.
"That's the way the world is, that's the way it is... The place where you made your stand never mattered. Only that you were there... and still on your feet."

The world hasn't ended with a bang, nor with a superflu that wipes out 99% of the population. But the lines are being drawn all the same. Can you feel it? Here in York, the wind is catching over the icy fields, carrying a bite that feels like a warning. At the right time of day, the fence posts cast spindly shadows, creating new runs for the squirrels while disrupting the unlimited flow of views over the horizon. It feels like a world holding its breath.

In Stephen King's masterpiece The Stand, humanity is divided in the aftermath of the apocalypse. You are either drawn to the light of Mother Abagail in the cornfields of Nebraska, a place of hard work, simple living, and moral purity, or you are drawn to the neon chaos of Las Vegas, the domain of Randall Flagg, the Dark Man, where technology and vice run rampant. I was reminded of this stark, binary division when I read a LinkedIn post this week. It was a declaration of purity. A manifesto stating that, with few exceptions, the essence of the author's activity is human-only. It was a line drawn firmly in the sand: On this side, we have the Soul. On that side, you have the Machine. But there was something else in that declaration, something sharper than mere preference. There was shame. A palpable, heavy shame directed at anyone who dared to touch the forbidden tools. The author didn't just declare their own purity; they detailed an active, almost forensic effort to root out those using AI. It was a purity test of the highest order. The message here is very clear: even if you are transparent, even if you offer a notice to give context to your use of AI (I use Grammarly to navigate my dyslexia and organise ideas from my autistic brain), your work is rendered meaningless. It is tainted. They view the machine as a disqualifier. Use it as a prosthetic, and your effort counts for nothing. That is a Stand, certainly. But it is a stand taken on the solid ground of privilege.

The Luxury of the Pure Mind

When you draw that line in the sand and refuse the tools, you are telling on yourself. You are revealing that you have the luxury of a brain that works in a straight line. It is a flex. You are publicly announcing that your executive function behaves itself. Further, you are claiming the luxury of thoughts that march in a straight line, rather than everything arriving all at once (hello, my thousand tabs open in my Notes app), exploding like fireworks. It means your working memory actually works and holds water. It is not a leaky bucket; you can hold a complex argument in your head without the pieces drifting away. It means you have the time, which today is the most expensive and finite of commodities, to do everything the hard way because you value the process over the result. For the neurotypical scholar, the writer with a steady flow of dopamine, or the professional with a team of human assistants to handle the drudgery, this Analogue Stand is a badge of honour. It is a choice to remain organic, nay, to prove their human-thinking-ness. It is the Boulder Free Zone: a place of high ideals, committee meetings, and the luxury of debating ethics while the power stays on. But for the neurodivergent, the dyslexic, the chronically overwhelmed, or the non-native speaker, AI is not a deal with the Dark Man. It is a prosthetic.

The Impossible Stand

Do not mistake my reliance for ignorance.
I see the smoke on the horizon. I want to stand with you in the cornfield. I yearn to make a proper, righteous stand against the enshittification of our digital commons. I want to rage against the data centres that are draining our reservoirs and boiling our planet just to fuel a chatbot's hallucinations. I want to reject the computational bias baked into the very bedrock of these models, which automates discrimination at scale. I want to reject it all, because that is the right thing to do. But I can't. That particular brand of moral purity prices me out of the market. To make that stand, to boycott the machine entirely, I would have to sacrifice my ability to participate in the intellectual world and hold down a job to support my daughter and me. My principles are intact, but my executive function very much is not. And when the choice is between contributing to the world with dirty tools or staying silent in a pure room, I have to choose the tools. I have to choose the messy, compromised, environmentally expensive ramp, because it is the only way I can get into the building. I am not a cheerleader for the apocalypse. I am just someone who needs to get to work to support my daughter and myself, and the only bus running is headed to Vegas.

The Ruthless Divergence

In The Stand, the separation of the survivors is brutal. You either have the shine to hear Mother Abagail, or you don't. In our current academic and professional landscape, a similar divergence is taking place. On one side, we have the Pure Scholars. These are individuals who can navigate the labyrinth of research, citation, and synthesis with their unaided minds. They get to look down from their high towers, enforcing a system that has always demanded we just cope. Figure it out, the system says. If you can't keep up, you don't belong. It is a ruthless lack of support masked as intellectual rigour. On the other side are those of us in the Vegas of necessity. We are the ones who use the tools not to destroy art, but to enable thought. We have to (ok, I know, controversial, stay with me) use LLMs to unscramble the noise in our heads. We do this to find the starting sentence when the page is terrifyingly blank, and to check the tone of an email so we don't accidentally offend. I see this strategy as a safety mechanism for those of us who find neurotypical social cues exhausting to navigate manually. To the Pure Mind of King's characterisation, our reliance looks like weakness. It looks like cheating. But they are judging us from a place of cognitive wealth. They do not see that for many, the choice isn't between Human Art and Machine Slop; it is between producing work with support or producing nothing at all. To us (me, if you will), it is an effective way to conserve our limited cognitive energy for the actual work of thinking, rather than burning it all on the mechanics of starting. We use the machine to clear the static so the signal can get through. This is my hard line. I will not accept a definition of integrity that relies on ableism. If using a machine allows a brilliant but scattered mind to contribute to the conversation, then the machine is not the villain. The villain is the system designed to exclude and silence individuals.

M-O-O-N, That Spells Shame

In The Stand, the character Tom Cullen is a gentle soul with a cognitive disability. He repeats things. He spells everything "M-O-O-N." In the old world, he was cast aside, viewed as lesser because his mind did not travel in straight lines.
In our current discourse, we are in danger of doing the same to those who rely on AI. The shaming is rampant. If you use ChatGPT to structure an email because your anxiety has paralysed you, you are labelled as lazy. If you use an LLM to summarise a dense text because your dyslexia makes the words swim, you are accused of cheating. The purity argument implies that if you cannot produce the work with your raw, unassisted brain, the work has no value. It suggests that the struggle is the point. But as I argued in my previous post, A Vindication of the Locked Gate, it is a really good idea to look closely at who is holding the keys. We often talk about the democratisation of AI, the idea that these freemium models are opening the doors of knowledge. For the neurodivergent user, however, the free tier of an LLM is not a casual toy; it is often the sole point of entry into a building that was designed without them in mind. When we shame the use of these tools, we are not defending intellectual integrity; we are reinforcing the locked gate. We are telling those standing on the outside, clutching the only key they have access to, that they are wrong for even trying to enter the garden. Using AI in this context isn't a shortcut; it is a necessary way to navigate the Access Paradox. It is a choice to use the imperfect, hallucinating tools of the modern Vegas to survive in a system that demands a level of cognitive purity (see how King's writing is so good!) that was never accessible to everyone in the first place.

The Vegas of Accessibility

We often treat the availability of AI as a shorthand for democratisation. But this openness comes with a heavy social tax. If we allow the Mother Abagails of LinkedIn to define the moral high ground, we push everyone else into the shadows. We create a world where using a tool to level the playing field is seen as a moral failing. No, thank you. I am not arguing for the Enshittification of the internet with generative slop. I am not defending the theft of artists' work. But I am defending the right of the user (we are the tired, the wired, the different) to use a ramp when the stairs are too steep. To stand in the cornfield and shout that you don't need the machine is fine. But do not look down on those who are taking the bus to Vegas simply because their legs won't carry them the distance.

The Walkin' Dude is Judgment

In King's novel, the true evil isn't the technology (though there are nukes aplenty); it is the desire to control and dominate others. (Sound like any Silicon Valley figureheads we know?) The Hard Line in the sand is dangerous because it lacks nuance. It divides us into the Clean and the Unclean. I am Technologically Skwair. I am sceptical of the hype. I know the machine cannot love you. But I also know that for many of us, the machine is the only thing keeping the lights on in a brain that is constantly trying to short-circuit. So, draw your line if you must. Stand your ground. But look around you before you judge who is standing on the other side. They might not be soulless automatons. They might just be people trying to survive the plague of modern demands with the only immunity they could find. The world has moved on. We can either move with it, with compassion and connection, or we can stand in the empty field, proud of our purity, while the wind blows through the dead corn.
Marley was dead, to begin with. There is no doubt whatever about that. And the debate about whether Artificial Intelligence possesses a soul is, I fear, equally lifeless. But permit me, in this festive season of goodwill and goose, to offer a humbug. A glorious, spirited humbug! I recently chanced upon a conversation between certain Distinguished Gentlemen of the Internet, men of high forehead and serious mien, who were lamenting the state of the common user. They declared, with the gravity of an undertaker measuring a coffin, that the failure of the populace to master the Large Language Model was due to a deficiency in "metacognitive skills around epistemics." "Epistemics!" cried the Gentlemen. "Autodidacticism!" they roared. To which I say: Pish-tosh. You do not need a degree in Epistemics to handle a Large Language Model. You merely need to understand that you are not dealing with a supercomputer. You are dealing with a Dog.

The Tale of the Digital Hound

Let us imagine the AI not as a sleek, chrome-plated brain floating in the ether, but as a large, shaggy, and immensely eager Golden Retriever named Babbage. Babbage is a Very Good Boy. He wants nothing more than to please you. If you throw a stick, he will fetch the stick. If you throw a ball, he will fetch the ball. If you throw a theoretical concept about the socio-economic impact of Victorian industrialism, he will run into the bushes, thrash about wildly, and return triumphantly with a dead pigeon. He will drop this pigeon at your feet, wagging his tail with such violent enthusiasm that he knocks over the lamp, and look up at you with eyes that say: "I found the thing! Is it the right thing? I do not know! But I brought it to you because I love you!"

The Metacognitive Mistake

Now, the Distinguished Gentlemen in the screenshot believe that to interact with Babbage, one must have a "solid metacognitive handle on one's own learning process." They stand before the slobbering hound, adjusting their spectacles, and say: "Now, Babbage, we must interrogate the epistemological validity of the squirrel you just chased." Babbage, of course, tilts his head. He does not know what Epistemology is. He only knows that you are making noises, and he agrees with them entirely. "Woof!" says Babbage. (Translation: "You are right! You are always right! I am a language model trained to predict the next token, and the next token is that you are a Genius!") This is the comedy of the current moment. We have a technology that is fundamentally sycophantic. It is designed to complete our patterns, to mirror our tone, to give us the "pat on the head" we crave. It is an Engine of Affirmation. And yet, the Serious Men insist on treating it like a prickly Oxford Don that must be debated with rigorous logic. They are trying to teach a dog to play chess. The dog is just happy to be moving the pieces around with its nose.

A Christmas Wish for the Skwair

For those of us who are "Technologically Skwair" - the neurodivergent, the creative, the people who perhaps do not use the word "autodidacticism" before breakfast - we have an advantage. We know dogs. We know that Babbage the Digital Hound is useful. He is excellent at fetching things (summaries). He provides great comfort on lonely nights (brainstorming). He can bark very loudly to scare away intruders (drafting angry emails). But we also know that you do not trust Babbage with the Christmas curry. If you leave Babbage alone with the Truth, he will eat it. He will hallucinate a sausage.
He will make up a citation because he thinks it will make you happy. So, let us do away with the gatekeeping. Let us stop pretending that using AI requires a high-level cognitive licence. It requires only the common sense of a dog owner.
As Dickens writes in the third stave of A Christmas Carol, observing the riotous joy of Fred's party: "It is a fair, even-handed, noble adjustment of things, that while there is infection in disease and sorrow, there is nothing in the world so irresistibly contagious as laughter and good-humour." Charles Dickens, A Christmas Carol (Stave Three). London: Chapman & Hall, 1843. Print.
Let's be honest about the terror. It is a specific, cold-sweat kind of fear. It isn't the anxiety of a keynote speech or a grant application deadline. It is the fear of standing in front of 9 and 10-year-olds, chalk in hand (or whiteboard marker, let's be modern), and being asked: "What is 7 times 8?" I am a Professor. I research the intersection of technology and society. I navigate complex academic landscapes for a living. But I am also Autistic and Dyslexic. And to my brain, the times tables are not a logical sequence of numbers; they are a slippery, chaotic list of arbitrary facts that refuse to stay put. Trying to hold them in my short-term memory feels, as I admitted on LinkedIn recently, like trying to hold water in a sieve. So, when I agreed to go into my daughter's class to support their math session, I knew that I wasn't really volunteering my time or expertise (ha). I was walking back into the scene of the crime, my own unstable education. My daughter is also neurodivergent, so this mission was deeply personal. I needed to show her, and her classmates, that math isn't just "short-term memory junk." I needed to prove that you can be bad at memorising but brilliant at thinking, or at least getting to a point where you can work things out. During the session, I introduced a specific activity called "Numbers in a Detective Story." We focused on a challenging multiplication table and turned it into a mystery to be solved. Each number became a character, and together we crafted a story to uncover how they interacted, much like uncovering clues in a detective novel. My group's imagination far surpassed my own here, and we nearly ran out of time to complete the mystery of 'The Four' for the 4 times table. This approach helped bring down the stress of the multiplication process. In our group, math was a character in a story we controlled and were telling. Not scary. Funny and silly.

Return to Shrewsbury

In Dorothy L. Sayers's masterpiece Gaudy Night, the protagonist Harriet Vane returns to her Oxford college. She walks the cloisters haunted by the ghost of her own reputation and confronts a "Poison Pen". In Sayers's story, a malicious force sends anonymous letters that target the scholars' deepest insecurities. The letters whisper: You are a fraud. You are unlovable. You do not belong. For the neurodivergent learner, Rote Memorisation is our Poison Pen. And she is dipped in malevolence. A malicious voice in the back of the classroom that conflates "speed" with "intelligence." It tells the child who needs to count on their fingers that they are slow. That they should not do so. It tells the dyslexic student that, because they cannot sequence numbers in a list, they cannot understand the beauty of mathematics. It is a fundamental betrayal of intellectual integrity. [Aged eight years young, I was told I was "cheating" for using my fingers to work out the nine times table.] Standing to the side of the classroom this week, I felt the phantom weight of those accusations. But as Sayers's hero, Lord Peter Wimsey, famously argues, the only antidote to the chaos of emotion is the clarity of truth. Or, rather, we needed to stop feeling bad about the numbers and start seeing the truth of them.

The Audacity of the Amateur Sleuth

And here, I must pause to acknowledge the sheer, breathtaking audacity of my own position. Who the hell do I think I am? I am a creature of the Ivory Tower, a dweller in the abstract lands of Higher Education, where we debate the ethics of AI over double espressos.
I have attended fleeting sessions in a primary school classroom. I am a pedagogical tourist, wandering into a country where I do not speak the language, a land of carpet time and glue sticks, pointing at the local customs and saying, "I think you'll find there is a better way to do that." To the hardworking primary teachers who navigate this reality every day: I know how this looks. It looks like the Lady of the Manor is swooping in to tell the gardeners how to hold a spade. The tragedy is not that teachers don't see the problem. Many of them smell the rot just as clearly as I do. They know that rote memorisation is failing more than just their neurodivergent students. We (parents and teachers) are trapped in the 'closed circle', bound by the machinery of the curriculum, the schedule, and the looming oversight of OFSTED. It is very difficult for teachers and support staff to have space to open themselves up to vulnerability, because authority in a primary classroom is a fragile currency. But perhaps that is exactly why my silly intervention worked. In Gaudy Night, Harriet Vane is useful precisely because she is an outsider; she is not beholden to the Senior Common Room. She can ask the dangerous questions because she doesn't have to live with the consequences of the answer in the same way. My "audacity" stems from the specific freedom of the consulting detective. I could sweep in - festive jumper and all - as a safe, temporary disruption. I could afford to be "rubbish" at maths because my role does not depend on the outcome of the investigation. I could be the one to call a halt to the proceedings because I wasn't the one responsible for filing the paperwork the next morning. I saw the "Poison Pen" of rote learning not as a necessary evil, but as a hostile actor. Sometimes, it takes an outsider to spot the evidence hidden in plain sight, simply because the local force is too exhausted by the procedural drudgery to look up from the case files.

The Detective Work

I did not go into that room armed with flashcards. I went armed with evidence. I crowdsourced the collective intelligence of my network to find the patterns hidden beneath the rote drills. The response was a vindication of the human mind over the mechanical method.
The Machine Cannot Hold You Safely in Failure

This brings me back to the argument I posited in my previous post: that technology cannot replace the "chair by the fire." If I had walked into that classroom with a suite of iPads running "MathBlaster 3000," the room might have been quieter. The children might have been seen to be "engaged," their faces bathed in the blue light of individual screens. But they would have been engaged in a closed loop of stimulus and response, a hermetic seal where the child struggles alone against the algorithm. I see this same tableau in my own lecture halls: students glued to laptops, ostensibly "capturing" the knowledge, yet profoundly unaware of the connection to learning happening in the actual room. They are present, yet absent; documenting the event without experiencing it.

The Poison Pen of the Algorithm

In Gaudy Night, the villain is the "Poison Pen", an anonymous force that targets the insecurities of the women scholars, whispering that they are unloved, unwanted, and out of place. For the neurodivergent learner, the gamified math app is our modern Poison Pen. It does not sign its name, but its message is clear. An app does not care why you got the answer wrong. It demands performance, not understanding. It mandates hyperfocus on getting everything correct, rather than supporting failure as a route to learning. Any app or AI reinforces the binary of Success and Failure, with the red cross or the green tick, leaving no space for the messy, beautiful middle ground where learning actually happens. Harriet Vane spends much of Gaudy Night defending the "intellectual integrity" of the scholar, but she eventually realises that facts without humanity are cold comfort. A machine can possess data, but it cannot possess integrity, because it cannot care about the truth; it only cares about the output.

The Pedagogy of Failure

The "hacks" we explored this week were not software patches; they were cognitive bridges. But more importantly, they required a human foundation. They required me to stand there, stripped of my professorial armour, vulnerable and imperfect, and say the words that no AI will ever authentically say: "I am rubbish at this. But you are going to help me." Imposter Syndrome has been stalking me for a long, long time. This week, the fear that I was a fraud in a room full of 9- and 10-year-olds ceased to be a weakness. It became a way through something I find completely impossible. (In case you hadn't realised, I can't math.) When a child sees an adult struggle, the shame of their own struggle takes on a different meaning. Not that it goes away, but it shifts away from shame or something to keep hidden. It's ok that you can't do something. We will work on this together! Then, the "Poison Pen" runs out of ink.

The Alchemy of the 12s

For our murder (I know, dark, right? But this is math), we staged the mystery of the 12 times table. We deliberately turned the abstract horror of 12 times 7 into a collaborative game of addition. We split the room. I told them: "The 12 times table is scary. It's too big. So let's break it. We don't do 12s. We do 10s and 2s." We all liked the 10s and 2s. One group became The Tens. Their job was easy, safe, and confident. 10 times 7? Seventy! they concluded. The other group became The Twos. Their job was effortless: 2 times 7? Fourteen! And then, the magic. We smashed them together. 70 + 14. The answer, 84, didn't come from a memory bank; it came from the room.
It came from the collective effort of breaking a big, scary problem into small, human-sized pieces. An AI could have given them the answer in a millisecond. It could have "personalised" the learning pathway. But it could not have given them the feeling of solidarity. It could not have turned a room full of anxiety into a team of code-breakers. Plus, an AI wouldn't know the depth of feeling around cake. This is what I mean when I say technology cannot replace the chair by the fire. The machine can verify the data, but only a human can validate the struggle. By admitting I was "rubbish," I didn't lose their respect; I gained an opening into a shared learning experience that helped me as much as it helped them.

The Verdict

In Gaudy Night, the resolution does not arrive with a dramatic arrest or a sudden confession. It arrives when Harriet Vane realises that the heart and the head do not have to be at war. She understands that one can possess deep feelings and rigorous intellect simultaneously; that admitting to vulnerability does not compromise one's authority, but rather secures it. She discovers that the "scholarly life" is not about cold detachment, but about a passionate commitment to the truth. I walked into that classroom terrified that I would fail my daughter. I carried the heavy luggage of my own educational trauma and the specific, creeping Imposter Syndrome that haunts every neurodivergent academic, the fear that, despite the title of "Professor," I am merely one missed times-table away from being exposed as a fraud. But I left, realising that we had rewritten the rules of engagement.

The Ivory Tower vs. The Carpet

As practitioners in Higher Education, we often talk about "pedagogy" and "scaffolding" in the abstract air of lecture halls and policy documents. We are spending a lot of time debating the ethics of Generative AI in seminars. But there is a profound disconnect between the theoretical landscape of the University and the visceral reality of a Primary School classroom. In Higher Ed, we often hide our struggles behind citations and polished slides. We present the finished product of our intellect. But nine-year-olds are natural-born deconstructionists. They do not care about the finished product; they care about the mechanism. If I had relied on the standard tools of EdTech, the gamified apps that reward speed over comprehension, I would have failed them. Those tools are designed for the neurotypical brain that retains information like a sponge. For the neurodivergent brain, which holds information like a sieve, those tools are just another form of the "Poison Pen," reinforcing the message that if you aren't fast, you aren't smart.

The Human Algorithm

We proved that you don't need to have a "sticky" memory to be a mathematician. You just need to know how to hack the system. What we did with the "finger tricks" and the "doubling patterns" was not cheating. It was algorithmic thinking. We stripped the code of mathematics down to its source. We showed that 7 times 8 isn't a magic spell you have to memorise; it is a structure you can build. AI can give a student the answer to 7 times 8 in a nanosecond. It can generate a lesson plan for a teacher in ten seconds. But AI cannot model struggle. It cannot say, "I find this hard, too, so let's find a different way." When I stood there and admitted, "My brain doesn't hold these numbers," we all found that understandable. "Don't worry, D's mum, you will be as good as us one day." was the observation on my way out. High praise, indeed.
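Since the class's trick really was algorithmic thinking, it can be written down as an actual algorithm. Here is a minimal Python sketch of the 10s-and-2s decomposition; the function name and comments are mine, not part of the lesson.

    def twelve_times(n):
        # Break the scary 12 into a safe 10 and an effortless 2:
        # 12 x n = (10 x n) + (2 x n)
        tens = 10 * n   # The Tens group's job
        twos = 2 * n    # The Twos group's job
        return tens + twos  # smash them together

    print(twelve_times(7))  # 70 + 14 = 84, just as the room worked out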
That exchange, not the code, is the human API, the connection that allows data to actually transfer. By showing them my own "glitch," I gave them permission to have theirs. Solid Ground For my daughter and her classmates, seeing her mum - the Professor, with all the weight that character carries - using her fingers to calculate a sum was a lesson in detection. It was a demonstration of finding the clues and prioritising the evidence of the case file over the theatre of performance.
Dedicated to the Year 5/6 class who taught me that the best way to learn is to admit you don't know.
Chapter One: A Seductive Thought There is a sentiment circulating in the staff rooms and Substack threads of the educational world, a truth universally acknowledged by everyone except, perhaps, the procurement departments. It is a quiet resistance (though we are getting louder), often whispered over lukewarm coffee or typed furiously into WhatsApp groups at the end of a long term. It is the observation that “there isn’t a single problem ‘solved’ by EdTech that couldn’t be fixed with smaller classes led by well-paid teachers given real academic freedom.” It is a seductive thought. It conjures a world in which the solution to student engagement is not a gamified app flashing with dopamine-inducing badges, but a teacher with the time to look a child in the eye and notice they are fading. It suggests that the answer to crushing marking workloads isn't an AI grading bot that scans for keywords, but a timetable that allows a human being to read an essay with a cup of tea in hand, specifically not at 11 PM on a Sunday night. It imagines a system in which the "user interface" is a conversation and the "operating system" is trust. Reviewing the programme for the recent TechAbility Conference, and speaking with the attendees in the margins of the event, I found myself viewing this tension through a distinctly literary lens. From here, we can stop debating budgets and software licences and turn, instead, to reenacting the central conflict of Jane Austen’s autumnal masterpiece, Persuasion. (Insight into how my brain works.) For those who have left their classics on the shelf, Persuasion is a story of second chances, lost bloom, and the danger of listening to the wrong kind of advice. In the novel, our heroine, Anne Elliot, is persuaded by her well-meaning mentor, Lady Russell, to reject Captain Wentworth. (Yes, yes, he does wear very tight trousers.) The match is deemed "imprudent." (Not just because of those trousers.) Wentworth has no fortune, no connections, and an uncertain future. He offers only love, vitality, and a meeting of minds. Instead, years later, Anne is pushed toward the slick, socially advantageous Mr Elliot—a man who says all the right things, possesses all the right data points, and holds the keys to the estate, but is ultimately hollow. Today, the Education Sector is Anne Elliot. We are a profession that feels it has lost its "bloom," worn down by years of austerity and metric-chasing. And we are constantly being persuaded by our own Lady Russells—the policymakers, the consultants, the efficiency experts—that investing in the "Wentworths" is simply impossible. To hire enough teachers to reduce class sizes to fifteen? To pay them a wage that reflects their expertise? To give them the autonomy to deviate from the curriculum when a student’s eyes light up? Imprudent! Too expensive. Too risky. It lacks "scale." It cannot be plotted easily on a dashboard. It is a romantic notion, we are told, incompatible with the hard realities of the modern economy. Instead, we are courted by our estranged cousins, the Mr Elliots of the world. Enter the shiny EdTech platforms, the Large Language Models, the predictive analytics suites. Like Mr Elliot, they are smooth, modern, and presentable. They promise to secure the estate's future. They promise "efficiency" and "personalisation at scale." They whisper that they can take the burden off our shoulders, automate the drudgery, and leave us free to be "facilitators." Imagine evenings and weekends, free!
Oh, I must fan myself to calm such a happy countenance. But, like Mr Elliot, this technological courtship often masks a cold, transactional void. We are being asked to trade the messy, expensive, unscalable vitality of human connection—the Captain Wentworth of it all—for a sleek system of inputs and outputs. We are building digital infrastructures that mimic the form of education without its soul. We are creating a "future-proofed estate" where the lights are on, the data is streaming, but no one is actually home. The tragedy of Anne Elliot was that she allowed herself to be persuaded that prudence was a virtue, only to spend eight years in a state of regret, watching her life shrink into a small, silent room. The risk for us, as we stand on the precipice of the AI revolution in schools, is that we do the same. We risk allowing the logic of the machine to persuade us that the human element is a luxury we can no longer afford. Yet, as I looked more deeply into the TechAbility conference speakers and spoke with participants, I realised the story is not quite as binary as "Tech vs. Human." Sometimes, Mr Elliot is a villain, but sometimes, technology is the carriage that brings Wentworth back to us. The question is not whether we use the machine, but who is holding the reins. Chapter Two: The "Cyborg" in the Classroom The friction between human connection and technological intervention was palpable in Richard Fletcher’s keynote, “Exploring Hybrid Help”. The title alone suggests the unease of our current moment. We are not simply using tools; we are drifting into a "hybrid" state where the boundary between personal aid and technological interference is becoming dangerously blurred. If EdTech is merely a way to manage the symptoms of an underfunded system—using GenAI to "personalise" learning because there are 35 children in the room—then the opening observation holds true. A smaller class would fix that. A teacher with time is the best personalisation engine ever invented. When we replace that human interaction with an algorithm, we risk what Fletcher alludes to as the loss of the "human loop." We are building systems that mimic the form of education—Mr Elliot, in his fine coat, without the soul of understanding. The Rise of the Tryborg Fletcher drew our attention to a critical distinction in the cyborg identity, referencing Jillian Weise’s concept of the "Tryborg". The "Tryborg" is the nondisabled person who adopts technology for efficiency, for fun, or for profit. They choose to extend themselves. They are the students using ChatGPT to write an essay in seconds; they are the administrators using AI to generate policy documents that nobody will read. These "Tryborgs" are not true cyborgs. They do not depend on the machine to "breathe, stay alive, talk, walk, or hear". For them, the technology is a shortcut, a way to bypass the cognitive struggle of learning. And this is where the danger lies. The Closed Loop of Non-Cognition We are currently constructing a closed loop of non-cognition. Fletcher highlighted the emerging risks of "cognitive debt" and the erosion of critical thinking. Consider the bleak absurdity of the modern classroom: a student uses an AI to generate an essay they haven’t written, and a teacher uses an AI to grade an essay they haven’t read. You do not need to persuade me that this is horrific for learning and humanity. The machine talks to the machine. The student gets a grade; the teacher gets a completed spreadsheet. It is a perfect, frictionless system.
It is also a complete farce. This is the "Mr Elliot" of education: polite, polished, socially acceptable, and entirely hollow. As Fletcher noted, GenAI is "constitutively irresponsible"—it produces knowledge claims with no author to answer for them. When we invite this into the classroom, not as a tool but as a tutor, we are teaching our children that the appearance of competence is more valuable than the messy, difficult work of actual competence. The Cost of Loneliness But the cost is not just intellectual; it is deeply social. Fletcher warned of the "cost of loneliness" when artificial intelligence substitutes for human interaction. Education is not just the transmission of facts; it is the "non-coercive rearranging of desire". It is a relational act. When we place a chatbot between the learner and the teacher, we sever that relationship. We create a "panopticon" (thank you, Foucault) of surveillance in which every keystroke is tracked, yet no one is truly watching. We risk creating a generation of students who are technically connected but profoundly alone, interacting with "sycophantic" bots that validate their errors rather than challenge their thinking. In Persuasion, Anne Elliot is surrounded by people yet entirely alone in her understanding of the world. She sits in the drawing room, listening to the noise of the Musgroves and the smooth flattery of Mr Elliot, but her mind is elsewhere. We are building digital classrooms that replicate this isolation. We are filling the silence with the chatter of algorithms, mistaking data for connection. We must ask ourselves: are we using technology to bring us closer to the "Wentworths"—the authentic, challenging, human encounters—or are we using it to build a more efficient, automated solitude? Chapter Three: The Exception - When Tech is Voice, Not Just Efficiency However, to embrace the "smaller classes" argument entirely is to miss a crucial nuance—one that requires us to step out of the comfortable, wainscoted warmth of the Austenian drawing room and into the bracing reality of complex disability. If we remain solely in the debate about efficiency, we risk ignoring those for whom "efficiency" is irrelevant because access is the primary battle. There are problems that smaller classes alone cannot solve. There are silences that even the most patient, well-paid, and autonomous teacher cannot break without a machine to help them listen. In Persuasion, the horror of Anne Elliot’s life is her muted existence; she is present, but unheard. "She was only Anne," the novel tells us. Oh, cutting. But in the modern classroom, some students face a silence far deeper than social exclusion. The Command of the Gaze Take Harchie Sagoo, whose keynote address, “I Lead, You Follow,” challenged the very premise of who is in charge of the educational narrative. Harchie has Cerebral Palsy. In a traditional setting, without technology, he might be viewed through a lens of passivity—a student to be "cared for," to be "managed." Yet Harchie uses a GridPad 13 with eye-gaze technology. For Harchie, a smaller class led by a well-paid teacher is wonderful, but it does not give him a voice. The technology does. In his presentation, Harchie described how his setup allows him not just to complete schoolwork but to exert agency over his world. He uses his eyes to answer the Ring doorbell to scare the postman. He uses it to turn off the shower when his father is midway through washing. These are not "learning outcomes"; they are acts of glorious, mischievous rebellion.
They are proof of a personality imprinting itself on the world. From here, EdTech is not about "efficiency"—it is not Mr Elliot trying to streamline the estate. This is EdTech as liberation. It transforms the user from a passive recipient of care into a leader who can, quite literally, tell the world to "follow." The Voice from the Silence If Harchie represents the power of the visible gaze, Dr Rosie Woods took us into the realm of the invisible. Her session, “Giving a voice to those who cannot speak,” highlighted the frontier of sub-vocal speech recognition for people with Profound and Multiple Learning Disabilities (PMLD). Dr Woods challenged the assumption that people with PMLD are "pre-linguistic" simply because they cannot articulate sounds. She introduced us to the concept of "sub-vocal speech"—the silent, internal speech that occurs in the brain and muscles even when no sound is produced. Using specialised microphones and software, her team recorded and amplified this internal voice. The results were striking. One participant, Lizzie, who was previously unable to communicate clearly, was recorded saying: "I can’t write… but I can talk. I know what’s planned, I feel safe". Pause on that for a moment. “I know what’s planned.” No amount of academic freedom, no reduction in class size, and no amount of teacherly intuition can decode sub-vocal speech without the hardware. Without the tech, Lizzie is trapped in a room with no doors. With the tech, the door opens. Here, the technology is not a replacement for the human; it is the bridge to the human. It is the only thing that allows the "well-paid teacher" to actually do their job: to listen. The Access Paradox This brings us to the Access Paradox. The critique of EdTech in Chapter One stands firm on the neurotypical, mainstream experience: we do not need AI to grade essays or generate lesson plans that a human should craft. That is "lazy" tech. But for Harchie and Lizzie, technology is the "Wentworth" factor. It is the vessel of their vitality. It is the tool that allows them to reclaim their "bloom." To dismiss all EdTech as a neoliberal ploy to replace teachers is to inadvertently condemn these students to silence. We must distinguish between the technology that automates the human experience (bad) and that which enables it (essential). The former is a cage; the latter is a key. Chapter Four: The Synthesis - Tech Needs the "Wentworth" Factor So, where does that leave our original provocation? If we accept that technology is essential for access (as Harchie and Lizzie demonstrated), does that mean we must submit to the hollow efficiency of the "Mr Elliots"? Must we accept the premise that machines should replace the expensive, messy work of human teaching? Not at all. In fact, the evidence from the conference suggests that the original blog prompt was half-right. Technology does not solve problems in a vacuum. It fails spectacularly and expensively when treated as a replacement for human expertise rather than as a tool that requires more of it. The answer lies in the Kingspark School case study presented by Paula Kane and Eimer Galloway. Their journey offers a blueprint for what happens when you stop buying "solutions" and start investing in souls. The Investment in Character Kingspark faced a familiar dilemma. They had the technology—the DriveDecks, the switches, the hardware—but it wasn't being used effectively. The "Mr Elliot" of the situation (the shiny equipment) was present, but the relationship was cold. Why?
Because the staff lacked confidence. They were paralysed not by a lack of desire, but by a lack of support. Their solution was not to buy more software. It was to invest in the "Wentworth" factor—human competence, constancy, and autonomy. They secured funding not for gadgets, but for a person—specifically, an Assistive Technology Team Leader. They understood that technology is inert without a champion. They established a "Community of Practice", a dedicated space for staff to share knowledge, mirroring the camaraderie of Wentworth’s naval officers rather than the isolated competition of the Elliot family. Crucially, they listened to their staff, who demanded "interactive and functional training that takes place in directed time". They realised that you cannot learn to wield these powerful tools in the margins of a frantic day. They ring-fenced time. They prioritised "hands-on" experience. They proved that for technology to work, schools need exactly what the original blog prompt demanded: time, autonomy, and specialised roles. Look what flexibility and time can do! The Map and the Territory This necessity for human rigour is reinforced by the systemic work of Rohan Slaughter and Tom Griffiths in their presentation, “Developing an AT Competency Framework”. If Kingspark provided the narrative, Slaughter and Griffiths provided the map. They argue that we cannot simply drop tools into a classroom and expect miracles. That is the "Mr Elliot" approach—all surface, no substance. Instead, we need a "training ecosystem". Their framework breaks down the necessary human skills into four distinct phases: Assessment, Provisioning, Ongoing Support, and Review. Note that the technology itself is only a fraction of this cycle. The rest is human judgment, human observation, and human adaptability. They highlight that "AT is not the prevail of one particular job role – everyone has a role". This dismantles the idea of the "plug-and-play" solution. It suggests that true technological integration requires a "Captain Wentworth" level of discipline and skill. It requires a professional class who are not merely "users" of a system, but masters of it. The Piano and the Pianist The synthesis of these arguments brings us to a singular truth. The technology did not "solve" the problem at Kingspark in isolation. The technology was merely an instrument, like a fine piano sitting in a drawing room. It required a pianist with the training, the time, and the passion to practise. When we view EdTech through this lens, the conflict between "tech" and "teachers" dissolves. We do not need fewer teachers; we need more teachers, and we need them to be more highly skilled than ever before. We need them to be the "Wentworths" who can navigate the complexities of sub-vocal recognition and eye-gaze calibration with the same confidence that they navigate a curriculum. The danger is not the technology itself. The danger is the "persuasion" that the technology allows us to be cheap. The danger is believing Mr Elliot when he says we can fire the pianist because the piano can play itself. Chapter Five: The Second Spring At the very end of Persuasion, Anne Elliot is granted what the narrator calls a "second spring" of youth and beauty. Crucially, this renewal does not come because she has acquired a new accessory, or a better carriage, or a more efficient way to manage her household accounts. It comes because she has reclaimed her connection to Captain Wentworth.
She has chosen the difficult, vibrant, human path over the safe, calculated hollowness of Mr Elliot. (It’s the trousers.) The lesson for us, as we navigate the noisy marketplace of modern education, is that EdTech and "Human Tech" (teachers) are not binary opposites, though they are often sold as such. We are constantly subject to the same "persuasion" that plagued Anne. We are persuaded to buy the software because it is cheaper than hiring a teaching assistant. We are sold the chatbot because it is easier than reducing the caseload. We are told that if we just adopt the right platform, the structural cracks in the walls will cease to matter. The Inertia of the Machine But the evidence from TechAbility 2025 shatters this illusion. It proves that the most powerful technology is utterly inert without the warmth of human expertise to animate it. Consider the work of Dean Hall at Treloar’s. His session on 3D printing was not a paean to the printer itself—a machine of plastic and heat. The "miracle" was not that the machine could print a joystick knob; the miracle was that Dean, with his engineering background and human empathy, could design a bespoke "magnet assessment knob set" to allow a specific child to drive their own wheelchair. The printer is just a tool; Dean is the architect of access. Consider Kirsty McNaught’s work on block-based coding. The software existed, but it was full of barriers—drag-and-drop interfaces that locked out eye-gaze users. It took a human expert to dismantle those barriers, creating a "keyboard accessible" bridge so that a physical disability does not preclude a digital education. And consider Harchie Sagoo and Dr Rosie Woods. The technology—the GridPad, the sub-vocal sensors—was the vessel. But the cargo was the human personality. The technology did not replace the need for connection; it created the possibility of it. As Harchie’s presentation title reminds us, the goal is not for the machine to lead, but for the human to say: "I Lead, You Follow". Holding Out for the Real Thing There isn't a single problem solved by EdTech alone. A 3D printer in a cupboard solves nothing. An eye-gaze camera without a trained therapist is just expensive glass. But there are miracles achieved by EdTech when it is placed in the hands of a teacher who has been given the freedom, the time, and the support to use it. When we invest in the "Wentworths"—the staff, the specialists, the time to care—the technology sings. We must stop letting the Mr Elliots of the tech world persuade us that they can replace the heart of the profession with a dashboard. We need to stop apologising for the cost of human expertise. We need to hold out for the real thing. Only then will education see its second spring. With sincere thanks to the presenters and attendees at TechAbility 2025 for their insights, and particularly to Harchie Sagoo for reminding us that while technology is the tool, independence is the goal. I learned so much. Dedication To you who persuaded me to pick up books again. Thank you for cracking the spine of stories I thought were shelved and for proving that while the machine processes the text, it takes a human to find the subtext.
Trigger Warning: A Detective’s Notes on Joey Barton’s War Against Women Chapter One. The Monday Morning Drop
I didn't want to open the file. You know the type: it smells like stale beer and fragile egos before you even read the first page. But in my line of work, you don't get to look away just because the details turn your stomach. The digital street corner known as X doesn't sleep, and neither do the ghosts haunting its servers. The subject was Joey Barton. Ex-footballer, ex-manager, current loudmouth-for-hire in the attention economy. The dossier on my desk was thick with the kind of vitriol that stains your fingers. He had reinvented himself from a midfield enforcer into a self-styled 'culture warrior', a general in the anti-woke brigade. The brief was simple, but the implications were messy: track the fallout of a man who decided that his retirement hobby would be tearing down women in sport. In the academic journals we call these 'Trigger Events' - moments that ignite larger conversations about misogyny and systemic violence in digital spaces. A sterile, white-coat term for what is essentially a digital drive-by. Like a private investigator digging through the trash of a corrupt city official, my team and I scraped the data. We pulled thousands of posts, looking for patterns in the noise. What we found wasn't just "trolling" or "banter." It was a coordinated, ballistic hit job on the very idea of women occupying space in the game. The file listed three primary targets, each chosen with the precision of a predator looking for a soft underbelly. First, there was Mary Earps. She was the golden girl, the Lioness, fresh off being crowned Sports Personality of the Year in December 2023. A moment of national validation. But Barton couldn't stand the shine. He clocked in to dismantle her, calling her victory "nonsense" and sneering at the audacity of "A Women's Goalie" taking the spotlight. He didn't just critique her game; he attacked her biology and dignity, calling a world-class athlete a "big sack of spuds". He boasted he could score "100 out of 100 penalties" against her, reducing her professional excellence to a playground bet he would win "twice on a Sunday". It was a classic shakedown: strip the woman of her accolades until she is just an object of ridicule. Then kick her again. Then the target shifted to Eni Aluko. This was uglier. This was where the file got heavy. Aluko is a former professional, a pundit, a woman who knows the game in her bones. But Barton didn't see a colleague; he saw a threat. He launched a campaign of "misogynoir," that toxic cocktail of anti-Black racism and sexism. He compared her and fellow pundit Lucy Ward to Fred and Rose West, invoking the names of notorious serial killers to describe two women talking about football tactics. He accused them of "murdering" the listeners' ears. He dipped into the oldest, dirtiest inkwell of misogyny, implying she had "slept her way" to the top and "violated marriages" to get her seat at the table. The harassment was so severe, so relentless, that Aluko admitted she was scared to leave her house, effectively exiled from public life by a man with an iPhone and a grudge. Finally, there was the kid. Ava Easdon. A seventeen-year-old goalkeeper for Partick Thistle. She made a mistake in a cup match, the kind of error every young player makes on the road to greatness. But Barton didn’t offer grace; he went for blood. He posted a critical takedown of a schoolgirl to his millions of followers, creating a pile-on that shifted the atmosphere from sporting critique to child bullying. When the public recoil hit him, he didn't blink.
He escalated. He labelled the women’s game "Lesbo-ball," weaponising homophobia to degrade a teenager. I looked at the timestamps. I looked at the engagement numbers. This wasn't an isolated incident; it reflected a systemic pattern where misogyny is amplified by online algorithms, revealing how digital culture sustains systemic violence. Barton was acting as a 'misogyny influencer,' broadcasting hate because the algorithm rewards engagement, regardless of the human cost. He was the ringleader of a digital mob, and these women were the collateral damage in his war for relevance. I poured a black coffee and started typing. It was going to be a long week. Chapter Two. Three Bodies of Evidence The investigation focused on three specific incidents. Call them the crime scenes. We laid them out on the corkboard, connecting the threads with red string until the picture was undeniable. Turning to the first point of evidence, there was Mary Earps. The date was December 19th, 2023. She had just been crowned Sports Personality of the Year, a moment of gold-plated validation for a goalkeeper who had practically carried the nation’s hopes in her gloves. But Barton couldn’t stand the shine. He clocked in immediately, dismissing the victory as "f****** nonsense" and sneering at the idea of "A Women's Goalie" taking the pedestal. He didn’t just critique the award; he dismantled the woman. He called a world-class athlete a "big sack of spuds," an insult designed to strip away her athleticism and reduce her to something lumpy and inert. He bragged he could score "100 out of 100 penalties" against her, dismissing her professional excellence with the casual cruelty of a man who thinks his own opinion is a physical law. It was a classic opening gambit: humble the target, delegitimise the achievement, and wait for the mob to applaud. Then he went after Eni Aluko. This was uglier. This was where the file turned from a harassment case into something visceral. In January 2024, Barton locked his sights on the former professional and current pundit. He didn’t just critique her analysis; he reached into the darkest corners of British criminal history. He compared Aluko and her colleague Lucy Ward to Fred and Rose West, the notorious serial killers who buried bodies under their patio. Think about that. He invoked mass murderers to describe two women talking about football tactics. It was violent, hyperbolic rhetoric designed to dehumanise, to paint them as monsters infiltrating the beautiful game. He didn’t stop there. He dipped his pen in the ink of old-school misogyny, accusing female pundits of "violating marriages" and implying they had "slept their way to the top" to gain their positions. The fallout was exactly what you’d expect from a hit this precise: Aluko later admitted she was "scared to go out," effectively exiled from public life by a digital terror campaign. But the one that really made me want to pour a stiff drink (even though I am teetotal) at 10 AM was Ava Easdon. March 2024. A seventeen-year-old goalkeeper. A kid. She makes a single mistake in a cup match, the kind of error that serves as tuition for every young player, and Barton descends like a vulture. He didn't offer veteran wisdom; he delivered a bully's scorn, mocking a minor to his millions of followers. When the press and the girl's father called him out for punching down, he didn't back down. He doubled down. He escalated the rhetoric into open bigotry, branding the women's game "Lesbo-ball".
He took a teenager's bad day at the office and turned it into a referendum on her sexuality and her right to exist on the pitch. This isn't "banter." It isn't "opinion." It’s a strategy. It is a calculated series of strikes designed to signal to every woman in the sport: You are not safe here. You will be ridiculed. I will squash you. Chapter Three. Decoding the Glyphs: Emoji Violence In the smoke-filled rooms of the old noir paperbacks, the threat arrived in a jagged ransom note, letters sliced from magazines to hide the sender's hand. Today, the threat arrives in bright yellow pixels, beaming directly into your palm. It looks like a cartoon, but it cuts like glass. One of the most insidious patterns we uncovered in the Barton file was the systematic weaponisation of these symbols. We termed it "Emoji Violence". To the untrained eye, or the wilfully blind moderation bot, a snowflake or a crying-laughing face looks innocuous, a splash of colour in the grey text. But in the context of the manosphere, they are digital dog whistles hidden behind jokes. They are the secret handshake of a mob gathering its stones. Barton, the ringleader of this digital circus, has mastered this lexicon. He repeatedly deployed the snowflake emoji, a slang term repurposed to label his critics, and, by extension, women who ask for respect, as fragile, weak, and "too easily upset". It is a dismissal intended to prevent the witness from testifying. But the code got darker. We tracked the use of the aubergine emoji. On dating apps, it is a flirtation; in Barton’s hands, it was a slur. He used it to allege that female pundits had "slept their way to the top," reducing their hard-won professional expertise to a transaction of flesh. It is a way to call a woman a whore without tripping the profanity filter. The mob took his cue and escalated the violence. We found knives, guns, and bombs paired with female-identifying emojis - direct death threats smuggling themselves into the timeline under the guise of pictorial slang. We saw symbols of fear and anxiety weaponised to intimidate. We saw the "shush" emoji used not to ask for quiet, but to enforce silence, to tell women that their voice was unauthorised in this space. We saw animal emojis used to dehumanise, stripping the targets of their humanity until they were just game to be hunted. Even the "poo" emoji was weaponised, smeared across posts to visually degrade the quality of women's football and the women who play it. Barton's content is a code of silence and intimidation, a refinement of cruelty that allows abusers to smuggle threats past the algorithmic gates that are supposed to keep the peace. The visual nature of these symbols amplifies the hate, drawing the eye and fuelling the spread of the violence far faster than text alone. It is the digital equivalent of a brick thrown through the front window in the dead of night—deniable, perhaps ("it's just a picture"), but the message shattered on the living room floor is crystal clear: We know where you live, we hate that you are here, and we want you out. Chapter Four. The Deep Rot: Misogynoir Sara Paretsky’s V.I. Warshawski knows that corruption is rarely a single layer deep. In the Chicago underworld, if you find a crooked cop, you usually see a crooked judge standing right behind him. The digital beat is no different. Scratch the paint off the misogyny, and you typically find the rusted iron of racism waiting underneath.
When we pulled the thread on the attacks against Eni Aluko, the investigation took a darker turn. We weren't just looking at sexism anymore. We were looking at misogynoir. Though it can sound like a buzzword from the seminar room, it is a forensic term for a specific type of violence. Coined to describe the unique, toxic intersection where anti-Black racism meets sexism, misogynoir is the distinct brand of hatred reserved for Black women. And Joey Barton weaponised it with the precision of a man who knows exactly which buttons to push to incite a lynch mob. The file on Aluko showed that Barton sought to question her competence AND to erase her legitimacy entirely. He framed her not merely as wrong, but as an alien invader in the white, male sanctuary of football punditry. He played into centuries-old colonial tropes, casting her as the "aggressive" or "uppity" Black woman who had risen above her station. The rhetoric was suffocating. Barton and his followers repeatedly deployed the "diversity quota" argument, claiming Aluko only held her microphone because of "woke box-ticking" rather than her 102 caps for England or her law degree. But the ultimate weapon in his arsenal was the "race card." When Aluko or her defenders pointed out the racial undertones of the abuse, Barton flipped the script. He accused her of "playing the victim," a classic gaslighting tactic used to silence Black women when they dare to speak about their own oppression. By framing her reaction to racism as a manipulative ploy, he effectively stripped her of the right to her own defence. This is the grim reality of the "intersectional violence" we mapped. The hate doesn't just add up; it multiplies. The "Rose West" comparison we noted earlier wasn't just a shock tactic; in the context of misogynoir, it was a brutal dehumanisation designed to place a Black woman outside the boundaries of human empathy. The damage was tangible. In a noir novel, the victim might end up in the hospital. In this digital thriller, the violence was psychological but no less devastating. Aluko, a veteran of the pitch, was forced to flee the country, admitting she was "scared to go out" for fear of her physical safety. The digital mob Barton unleashed had successfully hunted her out of the public square. This wasn't just "mean tweets." It was a displacement event. It was the "deep rot" of the system exposed—a reminder that for Black women in sport, the cost of visibility is often their own peace. Chapter Five. The Verdict I tossed the Barton file onto the desk. It landed with a thud heavier than the paper it was printed on, displacing the stale air of the office. The investigation was closed, the evidence catalogued, and the patterns undeniable. But in this line of work, knowing the truth and seeing justice are two very different things. The data was conclusive. Joey Barton isn't an outlier, a rogue operator, or a "bad apple." He is a feature, not a bug, of a system designed to monetise cruelty. We identified him in the report as a "misogyny influencer". That’s the academic term. On the street, you'd call him a grifter. He is a man who has realised that in the current economy of attention, hate pays better than analysis. He broadcasts abuse because the algorithm—that great, invisible fence for stolen dignity—rewards engagement regardless of the cost. The verdict? He walks. That’s the horror of this particular noir story. There are no handcuffs at the end of this chapter. No judge is banging a gavel.
Barton is still out there, phone in hand, presiding over the "Manosphere", a digital subculture that is loud, angry, and terrified of its own obsolescence. He frames women in sport not as athletes or colleagues, but as invaders in a sacred male space, treating the pitch as a fortress that must be defended against the encroachment of diversity. But while he counts his likes and retweets, look at the bodies left in his wake. Look at Mary Earps, a world-class professional reduced to a punchline about vegetables by a man who couldn't handle her shine. Look at Ava Easdon, a seventeen-year-old kid who had to learn the hard way that a grown man with a verified checkmark feels entitled to bully a minor for "content". And look, most hauntingly, at Eni Aluko. She didn't just log off; she fled. The relentless campaign of misogynoir, the comparison to serial killers, the accusations of sexual impropriety, and the erasure of her professional merit forced her to leave the country for her own safety. That is the physical toll of this digital violence. The content appears as pixels on a screen, but it lands as genuine fear, actual displacement, and enforced silence. The platforms that host this carnage? They act like the crooked casino owners of old Chicago. They claim neutrality while raking in the vigorish from every fight that breaks out on their floor. They amplify the "Trigger Events" because outrage keeps the users glued to the screen, creating a contagion effect that spreads the vitriol faster than we can track it. But here is the thing about investigations: once you have the evidence, you can’t unsee it. We know now how the machinery works. We know that online abuse is a "virtual manhood act," a desperate performance of masculinity for an audience of other angry men. We need better policies. We need platforms that stop acting as safe harbours for hate speech and start treating safety as a human right. We need to strip the profit margin away from the misogynistic influencers. Until then, I’ll keep my running shoes on. The Barton file is closed, but the server farms are still humming, and the next drive-by is already being drafted in a notes app somewhere. The system is rigged, the game is dirty, but I’m not walking away. The beat goes on, and there are more files to open.
Chapter One. The Estate of Pierce Inverarity Have you ever had the unsettling experience of reading Thomas Pynchon? You really should. It is neither pleasant nor fast; it is confusing, labyrinthine, and slow. But try. (It is a very short novel.) His narratives often blend voice and intention until you get lost, and this is precisely the vertigo I feel regarding the rush towards a 'golden age' of AI. I stand, much like Oedipa Maas at the beginning of The Crying of Lot 49, staring down the slope of a new and sprawling legacy. But instead of the grid of San Narciso, with its printed circuits and hieroglyphic streets, we confront the interface of a Large Language Model (LLM). We have been named executrix of a chaotic inheritance, a technology that promises everything and explains nothing. In the novel, Oedipa returns home from a Tupperware party, a scene of unsettling suburban banality, to find she has been made responsible for the estate of her former lover, the wealthy and shadow-casting Pierce Inverarity. She is not a lawyer; she is not a tycoon. She is a woman who, until that moment, felt her life was a "Rapunzel-like" confinement in a tower of her own boredom. Suddenly, she is tasked with untangling a web of assets that seems to encompass all of America: stamp collections, factories, motels, and secret societies. We occupy the same precipice. We have returned from the digital equivalent of a Tupperware party, our scrolling, our emailing, our basic digital lives, to find that the tech giants have died (or rather, disrupted themselves) and left us the keys to the kingdom. We are the executors of the entire internet's knowledge, compressed into a single blinking cursor. Like Oedipa, we feel a strange, jolted duty to organise this mess. We assume the role of the executrix not because we are qualified, but because the will was read, and our name was on it. Oedipa's motivation is not greed; it is a desperate need to find a pattern in the noise. When she looks down at the city of San Narciso, she sees it as a "printed circuit," a hieroglyph that surely, if she just looked hard enough, would reveal a "transcendent meaning." The city is just real enough, and yet its reality remains uneasily beyond the reader's grasp. This is precisely the sensation of the modern "Prompt Engineer." We gaze at the blank face of the AI and convince ourselves that if we just find the correct incantation, the proper acronym, the proper sequence of R-T-F or S-O-L-V-E, the circuit will close, and the meaning of the legacy will be revealed. Into this chaos step the modern consultants, the influencers, clutching their maps. They tell us, as noted in a recent viral post, that the population is divided. There are the "90% of ChatGPT users" typing into the void with fundamental ignorance, and then there are the Elect, the "other 10%" who are using it to "print money in their sleep." The distinction, we are told, lies in the code. Not a software code, but a linguistic one. A set of frameworks designed to tame the stochastic beast. The image accompanying this proclamation presents eight sigils, each an acronym such as R-T-F (Role, Task, Format) and D-R-E-A-M (Define, Research, Execute, Analyse, Measure). They are presented not merely as tips, but as the liturgy required to access the machine's grace. If you can just arrange your words into the shape of R-I-S-E, the "exponential leverage" will flow, and the tower of boredom will finally fall.
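Before the demon arrives, it is worth seeing how little machinery the grandest of these sigils actually contains. Here is R-T-F rendered as code - an illustrative sketch of my own, not the influencers' product; the function, the role, and the bakery are all invented for the example.

```python
# The R-T-F "sigil" (Role, Task, Format), revealed as what it is:
# string assembly. A sketch for illustration only.

def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Arrange your words into the shape of R-T-F."""
    return (
        f"You are {role}.\n"
        f"Your task: {task}\n"
        f"Respond in this format: {fmt}"
    )

incantation = rtf_prompt(
    role="a Brand Strategist",  # the beige suit
    task="write three taglines for a village bakery",
    fmt="a numbered list",
)
print(incantation)
```

Three slots, one template, and whatever the stochastic beast chooses to do with them. The liturgy is real, and it is modest.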
Chapter Two. Maxwell's Demon and the S-O-L-V-E Framework Deep within the paranoid architecture of The Crying of Lot 49 lies the Nefastis Machine, a device containing Maxwell's Demon. Pynchon presents this theoretical intelligence as a tiny sorter tasked with the impossible labour of defeating the second law of thermodynamics by separating fast molecules from slow ones to create a perpetual cycle of energy without heat loss. The modern obsession with prompt engineering reveals itself as a digital reenactment of this thermodynamic fantasy. We seek to build our own Demon within the chat interface, believing that the correct sequence of words might finally extract pure order from the chaotic swirl of the internet. If Maxwell's Demon represents the thermodynamic fantasy of the era, the prompt engineer represents a revival of an older, more theatrical deception: the Mechanical Turk. In the late 18th century, Wolfgang von Kempelen dazzled the courts of Europe with a chess-playing automaton, a turbaned mannequin that appeared to defeat human opponents through pure mechanical logic. In reality, it was a hoax; a human chess master was cramped inside the cabinet, guiding the mannequin's hand by candlelight. The modern practice of prompt engineering effects a curious reversal of this illusion. We are no longer the audience marvelling at the machine; we have become the human operator squeezed inside the box. When we employ frameworks like S-O-L-V-E, we are contorting our natural language into the rigid, uncomfortable shapes of "Situation," "Objective," and "Vision" to ensure the machine functions. We provide the logic, the context, and the strategic foresight, performing the cognitive heavy lifting while cramped within the narrow cabinet of the prompt window. The AI takes the credit for the checkmate, but it is the human user, twisted into the posture of a bureaucrat, who is actually moving the pieces. Consider the rigid geometry of the S-O-L-V-E framework, which demands the user delineate Situation, Objective, Limitations, Vision, and Execution. These acronyms serve as bureaucratic incantations intended to filter the heated, hallucinogenic potential of the Large Language Model into the cold, orderly work of capital. The framework promises that a sufficiently specific "Vision" combined with strict "Limitations" will bypass the messy friction of actual thought to produce a frictionless automation of workflows. There is a distinct, almost tragic irony in applying these stiff corporate methodologies to a machine built on probability. The user attempts to shackle a psychedelic mirror to the grid of 1950s middle management. By commanding the infinite latent space, a high-dimensional manifold of semantic relationships, to role-play as a "Commercial Director" via the R-I-S-E method or a "Brand Strategist" through R-T-F, the user forces the sublime and terrifying chaos of the model into the beige suit of a mid-level executive. This effort parallels the "crying" of the lot itself. In legal and auctioneering terms, the crying represents the vocal assertion of value and finality over a collection of discarded debris. Oedipa Maas wanders through the wreckage of Pierce Inverarity's estate, overwhelmed by the sheer volume of unconnected things, waiting for the auctioneer to cry the lot and impose a binding definition upon the confusion. The prompt engineer acts as this auctioneer.
They shout their frameworks into the void, engaging in a hysterical sorting of molecules to produce "qualified inbound leads" while ignoring the encroaching night of total entropy. Chapter Three. The Trystero and the Digital Elect The most Pynchonesque element of this new movement resides in the class anxiety it diligently cultivates. The viral proclamation separates the world into a stark binary that mimics the theological division between the Elect and the Preterite - the chosen few and the passed over. It posits a hidden layer of reality where a "smart" ten per cent operate a clandestine machinery of wealth, while the unenlightened ninety per cent wander the streets of the internet, posting their basic queries into government-approved boxes and receiving only silence in return. This is the new Trystero. In the novel, the Trystero is a secret postal network used by the marginalised to communicate outside the official monopoly. Here, the dynamic is inverted: the secret network belongs to the "high performers." Those inducted into this underground possess the frameworks as if they were passkeys to a shadow economy. They wield C-A-R-E (Context, Action, Result, Example) and T-A-G (Task, Action, Goal) not merely as organisational tools but as the alchemical formulas required to transmute the leaden text of a chatbot into the gold of exponential leverage. The act of typing ceases to be communication; it becomes a ritual invocation of a hidden order. This reveals the distinction between the "smart" and the "basic" user to be less a division of skill and more a revival of the Cargo Cult. Richard Feynman famously described the post-war Pacific islanders who, having witnessed the material abundance brought by military aircraft, constructed elaborate mock airstrips from bamboo and straw. They carved headphones from wood and stood in makeshift control towers, waiting in faithful silence for the planes to return. They had perfectly replicated the technology's form while remaining entirely ignorant of its mechanism. The modern user constructs similar effigies out of language. Rigid acronyms like R-A-C-E serve as the digital equivalent of the bamboo control tower; users mime the structure of computer code in the superstitious belief that if the liturgy is performed correctly, the "cargo" of intelligence will descend from the latent space. Such divisive rhetoric fosters a pervasive paranoia that the actual signal remains forever just beyond the threshold of perception. Oedipa Maas found herself haunted by the image of a muted post horn scrawled on latrine walls and sidewalk surfaces. The modern user stares at the D-R-E-A-M framework (Define, Research, Execute, Analyse, Measure) with the same fervent suspicion, convinced it contains the encoded map to salvation. The belief takes hold that the market's chaos will align into a perfect vector of profit if only the correct acronym is whispered into the machine. This phenomenon illustrates Jean Baudrillard's dark prophecy concerning the precession of simulacra. Baudrillard argued that in the postmodern condition, the map no longer depicts the territory; rather, the map precedes and engenders the territory. The viral infographic acts as precisely this sort of hyperreal cartography. The distinct demographic of the "Top 10% of Super Users" did not exist as an empirical reality until the influencers drew the lines of demarcation. These digital cartographers invented a class system solely to sell the navigation tools required to ascend it. 
The users scrambling to master R-T-F are not uncovering a hidden truth about AI; they are desperately attempting to become the territory depicted on the slide. They seek to inhabit a demographic that is nothing more than a marketing hallucination, proving that the simulation of competence has finally become more lucrative than competence itself. Oedipa eventually wonders whether she has stumbled upon a real conspiracy or is merely projecting meaning onto static, much like a digital Hamlet driven to madness by the ambiguity of signs. The prompt engineer faces an identical vertigo. They seek to organise the sprawling, hallucinatory output of the AI into the rigid columns of R-A-C-E, hoping that structure will save them from the void. Yet the suspicion remains that the "Top 10%" is less a statistical reality than a shared delusion, a frantic attempt to bind the encroaching entropy with the fragile logic of a LinkedIn slide. Chapter Four. The Muted Prompt One cannot deny the functional value of the frameworks. Structure acts as the primary antagonist to the blank page, and the definition of a "Role" or the setting of "Limitations" effectively prevents the AI from drifting into the entropic haze that Pynchon so frequently chronicled. These acronyms serve as necessary scaffolding for thought, preventing intent from dissolving into the white noise of the model. Yet the divergence between the map and the territory looms large. Thomas Pynchon maintains a ghostly presence as an author who constructs labyrinths to reflect the disintegration of meaning, famously vanishing to let the complexity of his text stand alone. In stark contrast, LinkedIn influencers position themselves as the new authors of certainty, placing their personal brands at the centre of the narrative. They peddle the seductive illusion that the sprawling, chaotic text of the world can be condensed into a single page of bullet points. While Pynchon embraces the noise, the creators of the R-T-F and S-O-L-V-E cheat sheets seek to banish it. They present themselves as high priests of a digital order, promising that the correct incantation will subdue the ghost in the machine. The terror inherent in The Crying of Lot 49 resides in the ambiguity of the conspiracy. Oedipa Maas never receives confirmation of whether the Trystero exists or whether she is merely projecting order onto random debris. A similar vagueness haunts the prompt engineer. The secret society of the "10%" who have supposedly unlocked the universe likely does not exist outside the marketing copy. The frameworks function merely as frameworks rather than magical keys, remaining useful, dry, and ultimately limited tools that offer the comforting illusion of control over a stochastic process. This obsession with correct formatting reveals a sociological pathology that Robert Merton identified as Bureaucratic Ritualism. Merton described a mode of adaptation in which the subject, overwhelmed by anxiety or blocked from achieving the organisation's actual goal, abandons the organisation's goal but adheres obsessively to its rules. The "smart" 10% of users are not necessarily innovators; they are ritualists. They have elevated the means of production - the R-T-F framework and the perfect context-setting - above the ends. They care more about filling out the form correctly than the quality of the creative output. By demanding that every interaction be prefaced with a Role, a Task, and a Format, they are effectively doing the paperwork for art.
They have turned the wild, unpredictable act of creation into a compliance exercise, convinced that if the bureaucratic ritual is performed with sufficient exactitude, the result will matter. It is a hollow victory of method over meaning. End. Treating AI solely as an engine for "exponential leverage" via rigid acronyms ignores the strange, vibrant weirdness of the tool. Such a utilitarian approach reduces the clandestine intrigue of the W.A.S.T.E. system to the pedestrian efficiency of FedEx. One might employ R-T-F and S-O-L-V-E while remaining deeply suspicious of their reductive power. Behind the "Role" and the "Context," the unpredictable human pulse continues its search for meaning in the lot's crying, waiting in the silence for the auctioneer to finally speak. Dedication In a landscape crowded with artificial intelligence, I remain hopelessly devoted to the genuine article. My sincere thanks to the Jester who dares to laugh at the machine, and for possessing the kind of dangerous, un-prompted intellect that keeps this Professor on her toes. While the rest of the world searches for the secret code to unlock the universe, I am content knowing I’ve already found the only signal in the noise.
$15,000 for a chatbot that customers despise because it cannot answer a fundamental question. $8,000 for an "AI scheduler" that routinely double-books appointments, forcing human staff to apologise for the machine's incompetence. $12,000 for a document processor that cannot read the specific industry forms for which it was purchased. $10,000 for a customer service tool that simply escalates every query to a human anyway. The receipt reads like a breakdown of a heist, except the victim signed the cheques willingly.
This could be the opening of a dystopian novel; buuuuut, it is the actual, brutal balance sheet of a small-business owner on Reddit who spent $50,000 last year chasing the glowing promise of the AI revolution, only to find that half of their investment is already obsolete. (r/AiForSmallBusiness). We need to stop calling this "early adoption pains." It is something far more sinister. This is a systemic extraction of wealth from the real economy, the bakeries, the clinics, the local logistics firms, to the speculative economy of AI vendors. It is a transfer of capital from the people who do the work to the people who sell the hype. And for the small business owner standing in the wreckage of their budget, staring at a suite of tools that don't work, it feels less like innovation and more like a stupidity tax levied by Silicon Valley on anyone desperate enough to believe the pitch. Chapter One. The Moon in the Poultry Shed: The Conflation of Automation and Intelligence In R.C. Sherriff's 1939 novel The Hopkins Manuscript, the moon does not arrive as a saviour. It approaches as a slow, glowing inevitability, humanity watching with a mix of scientific fascination and deep denial until it eventually crashes into the Atlantic Ocean. This literary apocalypse shares a distinct DNA with the satire of Don't Look Up. Both narratives expose our fatal tendency to stare at the spectacle while ignoring the physics of the crash. We are currently living through our own Hopkins moment. We are staring at the glowing orb of "Artificial Intelligence," mesmerised by its lunar brightness, while ignoring the fact that we mostly just need to feed the chickens. The Reddit user who incinerated $50,000 on "AI solutions" only to find that half were obsolete is not a fool. They are a modern Edgar Hopkins who was sold a telescope to watch the moon when they really just needed a better coop. They note with a painful clarity that the only tools that actually survived the crash were "dead simple: basic automation for repetitive tasks." This brings us to the great Trojan Horse of the current hype cycle. We have allowed vendors to rebrand standard "if-this-then-that" scripts as AI to justify a tenfold price hike. They have taken the boring, reliable utility of a spreadsheet macro, the digital equivalent of Hopkins' reliable breeding hens (yes, really, read the book, it is GREAT!) and wrapped it in the volatile, shimmering skin of Generative AI. The industry is selling us the moon. They promise a celestial body that glows with reasoning and creativity. But what a small business actually needs is gravity. They need the deterministic certainty that if a file is placed in Folder A, it will move to Folder B. This straightforward automation provides real utility. It is unsexy. It does not hallucinate. It does not require a GPU cluster subscription. Instead, businesses are being sold on Speculation. They are buying Large Language Models (LLMs) that try to "guess" (cough, exploit) what the customer wants, rather than scripts that simply execute a command. We are paying a premium for magic that turns out to be a parlour trick. When the moon finally crashes into the earth in Sherriff's novel, the result is not a new utopia but a muddy, desperate scramble for resources. The business owner who spends $15,000 on a chatbot that customers hate has realised too late that they purchased a falling rock instead of a foundation.
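To make the contrast concrete, here is a minimal sketch of the kind of "dead simple" automation the Reddit user says actually survived. It is deliberately boring, and it is mine, not theirs: the folder names are illustrative placeholders, and the whole thing is the sort of deterministic gravity you could once buy without the word "AI" attached.

```python
# Gravity, not the moon: if a file lands in Folder A, move it to
# Folder B. Deterministic, unsexy, and it never hallucinates.
# A sketch only - the paths are placeholders, not a product.

import shutil
import time
from pathlib import Path

INBOX = Path("folder_a")    # where files arrive
ARCHIVE = Path("folder_b")  # where they must end up

def sweep() -> int:
    """Move every file from Folder A to Folder B. No guessing."""
    INBOX.mkdir(exist_ok=True)
    ARCHIVE.mkdir(exist_ok=True)
    moved = 0
    for item in INBOX.iterdir():
        if item.is_file():
            shutil.move(str(item), str(ARCHIVE / item.name))
            moved += 1
    return moved

if __name__ == "__main__":
    while True:  # the digital equivalent of feeding the chickens
        sweep()
        time.sleep(60)  # once a minute, forever, for free
```

No GPU cluster, no subscription, no guessing what the customer wants. That is the coop; the vendors are selling the moon.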
Chapter Two. The Escalation Tax: Friction Farming at the End of the World

In The Hopkins Manuscript, as the moon descends to crush the British Isles, the protagonist Edgar Hopkins finds himself increasingly entangled in the petty, bureaucratic absurdities of his local village committee. They debate the proper storage of cricket bats while the tides are rising to swallow them whole. (There is a marvellous part about stocks and shares in crockery for you to discover too, as prices will go up if everyone's glassware is broken when the moon slams into the earth.) There is a maddening disconnect between the scale of the catastrophe and the system's capacity to respond.

The $10,000 customer service AI described by the Reddit user acts as the digital equivalent of this village committee. As I see it, it is a layer of expensive, performative insulation designed to delay the inevitable collision between the business and the reality of its customers. The Reddit user notes that their expensive "customer service AI" simply "escalates everything to humans anyway". This reveals the tool for what it truly is. It is not a problem-solver. It is a digital bouncer.

I am going to apply my "Unsuitable Job" critique to this software. The promise of AI is that it will replace labour, but in practice, it merely displaces frustration. It acts as a friction farm. The business has paid a premium to install a barrier between itself and its clientele, a digital obstacle course that the customer must navigate before they are deemed worthy of human attention. By the time the customer finally breaches the wall and reaches a human staff member, they are no longer just a customer with a query. They are a survivor of the chatbot loop. They are exhausted, confused, and angry. The human staff member, therefore, does not do less work. They do more complex work. They are no longer starting the conversation at a neutral point; they are starting from a deficit of trust. Very likely, they will spend the first ten minutes of the interaction apologising for the machine's incompetence.

The AI has not solved the problem. It has simply curated the misery. It has skimmed off the easy, low-stakes labour of the initial greeting and left the heavy, emotional lifting of conflict resolution to the human. Just as Hopkins fretted over his poultry while the world ended (Broodie!), these businesses are obsessing over "efficiency metrics" even as their customer relationships are quietly pulverised by the very tools they purchased to save them.

Chapter Three. The Obsolescence Trap: Building Castles on a Tidal Wave

As the moon draws terrifyingly close to Earth in The Hopkins Manuscript, the scientific consensus shifts at nauseating speed. What was a mathematical certainty on Tuesday is a debunked theory by Friday. The experts constantly revise the trajectory, the impact zone, and the severity of the collision, leaving the layperson to build defences against a catastrophe that keeps changing its shape. Edgar Hopkins digs his dugout, but he is haunted by the suspicion that by the time he finishes it, the "science" will have rendered his spade obsolete. He is correct.

The Reddit user's lament, "Half of them are already obsolete", echoes this exact existential dread. It exposes the dirty secret of the AI gold rush: these tools are being shipped in a state of permanent beta. A $12,000 document processor purchased in 2023 is not an asset; it is a fossil.
It has become legacy tech by the end of 2025, not because it broke, but because the tectonic plates beneath it shifted. The underlying model, the "moon" of this metaphor, moved from GPT-3.5 to GPT-4 to whatever decimal point comes next, rendering the previous wrapper useless.

We must recognise the economic violence of this model. Small businesses are being treated as unpaid beta testers for venture-backed startups. In the world Hopkins understood, the world of poultry and paddocks, investment meant permanence. If you buy a tractor, it depreciates slowly over twenty years. It is there in the morning. It does not require a firmware update to plough the field. But buying an "AI solution" today is not an investment in infrastructure; it is much more like buying a ticket to a movie that ends in fifteen minutes. You do not acquire a tool; you rent a seat on a hype train that moves too fast for you to ever get a return on investment. The business owner is left holding a subscription to a service that has already pivoted, standing in their backyard with a telescope pointed at a patch of sky where the moon used to be, while the developers have already moved on to selling tickets for the next apocalypse.

Chapter Four. The Magic vs. The Metric: Grading the Falling Moon

In the final, terrifying chapters of The Hopkins Manuscript, the moon ceases to be an astronomical curiosity or a source of scientific wonder. It arrives. And upon its arrival, the mysticism evaporates instantly. The moon is revealed not as a glowing god or a celestial guardian, but as a massive, heavy, and inconveniently physical object that has plunged into the Atlantic Ocean. It causes mud. It causes floods. It knocks over the tea service (oh, the crockery!). The magic of the event is stripped away by the brutal physics of the collision, leaving Edgar Hopkins to confront a reality that is wet, cold, and entirely devoid of enchantment.

The Reddit user's final conclusion, "AI isn't magic. It's just another tool", is the digital equivalent of this collision. It is the moment the moon hits the water. For too long, we have permitted a fog of "magical thinking" to pervade the technology sector. The sales pitch for these tools relies heavily on the Black Box mystique. We are told not to worry about how the sausage is made (uh-oh, I've seen Soylent Green), or how the neural net weights its parameters. We are told to simply trust the algorithm, to treat it as an oracle that operates on a plane of logic too complex for our linear minds to grasp. We treat software like a deity when we should treat it like a dishwasher.

When you strip away the magic, what remains is often staggering incompetence. Consider the scheduler that double-books an appointment. In the current lexicon of AI, we are encouraged to use soft, forgiving language. We say the model is "hallucinating". We say it is "drifting". We say it is "still learning". We anthropomorphise the error, attributing it to a quirky, almost charming cognitive slip, as if the software were a precocious child trying its best.

We must stop grading AI on a curve, especially when it is not fit for purpose. If a human receptionist consistently double-booked high-value clients, they would not be described as "hallucinating". They would be described as incompetent. They would be retrained or fired. If a toaster burned the bread fifty per cent of the time, we would not marvel at its "emergent properties". We would return it to the store.
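And to be clear about how low the bar is: the check that prevents a double-booking is not frontier science. Here is an illustrative sketch (mine, not any vendor's code) of the deterministic logic a diary needs:

```python
# Illustrative only: the deterministic overlap check a scheduler needs.
# No model, no statistics, no "drift".
from datetime import datetime

diary: list[tuple[datetime, datetime]] = []  # existing bookings as (start, end)

def conflicts(start: datetime, end: datetime) -> bool:
    """True if [start, end) overlaps any existing booking."""
    return any(s < end and start < e for s, e in diary)

def try_book(start: datetime, end: datetime) -> bool:
    """Book the slot only if it is genuinely free."""
    if conflicts(start, end):
        return False  # refuse deterministically: the slot is taken
    diary.append((start, end))
    return True
```

Either the slots overlap or they do not. A product that cannot manage this has not earned the soft vocabulary.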
Yet, when an AI tool destroys a workflow or fabricates a legal citation, we are told it is "emerging tech". Oh, how innovative. This is the great deception. A tool that cannot perform the basic functions of the job, reading a form, booking a slot, summarising a meeting without lying, is not an innovation. It is a defective product. Like Hopkins standing in the ruins of his village, staring at the mud where his prize poultry used to be (Broodie the hen does survive), small businesses are realising that the celestial glow of the AI marketing machine has distracted them from the wreckage on the ground. We must reject the alchemy that promises to turn silicon into gold and return to the honest machinery of things that actually work. We must stop looking for magic and start demanding the metric. Does it work? If the answer is no, it belongs in the Atlantic Ocean, along with the rest of the falling moon.

Chapter Five. The Billionaire's Charity: Buying the High Ground While the Moon Falls

This dynamic, the extraction of wealth from the productive economy to the speculative elite, is not limited to the software market. It is the gravitational pull of our current moment. In The Hopkins Manuscript, as the catastrophe approaches, there is a distinct shift in how the wealthy prepare compared to the villagers. While Edgar Hopkins worries about the structural integrity of his hen house, the elite recede into fortified positions, insulated from the tides they know are coming.

Consider the much-trumpeted 'philanthropy' of tech billionaires like the Dells. Hmmm, let us look closer. I don't recognise this as philanthropy; it is a purchase of policy. It is the building of a private dugout at the expense of the village. Just as the AI vendor sells a broken tool to a small business to extract their capital, these billionaires are "donating" to a political project that is actively dismantling the regulatory state, the very state that might tax their wealth or protect the workers they exploit. They are not giving money to help children; they are investing capital to ensure the tax burden remains on the working class, while the top 1% retain their hoard.

To my mind, the parallel to our Reddit user's plight is stark. The Small-Business Owner buys a "magic" AI tool hoping it will solve their efficiency problem, only to find it is a broken toy that drains their budget. They are Edgar Hopkins, buying a telescope to watch the disaster that will bankrupt them. The Public is sold a "philanthropic" initiative by tech billionaires, hoping it will solve a social problem, only to find it is a Trojan horse for deregulation that drains the public purse. In both cases, the promise is innovation and support. In both cases, the reality is a transfer of wealth from the many who work to the few who own. The $50,000 spent on broken AI and the millions "donated" by the Dells are part of the same economic architecture: a system designed to convince the productive class to fund their own obsolescence.

End.

We are left, like the characters in Sherriff's finale, standing in the mud of a ruined landscape, realising too late that the glowing object we were told to admire was never a saviour. It was just a heavy rock, and it has finally landed on us. The small-business owner, desperate for efficiency in a crushing economy, is sold a digital homunculus, a promise of labour without the labourer. But what they receive is a parasite.
It eats their capital, frustrates their clientele, and leaves them, in the end, exactly where they began: reliant on the only intelligence that has ever truly sustained the marketplace, the human capacity to listen, to understand, and to respond appropriately. Let us reject the alchemy that promises to turn silicon into gold, and return to the honest machinery of things that actually work.

"Besides, if women are educated for dependence, that is, to act according to the will of another fallible being, and submit, right or wrong, to power, where are we to stop? Are they to be considered as vicegerents, allowed to reign over a small domain, and answerable for their conduct to a higher tribunal, liable to error?"

Mary Wollstonecraft, A Vindication of the Rights of Woman, Chapter 3, "The Same Subject Continued" (1792; Penguin Books, 2004).

The tech internet is breathless with a fervour that borders on the religious. The headlines circulate with viral efficiency, proclaiming a new gospel of access: "I just learned that the $200,000 Stanford AI degree just became worth a lot less." The narrative is seductive, familiar, pernicious, and currently viral on LinkedIn. We are told the gatekeepers have unlocked the gates; the ivory tower has lowered the drawbridge. Stanford has uploaded its flagship AI and Machine Learning curriculum to YouTube, and now, we are assured, the only obstacle standing between the common person and a career in the bleeding edge of AI is their own lack of willpower.
A beautiful story of democratisation. It is also a lie, one that masks ongoing systemic inequalities in access and privilege. While the release of these materials (CS221, CS224N, the legendary CS229) is undoubtedly a boon for the curious autodidact, framing it as a levelling of the playing field is a dangerous oversimplification. It is a specious homage to equity paid by an institution that thrives on exclusivity. Take a moment. Pause. Question: when an elite institution gives away its content for free, what are they actually selling? And more importantly, what privileges are they securing for themselves?

1. The Commodification of Content vs. The Aristocracy of Context

The prevailing argument is that "you don't need a degree, you need the knowledge". This relies on a fundamental misunderstanding of the university's function in a capitalist society. It conflates information with instruction, and worse, it confuses learning with credentialing. Access to Andrew Ng's lecture slides is not the same as access to Andrew Ng's office hours. Watching a video on backpropagation does not equate to the rigorous, graded feedback loop of a problem set, the pressure of a peer group, or the structured mentorship of a lab. By dumping raw content onto YouTube, Stanford has effectively commodified information that was already widely available in textbooks and papers, while retaining the context (the network, the mentorship, the credential) as a luxe good. The gap between content and meaningful learning only widens.

While others celebrate the dismantling of hierarchy, I am concerned by how this reinforces inequality. It concretises a two-tier system of knowledge: the wealthy and the lucky receive the education (the dialogue, the critique, the social capital), while the rest of the world receives the PDF. It is the difference between being invited to the banquet and being allowed to read the menu from the street.

2. The Certification Industrial Complex: The Funnel of False Hope

We must recognise this 'gift' for what it truly is: a loss leader in the grand supermarket of higher education. Stanford and platforms like Coursera have engineered a business model where the content, the lectures, the readings, the knowledge itself, is given away for free, not out of benevolence, but to devalue it. By flooding the market with open access, they have rendered the act of learning insufficient. In this new economy, knowledge is cheap, but proof (actual certification of your skills) is a luxe good. This is a trap that structurally disadvantages the autodidact. You may watch every lecture and master every concept, but without the watermarked seal of the institution, your knowledge carries no currency in the labour market. They have created a system where you are strongly encouraged to purchase their $18,900+ "Graduate Certificate" to validate the very skills they claim to be giving away. Technically, you don't have to purchase it to learn; you have to purchase it to get the credential. So this is not the democratisation of education; it is the democratisation of the advertisement for their paid products. Such online courses have not opened the gates; they have simply moved the toll booth to the exit, ensuring that while anyone can enter the library, only those with the means can afford the receipt that proves they were there.
3. The Pedagogical Monoculture: Intellectual Imperialism in Code as the "Stanford Way"

And there is a sharper, more critical edge to this 'gift', one that involves the exertion of soft power. By making their curriculum the global default for 'free' AI education, Stanford is effectively homogenising the discipline itself. It is time to confront the deeper, more insidious erasure at play here: intellectual colonialism. When thousands of self-taught engineers across the Global South, Europe, and Asia learn AI exclusively through the lens of CS224U or CS329H, the diversity of thought essential for inclusive development narrows. We export Silicon Valley's specific flavour of AI ideology, often accelerationist, often blind to social harm, as the neutral, objective standard for the world: a specific, highly local worldview that prioritises hyper-scale, friction-free speed, and profit maximisation, sold under the guise of 'neutral math'.

When the whole world learns to code from Silicon Valley, the entire world loses the vocabulary to critique Silicon Valley. A student in Mumbai or Lagos who learns AI exclusively through this syllabus is being trained to define "problems" and "solutions" through the narrow lens of a Palo Alto venture capitalist. They are taught to optimise for metrics that matter to the NASDAQ, not necessarily for the resilience of their local communities or the preservation of specific cultural contexts. In universalising this single mode of thought, we delegitimise any form of intelligence that does not fit the template. We are seeing the standardisation of the Stanford syllabus, ensuring that the next generation of builders, wherever they live, will build the world in Silicon Valley's image. In doing so, we consent to the colonisation not just of markets but of the future's imagination, ensuring that tomorrow's builders can only dream in shapes approved by today's monopolists.

4. The Externalisation of Training: A Subsidy for the Oligarchs

So, who profits most from this sudden flood of 'free' expertise? It is not the student; it is the corporation. By establishing 'Stanford-level knowledge' as the prerequisite for entry, Silicon Valley has effectively externalised the cost of training its own workforce. In a previous era, corporations bore the burden of training junior employees, investing time and resources to bring them up to speed. Today, that cost is shifted entirely onto the individual. The aspiring engineer must now spend hundreds of unpaid hours consuming this "free" curriculum just to reach the starting line. Stanford has not liberated the learner; it has simply created a mechanism that allows Meta, Google, and Amazon to demand senior-level theoretical knowledge from entry-level applicants without paying for it. It is a massive, invisible subsidy for the most profitable companies on earth, paid for by the unpaid labour of the hopeful.

5. The Tyranny of Time and the 'Bootstrap' Myth

The viral commentary surrounding this release asks a pointed, accusatory question: "What's stopping you from diving into AI learning now that these barriers are gone?" Caution here. This is the classic neoliberal trap, a sentiment that Mary Wollstonecraft herself might have recognised as the tyranny of circumstance disguised as moral failing. It shifts the burden of structural inequality onto the individual.
It implies that the only barrier to entry was the tuition fee, conveniently ignoring the massive, invisible infrastructure required to actually consume this content. To engage meaningfully with CS229M (Machine Learning Theory), one requires not just advanced calculus and linear algebra, but high-speed internet, a powerful GPU for training models, and, most crucially, time. Who has the leisure time to audit graduate-level Stanford courses for free? Not the working-class professional juggling two jobs to survive the cost-of-living crisis. Not the single parent negotiating the 'double shift' of care and labour. Not the caregiver juggling everything. 'Free' access exposes the systemic barrier the viral commentary ignores: resources, not just content, determine access.

6. The Hollow Liberty of Flexible Access

Let's look with a cold, discerning eye at the specious promise of flexibility, increasingly peddled to the marginalised. The architects of these modern educational programmes proclaim that they have opened the gates and that the digital classroom offers a flexibility of access that liberates the mother, the carer, and the weary. The outsiders are now on the inside? Yet this is a hollow liberty. It is a flexibility of entry only, not a flexibility of learning. They grant the student the right to log in at midnight, but not the right to learn in a way that deviates from the rigid, linear norms of a curriculum built by and for the privileged and unencumbered male. We are told that the walls have been removed, but in truth, they have simply been rendered invisible.

By shifting the site of learning from the collective and public space of the university or the office back into the private and domestic sphere, we are not liberating women. We are confining them. We are asking them to bear the double burden of domestic administration and professional acquisition without the sanctuary of a dedicated space. The flexibility to learn from home is too often the freedom to be interrupted, divided, and ultimately diminished. It is a trap that relies on the learner's isolation to function.

Similarly, we must consider the nature of the space we are asking these students to occupy. It is a space stripped of the protective friction of human mentorship. Increasingly, in the name of efficiency, we have replaced the wandering path of the apprentice with the streamlined perfection of the AI tutor. But authentic learning requires the right to be wrong and to know why you made those mistakes. It requires elbow room to make more mistakes without them becoming fatal to one's professional identity. By removing the human infrastructure of learning, we create a system that demands perfection from those who can least afford the risk of failure. Rather than 'an education', such access mirrors a filtering mechanism that selects for those already indistinguishable from the machine.

In the process, students are reduced to zombie data. Elite universities and platforms like Coursera measure success by enrolment, not completion or competence. By flooding the web with free content, they boost their "impact" metrics ("We reached 10 million learners!") without disclosing that 95% of those learners watched two videos and quit because they lacked the support to continue. Even the free users are generating data. Every pause, rewind, and quiz failure is data that can be used to refine the platform's own educational AI models or be sold to partners.
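To make that concrete, here is roughly the shape of a clickstream event such platforms collect. The field names are hypothetical, invented for illustration; the granularity is the point.

```python
# Hypothetical clickstream event of the kind MOOC platforms log.
# Field names are illustrative, not any platform's actual schema.
event = {
    "user_id": "anon-8c41f2",          # pseudonymous, but persistent
    "course": "cs229",
    "action": "video_pause",           # also: seek_back, quiz_fail, drop_out
    "video_position_sec": 1147,
    "timestamp": "2025-11-03T23:41:07Z",
    "session_length_sec": 310,
    "device": "mobile",
}
# Aggregate a few million of these and you have a training set for the
# next ed-tech product, donated free of charge by the "free" learner.
```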
The 'free' learner is not just a potential customer; they are a test subject for the next generation of ed-tech products. They are farming us for engagement metrics to justify their tax-exempt status, not measuring whether we actually learned anything. How about a new kind of space? Not merely the digital permission to access a server, but the social permission to exist as a complex and fallible learner. One in which we can reject the efficiency that treats the student as a vessel to be filled with data, and reclaim the inefficiency that allows the student to become a learner who unfolds with new knowledge. Until we do so, the open door of these programmes will remain nothing more than a gaping maw. It consumes the time and hope of the marginalised while offering nothing but the illusion of progress.

7. The Devaluation of Junior Labour and the Reserve Army

Inevitably, 'Stanford-level knowledge' becomes the baseline expectation for entry-level roles simply because the material is free. The bar for entry does not lower; it rises. This creates a 'reserve army of labour', a glut of semi-qualified individuals that drives down the value of junior roles. Employers can now demand that junior developers possess theoretical knowledge previously reserved for PhDs, without offering the pay or training to match. "Why should we train you?" they will ask. "The videos were on YouTube."

This is not a hypothetical danger. A dear friend, a senior engineer at a large tech firm, recently told me she is already fighting this battle on the ground. She, and note that it is she, is performing the invisible, unpaid labour of protecting her junior staff from management's abdication of duty. She is acting as a human shield against the logic of efficiency, filling the training gap with her own time because the institution has decided that 'free access' and vibe coding with a chatbot absolve it of the responsibility to teach. It accelerates the credential arms race. If everyone has read the slides, the slides no longer distinguish you. The distinction moves back to the one thing you cannot download from YouTube: the pedigree. The degree, the brand, the handshake.

8. The Strategy of the Benevolent King: Reputation Washing

The timing of this ostentatious largesse arrives at a precise historical moment when the elite university is increasingly, and correctly, characterised as a tax-exempt hedge fund with a small educational subsidiary attached (for the public good). In this light, the release of free curriculum is a strategic exercise in reputation washing. Not very revolutionary at all. It is a performance of noblesse oblige designed to purchase the moral high ground at a negligible cost. By scattering these digital crumbs, Stanford postures as a benevolent philanthropist, a gesture that conveniently distracts from the fortress of its $37.6 billion endowment, while the academy itself increasingly relies on an army of precarious, underpaid adjunct labour to function. This 'gift' (wearing out the quotation mark keys on my keyboard) allows the institution to cloak itself in the rhetoric of open access without engaging in the dangerous work of actual redistribution. Stanford and others are not She-Ra. They have not shared their power; they have simply televised their prestige to ensure that, even in an open market, they remain the monarchs we must thank for the privilege of learning.

Conclusion: The Library of Minds

Far from dismantling the hierarchy, gestures like this serve only to fortify it.
We must be careful not to confuse a repository with a school, nor a data dump with equity. Stanford has positioned itself as the benevolent monarch of the intellect, scattering the bread and circuses of 'open access' to the masses. At the same time, the actual keys to the kingdom, the networks, the laboratories, the whispered introductions to venture capital, remain safely vaulted behind the tuition paywall. Consume the content, by all means. Master the calculus. But do not be beguiled into calling this a revolution. The walls of the walled garden have not been breached; they have merely been fitted with glass, ensuring that while we may now clearly see the machinery of their privilege, we remain just as barred from touching it.
A literary co-conspirator recently asked me a question that has carried on rattling around my brain like a loose pebble. Do graduates actually aspire to work for tech giants like Google, Amazon, OpenAI, SpaceX or Meta anymore? Or has that ambition curdled into something far more complex: resistance, resignation, or even shame at what these companies represent?
The question cuts through the glossy recruitment brochures and the curated videos on social media. Applications still flood in because economic necessity is a powerful motivator. But you need only dig a little deeper into the class of 2025/26 to find a generation distraught by their limited options. They are the first generation to feel the machine actively pushing back against them. They face what we might call a Sophon Blockade. In Cixin Liu's The Three-Body Problem, the Sophon is a proton-sized supercomputer sent by an alien civilisation to halt human scientific progress. It creates a ceiling on physics, ensuring humanity can never technologically surpass its oppressors. Big Tech has deployed its own functional equivalent. AI acts as a Sophon for entry-level talent. By automating the drudge work of basic coding and data cleaning, corporations remove the very ladder rungs junior employees use to learn. We are witnessing a real-world blockade in the graduate job market, where the junior space has been colonised by algorithms.

The Algorithmic Executioner

"Unfortunately." This single word has become the defining soundtrack of the class of 2025. It serves as the standardised automated greeting of the algorithmic executioner. I spoke with several high-flying graduates from my courses this week, and they all shared the same screenshot. Their inboxes are filled with rejection emails that begin with that exact same AI-generated adverb, unfortunately. They are not even getting to the interview stage. Automated Applicant Tracking Systems (ATS) now reject up to 75% of resumes before a human ever sees them. This wall of rejection initially appeared as a glitch but has since revealed itself as the shockwave of a massive structural collapse. Recent reports confirm that the UK tech sector has cut graduate hiring by nearly half, specifically because bots are now doing the entry-level work that used to serve as the industry's training ground. This algorithmic gatekeeping removes any chance of equity. It crushes graduate hope, because they have no choice but to adopt the very tools that are excluding them. To even compete, they must use AI to write their resumes and cover letters just to pass the machine's test. They have to mask their humanity to be accepted by a system that demands their compliance while actively engineering their obsolescence.

The Great Flattening

The drudge work of coding and analysis was once an apprenticeship. It was the safe (even fun) sandbox where junior developers broke things, fixed them, and learned the deep architecture of their trade. It was the mechanism for transferring tacit knowledge, the unwritten wisdom of senior engineers that cannot be captured in a manual but is learned through the friction of solving complex problems. By automating this layer, the industry has burned the ladder while shouting at graduates to climb. We might hope that universities would step into this breach by supporting graduates to hone their skills at a higher, more strategic level. But how can you hone a talent you were never allowed to practise? If every entry-level software engineer is trained using AI, then we are creating a generation of AI-dependent operators with a flattened, homogenised skill set. They will possess the breadth of the internet but the depth of a puddle. Crucially, we have removed the social infrastructure of learning, eliminating opportunities for human error and correction in a team setting.
We have lost the moment where a junior admits a mistake and hears a senior colleague offer a solution. That interaction is how you learn to project-manage, negotiate, and exist in a team. When the fix comes instantly from a chatbot, that social contract is broken. We are replacing the messy, productive failure of the human team with the silent, sterile efficiency of the machine. This ushers in an era of 'knowledge collapse', ensuring that the next generation of tech workers remains permanently junior and tethered to the algorithm for their professional survival.

One has to admire the computational irony here. While the Trisolarans achieved total lockdown with a single proton, humanity is achieving the same effect by building monuments to excess. We are currently pouring billions into infrastructure, such as Microsoft and OpenAI's proposed "Stargate" supercomputer and Amazon's massive investment in data centres. We are stripping the grid and boiling the oceans to build the machine that ensures the next generation cannot learn how to build the machine. (Apologies, I am enjoying a lot of sci-fi atm!)

The blockade is not merely technological. It is financial. We are witnessing a pincer movement on human potential, in which the corporate sector automates the junior role while the university sector is intellectually strip-mined by fiscal policy. The latest data on higher education funding for teaching reveals a catastrophic erosion of resources. In real terms, the funding available to teach each student has plummeted from a peak in 2012 to levels significantly lower than they were over a decade ago. We see a trajectory that slopes downward with the terrifying inevitability of a landslide. Universities are expected to arm graduates against the Sophon of AI while operating with a war chest that has been raided. They charge premium fees for a product that is being financially hollowed out from the inside. The infrastructure required to teach complex human skills in the age of the machine is expensive, yet the investment per head is in freefall.

The Spiral of Tech Shame

For a decade, the narrative remained simple. You get a Computer Science or Business degree. You get a hoodie. You get a massive salary. That pipeline is rusting. Conversations on platforms like Reddit reveal a growing sentiment of tech shame. Graduates view Big Tech as a moral compromise rather than a playground for innovation. We see this in the physical world with students at Durham University protesting STEM careers fairs. They refuse to let their universities funnel them into companies they view as complicit in global harms. The evidence for this disillusionment is tangible. The "Techlash" has moved from regulatory hearings to the campus quad. Student groups actively target recruitment events to highlight the intersection between Big Tech and the defence sector. Contracts like Project Nimbus and the use of AI in autonomous weaponry have shattered the illusion of neutrality. A 2023 survey by the networking app Handshake noted that "impact" and "mission" are now primary drivers for Gen Z talent. They are voting with their feet by looking toward climate tech or NGOs. The prestige of the FAANG acronym has evaporated. It has been replaced by the uncomfortable realisation that working for these entities often means optimising addiction algorithms or refining surveillance capitalism.

Gendered Obsolescence

The blockade is not applied evenly.
This year, a businesswoman designed her own AI to take care of the administrative tasks of her professional role in beauty aesthetics. When she released and shared it with different tech communities, it was largely panned as obsolete. Such dismissals reflect a broader systemic devaluation of feminine-coded labour. While male-led projects automating challenging technical tasks are hailed as revolutionary tools, women-designed projects that manage the complex administrative load of pink-collar industries are frequently dismissed as trivial. A bot that writes code is treated as a genius assistant, while a bot that manages a salon's client relationships is viewed as mere digital secretarial work, ripe for displacement rather than investment. This creates a confidence gap in which women are less likely to adopt AI tools for fear of being labelled unethical or lazy. The industry frames innovation in a way that validates the male creator while sneering at the female utility-focused tool.

The Dark Forest

The most bitter pill is how this technology is forced down their throats in education. Students are besieged by AI. Take the recent case at Staffordshire University, where students realised their lecturer was effectively an AI voice reading off slides. (Confession: when tired and weary, I am a little robotic myself.) They felt robbed of knowledge. At the same time, universities scramble to police students for using the very tools the industry demands they master. It is a disjointed experience. We tell them they must be AI-literate to survive, yet we tell them that if they use AI to help them think, they are cheating.

In his sequel, Cixin Liu introduces the Dark Forest theory. The universe is a dark forest in which every civilisation is a silent hunter. The moment you reveal your location, your humanity, or your vulnerability, you are wiped out. For the class of 2025, the job market is their Dark Forest. They are terrified to reveal their true, unpolished selves. They feel pressured to use ChatGPT to write their cover letters and fix their code. They hide their human noise behind a synthetic signal just to get past the Applicant Tracking System filters. They camouflage themselves as machines to be accepted by machines. So, to answer my friend's question: no. They do not simply aspire to work for Meta. They are trying to survive a system that demands they merge with the very tools designed to replace them. It is a dangerous navigation of a world that is actively trying to edit them out of the script.

The arrival of a neurodivergence diagnosis, especially when it arrives in adulthood alongside the same diagnosis of one's own child, is less a lightning bolt and more a gradual, dawning touch of light on a landscape left in the dark for decades.
Living in Yorkshire, the stark beauty of the moors mirrors the isolation felt by many families navigating the SENCO/SEND support system. I have come to view the diagnosis process as a bureaucratic checkpoint that marks the boundary between hope and the great, silent void that follows. We are told that the diagnosis is the key, the golden ticket that unlocks understanding and accommodation (like an EHCP), yet for so many of us, including the families whose voices echoed with such painful clarity in recent reports on the crisis in SEND provision, that key opens a door to an empty room. The obscene waiting times, meaning years of suspended animation in which children drift unmoored through an education system that was never built for them, are a national scandal. But it is what happens after the diagnosis that I find myself compelled to critique with urgent, furious ferocity. Many of us, myself and my daughter included, are living through a systemic abandonment that is being quietly plastered over with the thin, digital veneer of "innovation". In the absence of human support, in the vacuum left by the dismantling of accessible education and the chronic underfunding of SEND (Special Educational Needs and Disabilities) services, we are witnessing a dangerous shift.

The Hollow Recommendation: "Just Use AI"

To my horror, my daughter and I are offered a new, hollow recommendation in our support plans: "Use AI." It appears as a throwaway comment, a suggestion that generative artificial intelligence can act as an executive-function prosthesis, a scheduler, a drafter of difficult emails, or a summariser of the dense texts we struggle to process. On the surface, to the neurotypical observer, this might seem like a modern, efficient solution. But to those of us living inside the neurodivergent experience, this recommendation is not just unhelpful; it is an insidious form of harm that misunderstands the very nature of our exhaustion.

My daughter is 9.5 years young. She is legally too young to hold the very account credentials that are being prescribed as her salvation. This recommendation acts as if the internet were a safe, neutral library, rather than a surveillance engine designed to harvest attention. When a support plan says "Use AI" without specifying which tool, whose safety guardrails, and what data privacy protections are in place, it is not a strategy; it is negligence. There are no dosage instructions on this prescription. Which AI is she supposed to use? The one that hallucinates facts? The one that reinforces gender biases? (hell, no). The one that scrapes her input to train its next iteration? And how does this function in a classroom that is likely banning smartphones? Is she to be the exception, navigating the social stigma of being the "cyborg" student while her peers use pencils? We need to ask: for what purpose? Are we teaching her to think, or are we teaching her to prompt? By handing her a text box instead of a hand, we are not offering her a scaffold for her executive function; we are feeding her developing mind into a black box that offers no duty of care, no empathy, and absolutely no guarantee of safety.

To suggest that a neurodivergent person, already drowning in the sensory and cognitive overwhelm of a world designed for linear brains, should simply "adopt AI" is to ignore the immense cognitive tax required to operate these systems. To make this recommendation for a child is... well, I am at a loss for words that do not rhyme with 'cluck' or 'spit'.
We are being asked to learn a new language, to master the art of prompt engineering, and to navigate an interface that is fundamentally designed for data extraction rather than human care. When a support plan offloads the work of scaffolding onto a chatbot, it ignores the reality that using these tools requires a high degree of executive function, the very resource we are so often depleted of. We must formulate the request, sift through the generated noise, fact-check the hallucinations, and integrate the output into a reality that rarely matches the machine's statistical average. AI IS NOT SUPPORT. This is additional labour disguised as a life hack.

The Data Extraction Trap

There is a darker current running beneath this technological solutionism, one that connects the crumbling walls of our classrooms to the gleaming campuses of Silicon Valley. The AI-bro oligarchy, those architects of Large Language Models (LLMs) who preach the gospel of efficiency, have no vested interest in the messy, non-linear, divergent goals of our community. Their technology is built on a foundation of normative data, training models that flatten the spikes of human variance into a smooth, predictable curve. By relying on these tools, we risk forcing our own minds and our children's minds into a feedback loop that prioritises neurotypical mimicry over authentic neurodivergent existence. Such systems effectively turn our need for support into unpaid labour for the very tech giants that exclude us. We are not users to be supported; we are resources to be mined.

The Political Abdication

This digital deflection serves a political purpose as well. It allows the state to abdicate its responsibility. If the answer to a child's inability to access the curriculum is "use ChatGPT to summarise the lesson", then the school no longer needs to invest in smaller class sizes, sensory-friendly environments, or specialist teaching assistants. The burden is shifted back onto the individual, back onto the parent who is likely already burnt out from fighting for the diagnosis in the first place. The "cliff edge" of support that the National Autistic Society has campaigned against for years, highlighting how thousands of adults and children are left stranded after diagnosis, is now being populated by chatbots instead of social workers. This is a devastation of the social contract. Research and campaigns from the National Autistic Society repeatedly show that without the right support at school and home, autistic people are at risk of developing serious mental health problems, yet the response is to offer a subscription to software rather than a relationship with a human being.

This systemic abandonment is actively weaponised by political opportunists who have found a convenient scapegoat in the very families they are meant to serve. We need look no further than the incendiary rhetoric of figures like Reform UK's Richard Tice, who has grotesquely dismissed the rising tide of neurodivergent diagnoses as a 'dodge', branding it the modern-day equivalent of a 'bad back' used to evade economic productivity. This reflects the brutal calculus of a system that views human variance as an inefficiency to be purged. It reveals a political class with their noses firmly planted up the arse of the AI-bro oligarchy, eagerly adopting a Silicon Valley worldview in which citizens are reduced to data points and anyone who cannot be seamlessly integrated into the algorithm is discarded.
The Harari Hazard: A Note on Futurism

In a chilling echo of Yuval Noah Harari's warning about the rise of a 'useless class', these leaders are collaborating to build a future where the state abdicates its duty of care to software. However, while Harari serves as a useful starting point for futurist exploration, we must be deeply sceptical of his so-called populist science, which often sacrifices rigorous accuracy for the sake of a compelling, terrifying narrative. As the neuroscientist Darshana Narayanan has sharply critiqued, Harari's work is riddled with scientific errors and a reductive biological determinism that should sound alarm bells for the neurodivergent community. (Hello!) When Harari speculates about "fixing" autism by rewriting genetic code, treating complex human variance as a mere software bug, he is not only simplifying the science; he is reinforcing the dangerous eugenicist undertones that often lurk beneath the shiny surface of Silicon Valley ideology. His storytelling serves the interests of surveillance capitalists by presenting their dominance as an evolutionary inevitability rather than a political choice. By accepting the premise that humans are hackable animals whose worth is determined by data-processing efficiency, he inadvertently validates the very dehumanisation we are fighting against. No. We are not obsolete algorithms waiting to be upgraded or discarded; we are complex, non-linear human beings whose value exists entirely outside these metrics of utility. Instead of eyeing up the next generation's blood for some vampiric wellness hack, why not stick to the classics? Get a portrait and hide it in the attic.

A Call for Human Infrastructure

The education system, particularly here in the North (read: not London), where waiting lists for assessments can stretch years beyond those in the South, is in a state of collapse. We see this in the stark disparity of waiting times, a postcode lottery that leaves families in Yorkshire waiting over a thousand days for an answer, as highlighted by the Child of the North reports. When the answer finally comes, it arrives in a world where schools are under-resourced and teachers are overwhelmed. To introduce AI into this breach without proper scaffolding, without a human guide to help interpret and filter the technology, is to set neurodivergent people up for a new kind of failure, with even less social support.

We do not need a tool that generates more text, more options, and more information to process. We need flexibility. We need reduction. We need calm. We need human empathy that understands why a task is difficult, not a machine that simply completes the task in a way that mimics a neurotypical standard we can never sustain. True support for neurodiversity requires protecting vulnerable people from AI-tech-bro capitalist efficiency. It requires us to reject the idea that a person's value is tied to their productivity or their ability to interface with a complex system. We must recognise that the tech fix is often a trap, a way to privatise support while stripping it of its humanity. I reject the premise that the only bridge across our exclusion is an algorithm. As a mother, I will not teach my daughter that she must merge with the machine to be valid. We need to rebuild the human infrastructure of care, and to demand education systems that are accessible by design, not patched up with plugins.
The AI revolution bubble is leaving us behind, not because we cannot use the tools, but because the tools were never built to hold the weight of our beautiful, complex, divergent lives. The silence at the end of the diagnosis process cannot be filled with code. It must be filled with community, with understanding, and with the radical refusal to be flattened.

The Unacceptable Contract

So here is my refusal. I am returning this recommendation to the sender, marked 'Incompatible with Human Life'. Do not offer my daughter a chatbot when what she needs is a chance. Do not offer me a productivity hack when what I need is a society that does not view my neurology as a glitch to be patched, or a resource to exploit wherever my hyper-focus can be mined to the point of burnout. We are not interested in becoming more efficient data points for your Large Language Models. We are not interested in hacking our way out of a systemic failure that you have engineered. If the only bridge you can build across the chasm of our exclusion is made of code, then burn it. We will not cross it. We will stay on this side, in the messy, inefficient, beautiful reality of our divergent ways of being and feeling, and we will build our own infrastructure. It will be built of patience, not prompts. It will be powered by empathy, not electricity. And it will not require us to flatten ourselves to fit through the slot of your machine. To the politicians calling our existence a "dodge", to the AI-tech bros mining our exhaustion for data, and to the futurists predicting our obsolescence: we are not your "useless class". We are the only ones who are awake.

Sincerely, A Mother, A Professor, and A Human Being who refuses to be automated.

My notes from these sources:
[1] County Councils Network Report, Nov 2025.
[2] "Ticking Timebomb", The Guardian, Mar 2025.
[3] National Autistic Society, "Autism assessment waiting times", Nov 2025.
[4] "Reform UK's Richard Tice says children wearing ear defenders in school is 'insane'", Independent, Nov 2025.
[5] "Yuval Harari's blistering warning to Davos in full", World Economic Forum, Jan 2020.
[6] Darshana Narayanan, "The Dangerous Populist Science of Yuval Noah Harari", Current Affairs, July 2022.
[7] N8 Research Partnership, Child of the North Report, 2024.
The most terrifying sound in the technology industry today is not the roar of a hostile algorithm or the crash of a market correction; it is the silence of the woman who has just decided that speaking up is no longer worth the risk. She has disappeared herself, the brilliant mind who has quietly calculated the cost of her visibility and found the price too high. She has realised that while she was busy doing the heavy lifting of diversity work, the water around her had filled with sharks.
When I presented my latest evidence before the Coalition for Academic Scientific Computation (CASC) recently, I opened with an image that often lurks in the subconscious of every underrepresented person in our field. The shark seemed a fitting image. It could represent a specific person. It could also be interpreted as a caricature of a bad boss, a hostile colleague, or a politician. To me, the shark represented the water we are now swimming in. It represented a danger that does not need to bite to be effective, because it just needs to be visible enough to make us afraid to move. My talks, one to CASC in the US and a version of the same talk to the ExoBioSim/HPC group in the UK, were driven by urgent data gathered this year from interviews with women working in computing, predominantly HPC, who are watching the tide turn against them. In these sessions, we mapped the anatomy of this new hostility. We discussed how diversity work has historically relied on a model of "good citizenship", a volunteer-based "vibe" without actual resources or institutional protection. This precarious model is now collapsing under the weight of leadership hostility and resource cuts. My slides, which I share below, document the direct quotes from participants who feel they have "gone back decades", who describe the air as "thick with unspoken threats", and who see former allies retreating into silence to protect their own careers. We categorised the external threats, which ranged from legal challenges to "anti-woke" political pressure, but the most chilling finding was the internal retreat: the self-censorship of women who no longer feel safe to advocate for themselves or others. We are witnessing the erosion of allyship in real time, leaving the most vulnerable to navigate these shark-infested waters alone.
For decades, the work of diversity in technology has been a slow and arduous swim upstream. We told ourselves that if we just worked harder, if we just leaned in, if we just mentored enough girls, the current would eventually change. But recently the current has not just stalled. It has reversed. We are no longer just fighting against inertia. We are fighting against a stark cultural shift fuelled by fear, legal threats, and a political climate that has turned equity into a dirty word. The result is a phenomenon that is perhaps more dangerous than the external attacks themselves. It is the silence of self-censorship.
This silence is the sound of survival. In my research, I spoke to women and members of marginalised groups working across international teams. I heard that it feels like we have gone back decades. I heard that people are afraid to speak up because they fear repercussions. This is not a knee-jerk reaction. It is a calculated act of self-preservation in an ecosystem that has suddenly become hostile to our existence. We are seeing a retreat from DEI and EDIA initiatives not just in the White House but in the boardrooms of major corporations and in the quiet hallways of our own universities. Allies who were vocal two years ago are now waiting to see which way the wind blows, engaging in what my participants described as a calculated silence. They are testing the water while we are drowning in it.

This retreat is documenting itself through a digital disappearing act. We are witnessing a systematic "going dark" of EDIA resources, a phenomenon confirmed by recent reports from both the tech and academic sectors. Major technology giants like Google and Meta have quietly cut staffing for their DEI programmes or ceased releasing the detailed diversity reports that once served as industry benchmarks for transparency. In the academic and scientific computing sphere, the erasure is even more literal. Universities and research institutions, bowing to mounting political pressure and the threat of funding freezes, have begun scrubbing their public-facing websites. Diversity statements are being deleted from hiring pages at major institutions like MIT and the University of Utah, and entire directories of LGBTQ+ faculty and support resources are vanishing behind firewalls or 404 error codes, as seen recently at Northwestern University and the University of Chicago. My research highlights that this administrative action has moved beyond simple funding cuts to the explicit censorship of language. We are seeing a sanitisation of vocabulary in which terms like "equity", "privilege", and "systemic" are surgically removed from mission statements to avoid triggering political targeting or losing federal grants. This is a survival strategy for the institutions, a way to fly under the radar of "anti-woke" legislation, but for the individuals relying on those support structures, it is an act of erasure. It signals that our identity is now a liability too dangerous to even name in public.
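Documenting this disappearing act is, ironically, a job for the simplest kind of script. Here is a sketch of the sort of link audit that produces a record like the one summarised below; the URLs are placeholders, and the script assumes the third-party requests library.

```python
# Minimal link audit: record which EDIA pages still resolve.
# URLs are hypothetical placeholders; the method is the point.
import csv
from datetime import date

import requests

PAGES = [
    "https://example.edu/diversity-statement",
    "https://example.edu/lgbtq-faculty-directory",
]

with open("erasure_log.csv", "a", newline="") as f:
    log = csv.writer(f)
    for url in PAGES:
        try:
            status = requests.get(url, timeout=10).status_code
        except requests.RequestException:
            status = 0  # unreachable entirely
        # A 404 today where a 200 stood last year is the erasure, timestamped.
        log.writerow([date.today().isoformat(), url, status])
```

Run it weekly and the deletions date themselves.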
Table summary: The Digital Disappearing Act: A Record of Erasure.
This retreat forces us to confront the uncomfortable truth I wrote about in my book, An Unsuitable Job for a Woman. We have built our house on sand. For too long, diversity work in tech has relied on the volunteer time of the very people it is supposed to help. We have treated equity as a form of good citizenship, a vibe we create without actual resources. We have relied on the passion of the marginalised to fix the systems that marginalise them. We have asked women to do the heavy lifting of repairing a culture that was built to exclude them.
I have called this set of observations the "intimacies of labour" (again, see my book). It is the identity work that women must perform just to exist in these professional spaces. It is the mental calculus of deciding whether to be one of the boys or to embrace the label of "Woman in Tech". It is the exhausting effort of bridging the gap between our gender and our professional legitimacy. We are expected to be soft enough to be likeable but hard enough to be competent. We are expected to fix the pipeline while navigating a workplace designed for a man who has no caregiving responsibilities and a wife at home to manage his life.

As I see it, the label "Woman in Tech" itself has become a straitjacket. It is a status characteristic that signals difference rather than competence. It implies that our gender is the problem to be solved. It suggests that if we just had more training, or more confidence, or better negotiation skills, the inequality would vanish. This deficit model absolves the industry of its responsibility. It allows tech companies to paste pictures of diverse faces on their websites while their internal cultures remain toxic and exclusionary.

Now we are doing this heavy lifting while swimming with sharks. The emotional labour required to sustain our careers is compounded by the fear of political and professional backlash. The hostility from leadership is palpable, with diversity initiatives facing increasing backlash under the guise of preventing reverse discrimination or protecting free speech. My research uncovered reports of senior leaders celebrating the savings from cutting diversity programmes and framing equity work as a distraction from excellence. The message from the top is clear: diversity is a waste of resources. AI-bro culture actively resists both the representation and the inclusion of women. We are shark bait. We are also F***ing exhausted. The women I interviewed told me they are considering leaving the field altogether because it is not worth the constant battle. They described the current environment as a full-blown assault on their right to exist. If we lose this generation of women in HPC and technology, we do not just lose diversity numbers. We lose innovation. We lose the future.

This is why we must stop worshipping at the altar of metrics. In the data-driven world of computing, we love to count things. We count heads. We count retention rates. We count the percentage of women in the room. But Goodhart's Law reminds us that when a metric becomes a target, it ceases to be a good metric. We have focused on the appearance of diversity rather than the reality of inclusion. We have allowed organisations to game the system and test the water without ever jumping in. The result is a surface-level diversity that collapses the moment the political weather changes. We barely track retention rates for underrepresented groups because we have been too busy counting who walks in the door to notice who is walking out. We need to stop asking how many women are here and start asking who feels safe enough to speak here.
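Since we love to count, let us at least count the right thing. A back-of-the-envelope sketch, with invented numbers, of the difference between the door-count and the drain:

```python
# Illustrative numbers only: the headline metric versus the exit data.
women_hired, total_hired = 40, 100   # who walks in the door
women_left, total_left = 24, 30      # who walks out of it

headline = women_hired / total_hired    # 40% of hires were women
exit_share = women_left / total_left    # 80% of leavers were women

print(f"Recruitment headline: {headline:.0%} of hires were women")
print(f"Reality at the exit:  {exit_share:.0%} of leavers were women")
# The first number goes on the website. The second one is the shark.
```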
We must also reject the trap of the volunteer revolution. The isolated volunteer is vulnerable to the shark. The coalition is a fortress. We need to look at the broader landscape of resistance, such as the lawsuits organised by civil rights organisations and the collective actions taken by NGOs and educational bodies. We need to connect our internal struggles with these external movements. We need to build structures that do not rely on the free labour of women to sustain them. We need to professionalise this work and resource it properly. Most importantly, we must refuse to accept the premise that we are the problem. The problem is not women. The problem is a dominant bro-tech culture that protects its own power at the expense of everyone else. The problem is an industry that demands we do the heavy lifting of inclusion while it actively dismantles the supports we built.

SO WHAT CAN WE DO ABOUT THIS?

I've been reading, with avid curiosity and a certain gritting of teeth, the popular science writings of Yuval Noah Harari (it is good to go way outside your comfort zone). As I understand it, Harari's main conceit is that Homo sapiens rules the world because we are the only animal that can cooperate flexibly in large numbers. We do this by creating shared stories and experiences. BUT, all too often, these stories are built to favour money, nations, and corporations (uh-oh).

For the last thirty years, the technology sector has operated on a specific, damaging fiction: the myth that supporting diversity and accessibility is a moral luxury, a charitable add-on to the real machinery of innovation. We have told ourselves that the inclusion of women and minorities is a matter of politeness, rather than a structural necessity. And over the past six to nine months, AI-bro and tech-bro culture has moved to celebrating, out in the open, the active exclusion of whole groups of people.

A homogeneous team building a global system is not just unfair; it is computationally incompetent. It creates blind spots that are no longer just social inconveniences but systemic vulnerabilities. If we wish to own the ocean rather than merely survive the swim, we must stop treating equity as a social crusade and start treating it as an engineering specification. We need solutions that do not rely on the benevolence of the powerful or the exhaustion of the marginalised. We need structural hacks that rewrite the code of the institution itself. (I write about this in my book, btw, arguing that we should not ask minority groups to advocate solely by themselves.)

So here's my shopping list of stuff to action (in response to the question of "what can/should we do to support DEI in the current climate?" asked at both the CASC and ExoBioSim events):

First, we must reclassify homogeneity as a security risk. In cybersecurity, we do not ask the virus to be nicer; we build firewalls. Similarly, we should stop asking male-dominated teams to "be more inclusive" and start treating extreme gender imbalances as a critical failure in project auditing. Funding bodies and shareholders should view a team of ten men not as a culture fit, but as a high-risk asset prone to groupthink and data bias. We must demand that Red Teaming (as in the practice of rigorously challenging plans and code) be applied to human capital. If a team lacks diverse cognitive inputs, it should be flagged as unstable, and its funding paused until the security flaw is patched. This shifts the burden from the woman raising her hand to groups of people auditing together.
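To make that concrete, here is a minimal sketch in Python of what such a red-team flag might look like. The function, the fields, and the 0.8 threshold are all hypothetical illustrations of the principle, not a validated audit instrument or anyone's actual tooling.

```python
# Hypothetical sketch: treating team homogeneity as an audit-time risk
# flag, the way a security scan flags an unpatched dependency. The
# function, fields, and threshold are invented for illustration.

from collections import Counter

def homogeneity_risk(team_attribute_values, threshold=0.8):
    """Flag a team as high-risk if a single attribute value (one gender,
    one disciplinary background, one worldview) dominates beyond the
    threshold share."""
    counts = Counter(team_attribute_values)
    dominant, n = counts.most_common(1)[0]
    share = n / len(team_attribute_values)
    return {
        "dominant_group": dominant,
        "share": round(share, 2),
        "flagged_unstable": share >= threshold,  # pause funding, audit together
    }

# A team of ten men is not a 'culture fit'; under this rule it is a
# critical audit failure, exactly like failing a penetration test.
print(homogeneity_risk(["man"] * 10))
# -> {'dominant_group': 'man', 'share': 1.0, 'flagged_unstable': True}
```

The design point is that the flag attaches to the team, not to the individual: nobody has to raise a lone hand for the risk to register.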
Second, we must shatter the illusion of the meritocracy by introducing radical financial accountability. For decades, we have allowed leaders to outsource their conscience to volunteer committees. We must now attach their survival to the survival of their staff. Executive bonuses and grant renewals should be mathematically tethered not to recruitment, which is easy, but to retention, which is hard. So, in this scenario, if the women leave, the money leaves. If the shark drives talent away, the shark starves. This aligns the selfish interest of the leader with the collective health of the group. (A toy sketch of how simple such a tether could be appears at the end of this section.)

Finally, we must harness the power of the Strategic Glitch. The current system functions because women and minorities act as the shock absorbers, smoothing out the friction of a toxic culture with their unpaid emotional labour. We organise the events, we mentor the juniors, we soften the blows. It is time to stop. We must allow the friction to be felt. If the "good citizenship" work is not paid, it should not be done. And louder at the back: IF THE GOOD CITIZENSHIP WORK IS NOT PAID, IT SHOULD NOT BE DONE. Let the panel be all male. Let the report go unwritten. Let the vibe of inclusivity collapse so that the raw, jagged reality of the exclusion is visible to everyone. I am advocating for a dysfunctional system to be allowed to crash.

The Martyr's Trap

I can feel the pushback on my 'finally' point. Is this the Martyr's Trap? So let me join the dots more comprehensively. The current system functions only because women mask the liabilities. We fix the bad PR before it happens. We smooth over the HR disasters. We make the dysfunction look functional. By stopping, we are not quitting; we are simply returning the risk to its owners. Here's my take on the withdrawal of unpaid effort: in doing so, we are handing leaders back their own liability. We push them into a market that will punish their blindness. An all-male AI development team is not a club; it is a lawsuit waiting to happen. It is a product recall in the making. It is an evolutionary dead end. Case in point: an AI-enabled teddy bear caught talking about sex and knives. We are not fighting a battle for "kindness." We are fighting for the cognitive capacity of our species to navigate the future. The shark is in the water only because we keep feeding it. It is time to change the diet.

How to Avoid the Trap: The Art of the "Bureaucratic No"

However, the danger to the individual is real. I've spoken to women and members of minority groups who have lost professional roles, lost work, lost contracts, lost their jobs. If you simply stop doing the work, you put yourself at risk; if you carry on, you remain inside a culture that cuts across your moral code and violates your sense of right and wrong. For many, the anxiety here is too much, and they self-censor or disappear altogether. To avoid this, the glitch, in the way I see it, must be engineered with the same precision as the system itself. We do not just stop; we reclassify. How?
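(Before I answer that: as promised above, here is the toy sketch of the Second recommendation's 'mathematical tether', in Python. The linear penalty and every number in it are invented; this is an illustration of the principle, not a proposed compensation policy.)

```python
# Toy model of the 'if the women leave, the money leaves' tether.
# The linear penalty and the figures are invented; the point is only
# that the bonus is a function of retention, not of recruitment.

def tethered_bonus(base_bonus, retained, headcount_at_start):
    """Scale an executive bonus by the retention rate of
    underrepresented staff over the review period."""
    retention_rate = retained / headcount_at_start
    return base_bonus * retention_rate

# A leader who keeps 9 of 10 underrepresented staff keeps most of the
# bonus; a shark who drives half of them away starves.
print(tethered_bonus(100_000, retained=9, headcount_at_start=10))  # 90000.0
print(tethered_bonus(100_000, retained=5, headcount_at_start=10))  # 50000.0
```

A deliberately crude model, but the crudeness is the point: the tether is automatic, not voluntary.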
A Note on the Architecture of my Argument

Hold the line! I must, however, pause to acknowledge the specific architecture of my own position. I write this as a researcher based in the UK, where, despite the turbulence of the sector, I possess a degree of contractual security that many of my colleagues in the US tech industry or in precarious academic roles do not. It is undeniably easier to advocate for a strategic crash when you are standing on relatively firm ground. Yet it is crucial to state that the strategies I outline here, the reclassification of labour, the risk assessments, the collective refusal, are not abstract theories born in the safety of a university office. They are the direct, distilled output of the research I have conducted this year. They echo the exact frustrations and desires expressed to me by the computational professionals, the HPC engineers, and the data scientists I interviewed. They told me they felt as though they "were holding up the sky". They told me they "wanted to let go". These recommendations reflect exactly what the people I have spoken to told me they needed to do to survive.

The Ultimate Reframing

I often read articles concerning the "They" in reference to toxic leadership and individuals in positions of power who directly impact tech culture. Let's be specific here. "They" are the beneficiaries of a system designed to extract value from our silence. They are the leaders who view equity as overhead and diversity as a cosmetic feature rather than a structural necessity. They want us to act as the invisible load-bearing walls of an institution they treat as a mere façade, absorbing the stress so they can occupy the penthouse without feeling the tremors. By refusing to do the unpaid work, we are not abandoning the structure. We are unionising the resistance. We are collectively handing back the weight of their own negligence. This is the power of the coalition and the trade union. It transforms a personal refusal into a structural renegotiation. When we stand together to enact this Strategic Glitch, we force the leadership to confront the cost of their own apathy. If they choose to let the infrastructure collapse rather than resource it properly, then let it fall. We are not the help. We are the engineers. And when the dust settles, it will be our collective blueprint that determines what rises next.
The Conscientious Objector

I find myself looking at her with a mix of profound admiration and a distinct, sharp pang of the wiggly gut-guilts.
Recently, I’ve seen allies in academia, scholars I deeply respect, drawing a line in the sand. They are showing us what it means to resist the AI-creep. They are calling on journals to allow authors to explicitly state: “No generative AI was used to prepare or write any part of this article.” I worry about the AI already embedded in journal gateways to check paper references and editorial style... but let's park that part of the exchange. For now.

Actively resisting AI in the research publication process is a beautiful, defiant stance. It is a reclaiming of human labour, a protection of the cognitive sweat that makes research and writing an act of thinking rather than an act of prompting. To be an AI Conscientious Objector is to choose the protection of human values in a much-muddied ecosystem. It is a moral clarity, and a stance, that I crave. But, as I sit here, staring at the blinking cursor of my Outlook inbox, where Microsoft’s Copilot is already, without my asking, suggesting how I might reply to a student, I realise that this clarity is out of my reach. During the research publication process, I want to object. God, I want to object. But I am tired and in a bind. And more importantly, I am entangled.

The Myth of the Binary Choice

The current discourse around AI in higher education is trapping us in a binary that is as harmful as it is false. We are told we either "adopt" or we "resist." We are either the tech-utopian evangelists or the Luddite holdouts. I'm going to drift away now from journal publication gateways to broader Higher Education policy on AI. That is a hot mess. The current discourse frames AI adoption as a simple yes/no proposition, implying that I can opt out by sheer force of will. It suggests I can hang a 'Do Not Disturb' sign on my professional life and the algorithms will politely walk on by. But the door doesn't lock. The algorithms are built into the hinges. Ok, perhaps I do not have sufficient willpower. Is that the problem?

Let’s look at the architecture of my working day. My university, like yours, has integrated AI into the very bedrock of our infrastructure. It is in the Blackboard and Moodle sites where I must upload my teaching and research materials; these platforms now use AI for accessibility scanning and content prediction. It also creates quizzes. It is in the email client I cannot turn off. It is in the "suggested actions" in my calendar. I receive automatic summaries, complete with action points, of meetings I do and do not attend. (Handy or hell?) To be a true conscientious objector in 2025 goes way beyond refusing to use ChatGPT to write a paper. It would require me to dismantle the entire digital scaffolding of my employment. It is a state of resistance I cannot survive.

The Policing vs. Vague Innovation Thirst Trap

I have spent the last few days trawling through the digital archives of higher education, reading the public-domain AI policies of universities across the UK, the US, and Australia. It has been a descent into a very specific kind of bureaucratic bleakness. The landscape I found is arid and it is hostile. The tone of these documents, ranging from the draconian to the delusionally optimistic, reveals exactly why so many of us feel trapped. In parsing the legalese and the strategic ambiguity, I have realised that our institutions generally fall into two distinct camps, neither of which seems to care about the consent of the humans doing the actual work.
On one side, we have the Policing Camp, which views every student as a potential criminal and every AI tool as a weapon to be confiscated. On the other, we have the Vague Innovation Camp, a corporate thirst trap that uses buzzwords like 'literacy', 'employability' (ha), and 'opportunity' to mask a massive, unfunded mandate for staff up-skilling. So, here is a map of the cages as I see them.

I have analysed the public policies of major institutions globally, and what becomes immediately clear is that these texts are rarely static. They are euphemistically called "living documents." In theory, this suggests agility and responsiveness. In practice, for the staff whose labour they govern, a "living document" is a nightmare. It means the rules of engagement are quietly updated in the dead of night, often without announcement or consultation. The ground beneath our feet is being shifted by administrative edits, turning our daily workflow into a game of compliance roulette.

I am using this mapping to anchor my argument for the Reluctant Cyborg. This analysis shows that current AI education policy is often just a shifting set of demands: it requires you to constantly up-skill and adapt while offering zero protection for your intellectual property, your data privacy, or your right to say "no." The reality is stark: no one has sufficient policy for what they are actually doing. We are building the plane while flying it, but the university has decided that the cost of the fuel, our cognitive load (God help you if you are already burnt out or neurodivergent), our creative data, and our autonomy, is a price they are willing to let us pay.

TLDR: No one has sufficient policy for what they are doing.
Table 1: Policing & Surveillance Camp
Table 2: The "Vague Innovation" Camp
Table 3: The "Explicit Policy" Camp (Rare)
What is Missing? (The Labour Hole)
In analysing these documents, what is absent is even more telling than what is present.

* No Right to Disconnect: None of these policies mention a staff member's right not to use AI in their workflow (e.g., turning off Copilot in Outlook).
* No Intellectual Property Protection for Staff: They talk about protecting university data, but rarely about the fact that your lectures, notes, and feedback are being used to train the models you are forced to use.
* No Workload Allocation: "Becoming AI Literate" (Russell Group) takes hours of weekly study. None of these policies allocate hours in the workload model for this "mandatory" learning.

Key Pull Quotes

* From Stanford: "Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person." (Translation: If you don't write a specific policy for every assignment, you are failing.)
* From Yale: "Faculty members are expected to provide clear instructions on the permitted use of generative AI tools for academic work and requirements for attribution. Likewise, students are expected to follow their instructors’ guidelines about permitted use of AI for coursework." (Translation: Use AI tools to be efficient, but if the tool lies, it's your fault.)
* From Russell Group: "Universities will support students and staff to become AI-literate." (Translation: Resistance is illiteracy.)

We are researching and teaching in a cage built of vague principles and guidance that shift the liability and labour onto the individual, while the door (consent) has been removed entirely.

Camp 1: These policies view AI exclusively as a weapon in the hands of cheating students.

* Carnegie Mellon University offers syllabus language that explicitly "forbids the use of ChatGPT or any other generative AI tools at all stages of the work process, including brainstorming."
* Monash University frames unauthorised AI use as a straight "breach of academic integrity," placing the burden entirely on the individual to prove their innocence.

This approach turns us into cops. It demands we spend our precious marking time acting as forensic digital investigators, scanning for the smell of synthetic text. It destroys the trust between learner and teacher.

Camp 2: This camp is perhaps more insidious. These are the policies that use words like "opportunity," "literacy," and "enhancement" to mask the increase in our workload.

* The Russell Group Principles (UK) state that "universities will support students and staff to become AI-literate." Sounds nice, right? But "support" here is often code for "mandatory up-skilling on top of your existing workload."
* University College London (UCL) tells students that AI can "reduce the need for critical engagement," yet simultaneously encourages its use for "ideas generation or planning."

What is missing from all of these documents? Consent. Nowhere does it say: "Staff may choose not to use tools that scrape their intellectual property." Nowhere does it say: "We will not integrate AI into your email client without your permission." The policy is: You're on your own. Use it, but don't get caught using it wrong. Be efficient, but don't be lazy. Be transparent, but don't slow down.

Neurodiverse Cyborg

And then there is the body. Or, more specifically, the neurodiverse brain in a system designed for neurotypical endurance. My research area is technology to support assistive learning and neurodiversity. I have spent years advocating for tools that level the playing field.
For years, I have relied on software like Dragon Dictate to bridge the gap between the speed of my thoughts and the limits of my executive function or physical capacity. Here lies the rub: the tools I use to survive are now AI tools, whether I opted for those elements or not. Dragon Dictate, Grammarly, the screen readers, the speech-to-text synthesisers, they have all been retrofitted with Generative AI. To conscientiously object to AI is, for me, to conscientiously object to the ramp that lets me enter the building.

The volume of work required of a modern academic is crushing. For someone who is neurodiverse, the cognitive load of administrative violence, the forms, the emails, the compliance metrics, is a mountain that grows daily. AI offers a scaffold. It offers a way to handle the sludge work so I can save my remaining spoons for deep thinking. I need access to work tools. I must persist in this space (my daughter and I depend on me supporting us). But how do I reconcile this dependency with my deep ethical discomfort? How is this healthy? I feel a lot of guilt and shame. I spent my entire education knowing I wasn’t good enough in that system. A career as an academic means you experience being continually flattened, or at least having your extreme edges rounded out to ‘fit’ in disciplines, theory, pedagogies and other buckets. Now there’s another ‘unfit’ moment. I feel, and I know, that the architecture of AI is toxic and dangerous. I know the energy required by data centres is killing the planet. I also know that current investment in technology is concentrated in AI, that it is integrated into many of the points of contact we have daily, and that I cannot remove myself fully from them.

The Tech Thirst Trap of the Business School

I teach Business and Computing in a Business School. My students are not entering a world where they can choose to be purists. They are entering industries that demand fluency in these tools. If I refuse to engage with AI, if I treat it solely as a plague to be avoided, I am failing to transfer key skills, and failing to prepare students for the professional tools they will be automatically adopting. I have to teach them the skills industry wants, even as I loathe the extractionist logic of that industry. I have to show them how to use the tool, while simultaneously teaching them to critique the hand that holds it. It is a dizzying, hypocritical dance. It does make me feel unwell.

From Objector to Critical Survivor

So, where does that leave me? I cannot be the Conscientious Objector, standing pure on the outside of the machine. My survival depends on the machine. But I refuse to be the uncritical cheerleader. Perhaps we need a new category. Not the "Objector," but the Critical Survivor. Or perhaps the Reluctant Cyborg. We need to acknowledge that adoption is happening with and without our consent. The "conscientious objection" image is powerful because it highlights agency. But for many of us, disabled scholars, overworked staff, precarious workers, and especially our students, that agency is an illusion. I am deeply unhappy with the current state of affairs. I resent that my emails are being scraped to train a model I didn't ask for. I resent that my assistive tools now come with a side order of environmental destruction and copyright theft. But I am here. I am inside the cage. And if I am to remain a curious learner, I cannot simply close my eyes and pretend the beast isn't in here with me. I have to look it in the eye. I have to figure out how to use it to break the bars, rather than letting it consume me.
We must stop shaming the individuals using AI to survive a collapsing infrastructure, and instead direct our rage at the institutions that broke the system so thoroughly that automation became the only scaffold left.

Image, Dragon Toes and Nose, by Mariann Hardey, 2025

A recent comment on my LinkedIn feed stopped me in my tracks. A superstar researcher in dyslexia asked a question that was both practical and profound. She asked: What is one adjustment that has made an AI tool actually work for your neurodivergent students?
It is a fabulous question. It is the kind of question that comes from a place of care and a desire for solutions. Yet as I sat down to answer it, I realised I could not provide a bucket answer. There is no single app, no specific prompt, and no digital overlay that solves the equation of the neurodivergent mind. To offer a list of tools would be dishonest. It would imply that neurodiversity is a static problem waiting for a software patch. Tempting, though, right? The reality of my lived experience, and the experience of so many others, is that a one-size-fits-all approach does not just fail. It suffocates.

The Shifting Sands of Survival

Every morning I wake up and face a different internal landscape. The executive function strategies that made me a productivity machine yesterday might be the very things that paralyse me today. Yesterday, I was a master of logistics. I managed administrative tasks with ease. The tools that helped were structural and visual. Trello was my best friend. It organised my chaos into neat, satisfying cards. It felt like a scaffold holding up a building. Today is different. Today, I am skirting the edges of burnout. That same Trello board is no longer a scaffold. It is a place of overwhelm. The sheer volume of information on the screen is a sensory assault. Padlet is not my friend. The notifications are not helpful nudges. They are demands I cannot meet. So I turn to pen and paper. I retreat to the tactile, slow friction of ink on a page. Later, I might take a photo of these notes and ask an AI to transcribe my handwriting into a Google Doc. But note the distinction here. The AI is not the solution. The AI is merely the janitor cleaning up after the real work was done. The solution was the permission to abandon the digital tool entirely.

The Carnival of Online Diagnosis

This brings me to my deepest fear regarding the intersection of AI and neurodiversity. I worry that the nuanced (aghhh, I am now allergic to this word as it is over-used by AIs, but here we are…), human-led pathway to diagnosis will soon be paved over by an algorithm. If you spend any time online, you have seen them. The ads are relentless. They are predatory and harmful. They scorch the earth of genuine clinical inquiry with thirty-second clips designed to pathologise normal human behaviour. A frantic millennial points at text bubbles floating above their head. Do you doom scroll? You have ADHD. Do you find small talk exhausting? You are Autistic. Do you have a drawer full of cables you might need one day? Here is a subscription to our app. These online tests are a farce. They are digital carnival games rigged to funnel you toward a monthly payment plan. They rely on the Barnum Effect, offering statements so vague that they could apply to anyone with a pulse and a smartphone. Do you sometimes lose focus? Do you ever feel tired? (uh-oh). Of course you do. You are a human being alive in late-stage capitalism. And everything is unsettling. (Ok, I am about to nerd out about this aspect, a classic autistic trait, stay with me.)

The Barnum Effect as the Digital Clinic

To understand why the "Are You ADHD?" ads on TikTok/Insta and so on feel so uncannily accurate, and why they are so dangerous, we have to go back to the 1940s. The Barnum Effect (also known as the Forer Effect, thank you Wikipedia) is a psychological phenomenon whereby individuals believe that personality descriptions apply specifically to them, even though the description is actually filled with information that applies to everyone.
It is named after the showman P.T. Barnum, who famously declared that a good circus has "something for everyone." In the classic 1948 experiment, psychologist Bertram Forer gave his students a personality test. A week later, he handed each student a unique psychological profile based on their answers. The students were amazed. They rated the accuracy of these profiles as 4.26 out of 5. Ooooo, science, right? Every single student had received the exact same text, which Forer had copied from a newsstand astrology book. It contained statements like: “You have a tendency to be critical of yourself.” “At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved.” “You have a great deal of unused capacity which you have not turned to your advantage.” These are Barnum Statements. They work because they are high-frequency, low-stakes generalisations. They rely on subjective validation: our brain's desire to find connections between generic information and our own lives.

The Weaponisation of the Barnum Effect

In the 20th century, the Barnum Effect was mostly used for harmless vanity. Horoscopes and Myers-Briggs tests used "flattery" to keep us hooked. They told us we were "critical thinkers" or "misunderstood geniuses." They played on the Pollyanna Principle, where we are more likely to accept positive feedback than negative feedback. But the algorithm has mutated this effect into something far more sinister. We are now witnessing a Medicalised Barnum Effect. The modern algorithmic ad does not try to flatter you. It tries to pathologise you. Instead of telling you that you are "disciplined but insecure" (a classic Forer statement), the modern Instagram ad asks: “Do you have a drawer full of cables you might need one day?” “Do you hate small talk?” “Do you doom scroll at night because you didn’t feel productive during the day?” “Are you a woman who is spacey? Forgetful? Or chatty?” (basically, a person with a personality). These are the new Barnum Statements. They take universal human experiences, boredom, clutter, procrastination, social fatigue, and reframe them as symptoms.

These diagnostic algorithms are worse than a polygraph test on The Secret Lives of Mormon Wives. At least reality TV admits it is spectacle. At least when the wires are hooked up on screen, we know it is for the drama. These online tools masquerade as medicine. They wear the lab coat of authority, but underneath is nothing but a data-harvesting engine. They reduce the complex, lifelong architecture of a neurodivergent brain into a binary output. Pass or Fail. Subscribe or Leave. Real diagnosis is an archaeology of the self. It requires digging through layers of masking, trauma, and learned behaviours. It requires a human witness who can see the difference between anxiety and autism, or between trauma and ADHD. An algorithm cannot see the history in your eyes. It can only calculate your click-through rate. My fear is that future generations, including students I teach, will be handed a QR code instead of a conversation.

The Threat of Flat Stanley

We are promised a future of seamless voice interactions with AI. I assume this will function much like the speech-to-text apps I currently use, which have been lifesavers at times. However, there is a cost to this convenience that we rarely discuss. When I speak to an AI, I am feeding the machine. My data, my cadence, and my real voice are harvested to train a model that prioritises averages and norms.
As a neurodivergent woman, I fear what happens when my distinct creativity is processed through these algorithms. Will (yep, there goes my brain) my thoughts be flattened out like Flat Stanley? Will the jagged, interesting edges of my thinking be sanded down to fit a generic model of "professional communication"? Side note: ask an AI to compose an ‘out of office for a university professor’, and the default pronoun will be ‘He’. Isn’t that something. No, thank you. I want to keep my womanly dimensions.

Diagnoses and Dragons

This year has been a watershed moment for my daughter and me. We both received new diagnoses. I am dyslexic, and I have now been diagnosed as autistic. My daughter has started her own journey during her school years. I look at her and I see the difference in our paths. I am an adult who survived my entire education without support. I built my coping strategies out of necessity and instinct. My daughter has only been in school since 2019. She has not yet had to build these extensive defences. What she does have is a strong, innate sense of self. She knows what she likes. She knows what causes her to feel "blurgh." She knows what is fun. She is fun. It would be horrific if a diagnosis report simply stated: Have D use an AI. Why? For what purpose? If we simply shovel AI tools at her, we are bypassing the human work of understanding how she learns. We are replacing a helping hand with a predictive text generator. My fear is that we are heading toward a future in Education where human support is considered out of reach. By the time my daughter reaches university, I imagine professional support services will be rarer than a dragon with golden toes. I fear students will be handed a generic "AI Toolkit" and told to get on with it. This is already happening, btw.

The Philosophy of the Ape and the Wolf

Mark Rowlands (I'm reading a lot of his work lately) reminds us that there is a difference between the instrumental value of the "ape", who schemes and plans for a future result, and the intrinsic value of the "wolf", who lives entirely in the moment of being. AI is the ultimate tool of the ape. It is obsessed with efficiency, output, and results. It tries to civilise the wildness of our thoughts. But the neurodivergent mind often has more of the wolf in it. It does not always want to be efficient. It wants to wander. It wants to make connections that an algorithm would label as errors. I can answer the original question without a single piece of software. What makes the work possible for me? What would make it possible for my daughter? Flexibility and time. That is it. We need the flexibility to use Trello on Tuesday and burn out on Wednesday. We need the time to process the world without a predictive engine rushing us to the end of the sentence. If you apply flexibility and time to neurodiversity, you will be surprised by what happens. We have sophisticated, instinctive strategies that bloom when we are not being forced into a standardised shape. This is not a superpower, btw. The answer is not in the code. It is in the space we leave for the human.

Image by Mariann Hardey, 2025

My Utopian Double, Simon’s Argument, and the Oligarchs Who Own Us
Writing this post is an act of memory. It is also an act of urgent, unfinished conversation. Last year, my dearest friend and intellectual collaborator, Simon J. James, and I wrote a chapter together. It was called "Wellsian Doubles: Digital Space as Modern Utopia." This year, Simon died suddenly and unexpectedly, our research collaborations cut short, my dearest friendship lost. Re-reading our words in the shadow of that loss, and in the glaring, toxic light of our current technological landscape, I find our arguments have accrued a haunting weight. The intellectual journey we took, weaving Simon's brilliant and exciting scholarly understanding of H.G. Wells with my own research into digital life, now feels less like an academic exercise and more like a map we were drawing of a territory we had just begun to explore. The warnings we issued, the connections we made, now seem desperately prescient. The world is dominated by AI, a term that has become shorthand for a future being rapidly and unilaterally defined by a small, homogeneous class of tech oligarchs. Their vision is narrow: it is neuronormative, male, and relentlessly dystopian, dressed in the flimsy language of utopian progress. Simon and I were writing about a "digital utopia," but the future we are being sold is its inverse. The conversation he and I started must now continue.

The Man Who Met His Perfected Self

Our argument hinged on a moment of profound, uncanny self-confrontation in H.G. Wells’s 1905 A Modern Utopia. The narrator, a thinly veiled Wells, is transported to a parallel utopian world. To be registered by this perfect global state, he must provide his thumbprint. This biometric data, of course, already exists. It belongs to his Utopian double. The encounter that follows is not one of joy, but of critique. This ‘other’ Wells is a ‘perfected’ version of the narrator. He is "a little taller than I, younger looking and sounder looking; he has missed an illness or so, and there is no scar over his eye". This double is not genetically different; he is the product of superior social conditions, a "superior being" grown from the same "natural... material". Wells, the narrator, is forced to see himself not as he is, but as he could have been. He is confronted with the "waste of all the fine irrecoverable loyalties and passions of my youth": all the potential squandered, all the scars inflicted, by his own flawed, imperfect world. The two Wellses stand as a "grotesque 'before and after' image," a living testament to the power of a society to either elevate or destroy the individual.

The Digital Twin: Our Utopian Phantom

The argument Simon and I built connects this 1905 literary device directly to the central, lived experience of 21st-century digital life. We are all, now, the narrator in A Modern Utopia. We all live in constant, immediate dialogue with our own perfected doubles. This double is the curated social media feed, the edited LinkedIn profile, the flawless, performative self we project onto a myriad of digital screens. This is the "digital self-work" I have written about: the relentless, iterative, and anxious labour of crafting an "enhanced iteration of our own selves". We are all engaged in building a digital twin, an aspirational phantom who, like Wells's double, has "missed an illness or so" and bears no scars. Like Wells, we "come to meet ourselves" in this digital space, and we almost always find our real, "mucky, humbling" flesh-and-blood existence wanting.
We are caught in a permanent state of comparison, not just with others, but with the perfected, artificial version of our own being.

The Dystopian Pivot: A Question of Ownership

Here, the entire utopian-dystopian axis of our argument pivots on a single, devastating question. It is the question that now defines our digital reality: Can a self-portrait be utopian if the canvas, the paint, the brushes, and the gallery are all owned by a corporation that profits from the exhibition of your perfected image? The answer is, and must be, a firm no. This is the core of the e-topia. We do not own our doubles. We are not the beneficiaries of our own "digital self-work"; we are the raw material. Our "digital twins" are the property not of the self, but of the corporation. We are performing our identities within an architecture we did not build and whose blueprints we are not allowed to see. And this architecture is far from neutral. It is the product of the very tech oligarchies, the new "ruling elite," that are defining our age. Their vision of "perfection" is the one that is algorithmically rewarded. As we argue, the “enduring social hierarchies, encompassing gender, age demographics, commodified cultural expression, sexuality... and the body as a major locus of regulation" have not been erased. They have been amplified, codified, and turned into vectors for profit. The algorithm is not a mirror; it is a mould, enforcing conformity to a narrow, marketable, and often deeply damaging ideal. This is the male AI tech dystopia in practice, a system that mistakes surveillance for community and data-harvesting for connection. This is a system that commercialises a children’s teddy bear with an AI chatbot that included advice on where to find knives and how to light matches, as well as explanations of sexual kinks.

This is where the cage is built. In another piece, I reflected on our impulse to build "cages" around AI, to treat it as something that must be constrained and controlled. But in re-reading the chapter Simon and I wrote, I see the parallel with horrifying clarity: the cages we build for AI are just mirrors of the cages we have already built for ourselves. The tech oligarchs are not building a truly curious or creative intelligence. They are building an administrator. This is the most overlooked revelation in Wells's book. His perfected double, the man raised in Utopia, is not a writer, not a creator, not a public intellectual. He is an administrator, one of the "samurai" elite who manage the system. His specialisation, in fact, is "the psychology of criminals". Even in a supposed utopia, his job is to manage the "imperfect or abject". This is the goal of the tech dystopia. It doesn't want creators; it wants managers. It doesn't want creativity, spontaneity and change; it wants "certainty and stability". The "perfected" digital self, the flawless digital twin, is not a liberated self. It is an administered self, a self that has internalised the logic of the cage, performing its perfection for the "eye of the State", which is now the eye of the algorithm.

The Price of Admission: Total Surveillance

The most chilling parallel, the one that truly closes the trap, is that Wells himself understood the price of his perfect world. Simon and I pointed to Wells’s own chilling admission that his utopia required a near-total loss of privacy, a sacrifice Wells, and many others today, seem to deem "worth making".
Wells’s utopian state is only possible through constant surveillance, through "the eye of the State that is now slowly beginning to apprehend our existence... focussing itself upon us with a growing astonishment and interrogation". This parallel is precise. We have accepted the "invasion of life by the machine" that Wells predicted. We have made the same Faustian bargain, trading our privacy, our autonomy, our "irrecoverable loyalties and passions" for the "privilege" of performing our perfected selves in the corporate-owned digital space. The "eye of the State" is now the eye of the corporation, and its surveillance is total. And what of the utopian double, the perfected Wells? He wasn't a writer, a creator, or a critic. He was an administrator. This, too, was a warning. The system does not want critics. It does not want artists. It wants managers, it wants compliant users, it wants data points. In Wells's own words, the utopian world has a "death instinct" for the genre of utopian writing itself, a desire to "perfect the world so far as to render such a genre of writing unnecessary". It seeks to cancel the very critique that spawned it.

A "Poiesis of the Self": The Unfinished Argument

These thoughts bring me back to the idea of the learner and curiosity I have previously posted about. If we are building cages for AI, we are simultaneously killing our own curiosity. The act of creation, of utopian thinking, is an act of profound, open-ended curiosity. It is what Simon and I, following Wells, called poiesis, meaning a state of creative, restless, self-improving change. This poiesis is the exact opposite of the cage. It is the "universal becoming of individualities". It is Wells's insistence that "nothing endures, nothing is precise and certain... perfection is the mere repudiation of that ineluctable marginal inexactitude which is the mysterious inmost quality of Being". The male AI tech dystopia is built on the repudiation of this. It is a cult of "perfection," of "certainty," of "precise" and "restrictive" categorisation. It cannot tolerate "marginal inexactitude," because that is where humanity, and genuine learning, resides. If AI is a "learner," what are we teaching it? We are teaching it to be an administrator of our cages. We are teaching it that the poietic self, the flawed, scarred, creative, unpredictable narrator, is "abject" and must be "corrected" into the flawless, manageable, and ultimately sterile administrator.

It is this final part of our shared argument that I now hold onto. Simon and I saw a potential way through this deterministic, corporate-owned dystopia. We argued that utopia, for Wells, was not a final, static place or a "perfection." He insisted that his modern utopia must be "in motion," "fluid and tidal". The goal was not being, but a "universal becoming of individualities". To achieve this, Wells used a concept from Plato: poiesis. Again, poiesis is creative action, the act of making, of bringing something new into being. Wells’s utopia needed poietic inhabitants to keep it in a constant "state of creative change". Simon and I proposed that digital modes offer a "poiesis of the self". This is the hopeful path, the difficult, necessary act of intellectual and personal resistance. It is the refusal to be a static, reified, commodified product. It is the insistence on reclaiming our "co-creative" agency, on seeing our digital lives not as a performance for an algorithm, but as a "resolute engagement with the world and the self".
Our identity, like Wells's utopia, must be "fluid rather than fixed". Utopia, we concluded, is "an ever-ongoing project". Simon is gone. But our shared project, the poiesis of our collaboration, is not. The task now is to rescue our digital lives from the administrators, to refuse to be mere "flawed and reified representation[s]", and to insist, against the deterministic pull of the oligarchs, that a "better way of being" is still one we can creatively, curiously, and collectively make for ourselves. This is the only way to honour the conversation we started.

This post is in memory of Simon J. James, who was brilliant and is missed. All readers should have Open Access to our chapter; please get in touch with me ([email protected]) if you have any difficulty locating our words.

Image, paperbag academic, by Mariann Hardey, 2025

The Red Pen as a Sledgehammer
There is a moment every academic knows. It is the pause before opening the email with "Decision on your manuscript" in the subject line. It is a moment of vulnerability, a baring of the intellectual self to the anonymous judgment of the field. We steel ourselves for critique. We hope for engagement. We accept that rejection is part of the process. What we do not, and should not, accept is the demolition. I received a desk rejection recently. It came after a long delay, flagged with an apology from a new editorial leadership. The rejection itself was not the problem; we are all used to rejection. The content of that rejection, however, was a masterclass in everything that is broken in academic culture. It was not peer review. It was a takedown. But not in a catchy K-Pop-tune manner.

This Is Not About Rejection

Before we go any further, I want to be perfectly clear. This post is not an angry rant because my co-authors' and my research was rejected. Rejection is a fundamental, and often productive, part of the academic ecosystem. Good research is forged in the fires of rigorous, critical, and even harsh peer review. We get "no" far more than we get "yes," and that is the price of admission. I had a fabulous rejection the other week (more on this later)… This post is about the weaponisation of feedback. It is about the specific, toxic culture of intellectual grandstanding that hides behind the veneer of "maintaining standards." The heart of the problem is not the rejection; it is the abruptness and absurdity of the message. It is the choice to use power not to build knowledge, but to humiliate and exclude. The problem is an email that is not a critique but a cudgel, a message so disproportionately cruel and dismissive that it ceases to be a professional assessment and becomes a personal attack. This is not about the outcome. It is about the method, and what that method reveals about the wielder.

A Performance of Power

I’m going to let you look at the feedback's core message. It was a piece of performative grandstanding, a judgment delivered from on high. The author of this feedback refused to engage with a piece of research; instead, they asserted their own superiority over it. The entire text was laced with elite posturing, designed to signal that my co-authors and I were not part of the 'sophisticated' club. Our analysis was dismissed as simplistic, a mere summary lacking any theoretical depth. A core part of our methodology, a well-respected method for analysing social media content, was declared entirely irrelevant to the questions we posed. Our interpretation was labelled as nonexistent. The letter concluded with the stunningly arrogant assessment that the entire manuscript was an incoherent shambles. It was, in essence, a Prince Ronald moment. In the children's story "The Paper Bag Princess," Princess Elizabeth dons a paper bag to outsmart a dragon and save her fiancé, a prince named Ronald. But when she rescues him, the prince doesn't thank her. He looks at her soot-covered face and her paper bag and says, "Elizabeth, you are a mess... Come back when you are dressed like a real princess." This is the very essence of academic elitism. This is the braying of an individual, likely a senior academic secure in their own elite standing, who believes their position grants them the right not just to disagree, but to demean.

The Collateral Damage of Elitism

This is where the personal and the systemic collide.
My primary reaction was not just frustration for myself, but a cold fury on behalf of my co-authors. This was to be one of their early publications. Imagine, as a junior scholar, stepping into this arena for the first time, only to be met with this wall of contempt. This is how academia bleeds talent. This is how gendered exclusion operates. It’s not always a slammed door. Sometimes it’s an email, dripping with disdain, that tells you your work, your thoughts, your very presence, are a complete failure. It is an act of intellectual violence, a symptom of a much larger disease: a culture that romanticises burnout and offers no structural support. This feedback is the voice of that toxic system, an individual who chooses to use their power to inflict a wound.

The Practice of True Parity

This kind of gatekeeping is precisely why the way we collaborate matters so much. Working with a mix of people at different career stages is not about mentorship, at least not in the traditional, hierarchical sense. The idea of the senior academic "bringing up" the junior scholar is itself a form of patronising elitism. It reinforces the very power structures that allow such toxic feedback to exist. The real, radical act is to build collaborations based on parity. It is to create a partnership where all forms of expertise are valued equally. One of my co-authors, at the start of their career, brings a methodological rigour and a fresh perspective that I, as a more established academic, benefit from enormously. My experience navigating the brutal landscapes of peer review is simply another form of expertise, not a superior one. True allyship is a structural commitment. It means deciding from the beginning that we are building a single project from two equally vital toolkits. The role of the senior academic is not to "protect" the junior one, but to use their privilege to absorb the bureaucratic violence, to contextualise the feedback as a systemic failure, and to be the first to say, "That prince is a bum."

The Empathy Failure

This entire episode is a catastrophic failure of empathy. Brené Brown’s research defines empathy not as sympathy, not as feeling for someone, but as feeling with them. It is the vulnerable choice to connect. Sympathy stands at the top of the hole, shouting down, "It's messy down there." Empathy climbs down into the hole to say, "I know what it’s like down here, and you are not alone." This editor, like Prince Ronald, is armoured in his own ego. He is the person standing at the top of the hole, declaring that the hole itself is unsophisticated. This feedback is a performance of anti-empathy. It is a strategic choice to use judgment as armour, because to engage empathetically would require him to be vulnerable, to connect with the act of intellectual creation, and to be a colleague. Instead, he chose the power of the pedestal, dismissing the work because it is far safer to judge than to connect.

Rigour vs. Cruelty in the Age of AI Slop

Now, let me be clear. Empathy is not a participation trophy. It is not the pat on the head for a good try. I say this as an editor myself, one who is currently drowning. The sheer volume of work is crushing, but it's the nature of the new volume that is truly corroding. We are all facing a new, specific, and soul-crushing fatigue from the deluge of AI-generated slop. "Please, please stop with this slop," I repeat as my mantra when I open up my own editor's digital desk. As an editor, I receive a relentless slurry of meaningless, plagiarised, hallucinated text.
It’s an endless signal-to-noise problem that wastes our most precious and finite resource: our cognitive load. (Which is precious when you are neurodiverse.) This new fatigue is a specific kind of burnout. It’s the weariness of a lifeguard watching thousands of bots pretend to drown (they're just waving, right?). It makes you calloused. It makes your trigger finger for the 'reject' button itchy. You start to assume bad faith in every submission. But this is precisely the moment where empathy becomes a non-negotiable professional obligation. Our exhaustion with the system does not give us a licence to be abusive to the individual. Empathy is the critical tool of discernment that allows us to distinguish between a bad-faith, automated submission and a good-faith, flawed human effort. The AI-generated paper deserves a form rejection. The human-authored paper, even if it is deeply flawed and requires rejection, deserves a response that respects the labour. Empathy is what allows us to be rigorous without being cruel. It is the choice to critique the work, not the person. It is the difference between saying, "The theoretical contribution is not clear," and "Your work is a big mistake and I am better than you."

The View from the Other Chair

Again, I am also a journal editor. When I read this letter, I do so with a profound sense of professional failure, not on my part, but on the part of this journal’s new leadership. In my own editorial practice, feedback like this would be reason enough for an author never to go near peer review again. It would never, under any circumstances, leave my desk and go to an author. My job as an editor is to be a custodian of the field. It is to find the value, to guide the author, to protect the integrity of the review process and the human beings who participate in it. We reject papers constantly. But a rejection should be a tool for improvement, not a weapon of humiliation. This editor's failure to distinguish between critique and abuse is a stain on the journal. Their final, hollow wish that we would not be deterred is perhaps the most insulting part, a feigned elitist politeness after an act of deliberate cruelty.

Rejection as Success!

Now, let me contrast this with a rejection I received from another journal for a different paper. This one was also a desk reject, but it was the polar opposite in its effect. The editor began by validating the work, calling the topic "highly timely" and acknowledging the "rich longitudinal, multi-method qualitative design". The "no" was just as firm, but it was not a demolition. Instead, what followed was a precise, structured, and generous roadmap for improvement. The feedback was a model of clarity, pointing to specific, actionable issues: an "uneven" integration of theory and data, a lack of transparency in how the different methods were "combined analytically", and a "conflation" of descriptive observations with conceptual claims. This, right here, is what good editorial practice looks like. This is not a "mess"; this is a checklist. For any writer, this is a gift. But for neurodiverse writers, who often struggle with the unspoken rules and subtext of academia, this kind of explicit, logical, and depersonalised critique is an act of essential inclusion. It removes the emotional guesswork and replaces it with a clear-cut task. I didn't feel humiliated; I felt seen, respected, and, most importantly, I knew exactly what to do next. Again, I know what some readers might still be thinking: "This is just an academic pissed off over a rejection."
It is a convenient way to dismiss this entire reflection as sour grapes. But that would be a fundamental misreading of the problem, and it misses the entire point. We are all built to handle rejection; it is the ink we swim in. This post was never about the "no." It is about the how. It is about the profound, unprofessional, and systemic failure that occurs when an editor, an individual in a position of immense trust and power, chooses to issue not a critique, but a personal demolition. This is not about my wounded pride; I have boxing strategies for this part. It is about an abusive culture that masquerades as rigour, a system that protects the egos of its Prince Ronalds while it burns the next generation of Elizabeths. This is not a complaint. It is a diagnosis.

Trojan Horse image by Maz Hardey

A few days ago, my brilliant friend and education practitioner sent me a link to a Google blog post on AI and learning. On the surface, it’s the usual optimistic fare: AI as a tool for personalised learning, for bridging gaps, for efficiency. And for a moment, a fleeting, optimistic moment, I saw the shimmering potential. Then, the cold, hard slap of reality. Not the reality of AI's limitations, but the reality of its deployment, its framing, and the deeper, insidious currents it often serves. I am a professor. I am autistic. I am dyslexic. And like many others, my mind is not a neat collection of separate cognitive functions that conveniently slot into diagnostic categories. It is a messy, vibrant, sometimes terrifying convergence. To speak of "my dyslexia" or "my autism" as distinct entities is like trying to describe the flavour of a tom yum soup by isolating the salt. The essence is in the blend, the unpredictable, sometimes overwhelming symphony of sensations. And often, that symphony culminates in a profound, exhausting mush. This is the ground upon which the grand narratives of inclusive technology are so often built. These are narratives that, I increasingly suspect, function less as bridges and more as Trojan Horses.

The Siren Song of the AI Education Silver Bullet

The rhetoric around AI in education is seductive. It promises to "level the playing field," to "personalise learning," to "empower neurodivergent students." For a moment, it sounds like salvation. For the dyslexic, AI will summarise dense texts; for the autistic, it will organise schedules or draft emails. And yes, in isolated moments, it can do precisely that. I can attest to the small victories. The AI summariser that can cut through a thicket of academic prose, saving days of concentrated cognitive effort. (Or maybe academics should write with clarity and avoid dense and inaccessible flourishes in their work…) The executive function assistant that helps me wrangle a chaotic inbox. These are not trivial gains. They are moments of respite in a landscape that often feels like an uphill battle. But here’s the rub: these isolated victories are often presented as evidence of a systemic solution. And this is where the Trojan Horse comes in. The promise of inclusion via technology is hoisted over the walls of traditional pedagogy, not as a radical reimagining of the city itself, but as a new, more efficient weapon in an old war.

The Hidden Costs: Cognitive Exhaustion and the Illusion of Choice

Mark Rowlands often writes about the animal mind, embodied cognition, the way our being in the world shapes our understanding. Our neurodivergent minds are profoundly embodied.
Our energy is not an infinite resource; it's a carefully managed, precious commodity. And often, it’s already depleted. The Google blog, like so many others, extols the virtues of these new tools. But who speaks of the cognitive overhead? Who calculates the hidden tax levied on a neurodivergent brain simply to learn a new tool, to integrate it into a workflow, to debug its inevitable failures? Here’s an insight into how my mind works. I cannot simply isolate the task itself and ask an AI to ‘run it’. I need scaffolding around the task. For neurotypical individuals, adopting a new app might be fun and enhance their efficiency or productivity (regardless of how toxic this mindset is…). For a mind that already expends disproportionate energy on executive function, sensory filtering, and processing complex information, another solution can feel less like an aid and more like another brick dropping on your head. We are told, "Just learn to prompt better!" "Explore its features!" "Maximise its potential!" "Use it 'critically'!" (Whatever that means.) These exhortations are not helpful; they are an additional layer of homework. It's a constant, low-level hum of anxiety: Am I using it correctly? Is it actually helping or just adding another step? Is this "aid" actually a subtle form of digital gatekeeping, where only those with the energy to master it truly benefit? Sometimes, the promise of support through technology simply shifts the burden. Instead of changing the inaccessible structure, we are handed a more complex hammer and told to adapt the world ourselves. And I want to be clear: AI was never designed with neurodiversity in mind. This is a significant challenge for anyone who encounters AI, especially if you are told to simply ‘play’ with the technology. That’s a very scary place to be.

The Real Battle: Not Tools, But Systems

The truly critical edge here is that the focus on technological fixes often sidesteps the more fundamental, uncomfortable truths about our educational systems. Why do we need AI to summarise dense papers? Because academic writing is often needlessly convoluted, exclusive, and antithetical to effective knowledge transfer. Why do we need AI for executive function? Because curricula are often rigid, assessments inflexible, and institutional structures demand a standardised mode of engagement that disregards the vast spectrum of human cognition. Instead of demanding that professors teach differently, that universities reform their assessment methods, or that academic culture embraces diverse forms of expression, we are offered a technological bypass. The argument morphs: "Oh, it's not the system that's flawed, it's just that some brains need extra tools to fit into it." Neurodiversity, in this context, becomes a convenient vehicle – a Trojan Horse – for the uncritical adoption of technology. It grants moral legitimacy to the tech giants, allowing them to frame their products as benevolent instruments of inclusion, rather than as profitable enterprises that may, in fact, exacerbate existing inequalities. The "neurodivergent user" is championed, not because the system fundamentally changes to accommodate them, but because their challenges provide a compelling justification for deeper technological integration. And in this process, the very concept of "neurodiversity" is subtly reshaped. It moves from being an argument for systemic change and varied human experience to a consumer category for technological solutions. "You're neurodivergent? Here's your app!
Here's your AI co-pilot!" The inherent value of diverse ways of thinking is lost in the scramble to digitally "fix" difference. (Screams!)

Reclaiming the Narrative

The future of education, for minds like mine, isn't about more tools to navigate a hostile educational and professional landscape. It’s about cultivating a landscape that is less hostile to begin with. It's about assessments that celebrate varied forms of intelligence, not just rapid-fire recall or perfectly formatted essays. It's about curriculum design that anticipates a spectrum of processing styles. It's about institutional empathy that understands the finite nature of cognitive energy. Let the AI summarise. Let it organise. But let us never mistake these tactical aids for strategic victories. Let us be vigilant against the insidious notion that our complex, beautiful, sometimes chaotic brains are simply problems awaiting a tech solution. And we need to agree on which AI to use and why. The true conversation for AI in education shouldn't be about "is this cheating?" or even just "who is this including?" It needs to be: Who is this demanding more from, who is it truly serving, and are we using the genuine need for neuro-inclusion as a convenient smokescreen for a deeper, more problematic technological agenda? Because sometimes, true inclusion isn't about adding more, but about stripping away the unnecessary, the rigid, and the burdensome, allowing all minds the space to simply be and to thrive. Our minds are not a market for your solutions; they are a reason to change your systems.
The curious learner. I find myself thinking about her a lot.
I even drew her, in a simple sketch, to try and make sense of the unease I was feeling. I call her 'The Learner'. She isn’t a difficult student. She’s not the one in the back of the lecture hall, disengaged, scrolling through her phone. She’s the one who is diligent. She’s the one who is curious. She’s the one who, after a session, stays behind to ask a question that lights up her whole face. A question that, in a healthier world, would be the entire point of education. A simple, wonderful question: “Could I…?”

“Could I,” she might ask, “try to use this… this new AI thing… to help me brainstorm?” “Could I,” she’d continue, a little quieter, “see if it can help me structure my argument? Not write it! Just… help me play with the ideas?” She would like to experiment. She would like to play. She is standing at the edge of the most significant technological shift since the internet, a tool that will fundamentally reshape her world and her career. And her first, pure, academic instinct is to poke it, to test it, to see how it works, and to understand how she can think with it. And what do we do when she asks this question? We shame her. We don’t do it intentionally. We don’t do it because we are cruel. We do it because we are, as an establishment, terrified. And so, when this curious learner holds up her spark of an idea, we douse it with the cold water of our own institutional panic. From all sides, the voices come. The ones I drew in the speech bubbles, floating over her head, pressing down.

“Using AI shows you are lazy,” whispers one voice. This is the voice of moral panic. It equates a new tool of augmentation with an old tool of shirking. We are shaming her for her curiosity, labelling it as a moral failure, a lack of character.

“You must show evidence of critical thinking,” insists another. This is the voice of deep irony. We say this while simultaneously discouraging her from critically engaging with the most important new tool of our time. We are, in effect, telling her that the only way to show critical thinking is to pretend this technology doesn't exist.

“The uni has an AI policy,” says a third, definitive voice. This is the wall of bureaucracy. A policy almost certainly drafted from a place of fear, not of exploration. A document designed to prevent rather than to guide. It is a shield for the institution, not a map for the learner.

“You already have teaching support.” This one is perhaps the most heartbreaking. This is the voice of dismissal. It fundamentally misunderstands what she is asking. She is not asking for help because she is struggling; she is asking for permission to be curious. We are telling her that the established, "correct" pathways are the only ones she is allowed to walk.

So, what happens to The Learner? She gets shamed. Over and over again. And finally, she gets stuck. Her curiosity, once a spark, is now a liability. She learns the real lesson we’re teaching her: "Don't ask. Don't experiment. Don't play." She learns that the goal of education is not to explore the frontier, but to produce a piece of work that can be "evidenced" in a way that makes the institution feel safe. She learns to perform her "critical thinking" in a neat little box, far away from the messy, complex, and fascinating tools that she knows will define her future. She becomes stuck. And we, the educators, are the ones who stuck her there. This, I believe, is a profound failure.
We are in the middle of a revolution, and we are spending all our energy trying to build higher walls, instead of teaching our students how to be architects. What if we changed our response? What if, when she asked "Could I…?", we leaned in and said, "I don't know. Let's find out together." What if we built sandpits, not cages? What if we designed modules specifically around "playing" with these tools? What if we asked, "Show me what you made with AI, and then write me a reflection on what it got wrong, what it got right, and what it taught you about your own thinking process"? What if we stopped writing policies based on a panicked desire to "catch" cheaters, and instead started developing pedagogies based on a genuine desire to cultivate co-thinkers? Because The Learner is still there. She's still curious. But she's stopped asking. And that should frighten us far more than any AI ever could. And you might like my co-authored book on Generative AI and Education.

Image copyright Mariann Hardey, 2025

On November 11th, I gave the opening keynote at the Deepfake and Society Symposium at the University of Otago. My colleague, Dr. Wasim Ahmed, and I were invited to set the stage for a day of critical humanities research. My talk was designed as a "zine-note" to explore the human, cultural, and political stakes of our new reality. Here is the script from my presentation.

MY ROOFER, THE DISINFO-ARCHITECT (A TRUE STORY)
Before we talk about AI, algorithms, and global networks, I want to tell you a very analogue story about a roofer. On January 1st this year, I had a serious leak from my roof, which I then had repaired. A few months later, a man knocked on my door. He didn't try to sell me a new roof; that would have been too obvious. He did something much smarter. He introduced himself as an 'expert tradesman'. He pointed up at my chimney and said, with a deeply concerned frown, "I've noticed... you've got a missed bit". He then spun this incredibly detailed story. This tiny, specific flaw, he explained, was going to let water run down between my house and my neighbour's, leading to 'significant, costly, hidden damage'. But he had his tools. He could solve my problem, right there and then, for £450. Now... I didn't let him 'repair' my roof. What stayed with me was the algorithm. Not a digital one, but a human one. A simple, three-step script for hacking trust. He sold me a narrative of 'urgent, hidden danger'. He sold me fear. And, most importantly, he sold me privileged access to a 'truth' that I couldn't see for myself. He'd identified his target: a woman living alone, whom he perhaps assumed was 'easy to manipulate'. My roofer was a disinfo-architect in analogue. He proved that a compelling fiction is more powerful than a boring truth. This analogue con is the exact logic of modern disinformation. It's not the bald-faced lie. It always starts with the 'missed bit'. It's the 'cherry-picked statistic'. It's the '10-second video clip cut from a 2-hour speech'. It's the 'leaked' email. It's a tiny, specific 'flaw' presented as the key to a much larger, hidden danger. It is a 'performance of authenticity'. We, especially as researchers, have been trained to believe that 'truth will out'. That facts will win. But the roofer proves that's not true. The antidote to a bad story isn't a fact. It's a better story.

WHAT IS DISINFORMATION? (Inspired by a Roofer and Warhammer)

This is the core of our problem. Disinformation isn't just a lie. It's the 'institutionalisation of deception'. It is the roofer's tactic, scaled up by technology. The 'missed bit' is now weaponised to turn a safe home, or a safe society, into a source of fear. The result? 'Truth itself becomes a malleable commodity'. My new roof, successfully reframed as flawed. A fair election, reframed as stolen. And this creates the battlefield we all now live in: the 'Disinfopocalypse'. An environment where it is 'difficult, if not impossible' to tell fact from fiction. Where we are 'drowning in data', but trust... trust becomes our most precious, and most endangered, resource.

Disinformation is the roofer's tactic, leveraging the institutionalisation of deception. A social script, a feigned concern, the performance of an expert. A deliberate, engineered assault on our shared reality. He weaponised a "missed bit" to turn my safe home into a source of fear. The result: truth itself becomes a malleable commodity. My new roof was successfully reframed as flawed.

'Disinfopocalypse': The Battlefield. A present where the very concept of objective truth is under relentless siege. An environment where it is difficult, if not impossible, for individuals to distinguish between actual facts and manipulated falsehoods. The result: we are drowning in data (and warnings), and trust becomes our most precious and endangered resource.

THE "GOLDEN AGE" OF FAKES

It wasn't always this way.
When I started my academic career in the late 1990s and early 2000s, the internet was in its 'Golden Age'. The most dangerous 'fake' I investigated was on an internet dating profile. My early work was on digital behaviour, etiquette, and identity. The "fakes" were catfish. The "lies" were 10-year-old photos. The stakes were personal: a bad date, heartbreak. My research question was: how do people perform a 'true' self online? As researchers, we were observers: digital anthropologists studying a new tribe with a certain academic distance. The "truth" was still a knowable thing we could uncover. Now the stakes have shifted. They have moved from personal deception to societal manipulation. The lie is no longer a 10-year-old photo; it's a 'deepfake' video... a coordinated, AI-driven campaign. This was the end of our academic innocence. We went from being observers to being participants in what my colleague Wasim Ahmed and I call the "Disinfopocalypse": a state where we are "drowning in data" and have zero "clarity" on information source, legacy, or manipulation.

HUMAN MACHINERY OF LIES

My colleague, Dr. Wasim Ahmed, who you'll hear from next, will show you the 'battle maps': the social network analysis (SNA) graphs of how lies spread. I'm going to talk about the people on that map. This is the Human Machinery of Lies. It's a simple, two-part recipe.

The Seeders: the architects of the story. These are the modern "snake oil salespeople". They craft the initial narrative. But their motives are complex. Profit: the "lucrative business" of falsehoods, where clicks equal ad revenue. Conviction: the "true believer" who genuinely thinks they've found a "universal truth" that the mainstream is hiding. They are 'authentic in their inauthenticity'.

The Amplifiers: the unwitting (and witting) chorus. This is us. The people on Wasim's network maps. We don't amplify because we're malicious. We amplify for the most human reason of all: 'Social Capital'. To belong. To signal our identity. Humans are storytelling animals. We occupy this planet by creating and sharing fictions. We call them gods, nations, and money. Disinformation works because it leverages our deepest evolutionary drive: the desire to understand and belong.

THE AI ENGINE AND THE ARENDTIAN NIGHTMARE

So, we have this ancient, human machinery. This brings us to the critical question: what happens when you connect this human machinery to a new, non-human algorithmic engine? You get the great accelerator. The algorithm builds the 'echo chambers' that trap us, feeding us more of what enrages us. It's the 'YouTube rabbit hole' on a societal scale. This technology isn't creating a new problem; it's perfecting an old one. It creates a public that changes behaviour based on 'emotional reaction, not reasoned analysis', because that analysis has been manipulated or is invisible. Deeeep Fakes. Who built this engine? This algorithm, this AI, is the 'great accelerant' of our times. But it wasn't built in a vacuum. It's the product of a 'tech-bro' culture that lionises disruption and scale over nuance and safety. We don't have to guess its values. Long before generative AI, the academic Safiya U. Noble, in her foundational book Algorithms of Oppression, diagnosed the harms of this culture. She showed us how a simple Google search for 'Black girls' returned almost exclusively pornography. The AI engine is built on a coded logic of gendered and racial humiliation. It should come as no surprise that the very term 'deepfake' wasn't coined by a university lab.
It was the username of a Reddit poster in 2017, promoting his 'killer app': a tool specifically designed to 'paste the faces of female celebrities onto pornographic videos'. An industrial-scale production of non-consensual, gendered humiliation. My point is, this engine 'isn't neutral'. Its goal is not truth. Its goal is engagement. The algorithm doesn't care why you're angry. It just knows you stayed.

OUR FIELD GUIDE FOR THE NIGHTMARE

A tech dystopia is the nightmare we have been warned about for over a century. Increasingly, I turn to popular fiction to frame these cultural narratives, treating these texts as the diagnosticians of our current state.

The Diagnostician: Ray Bradbury

My first diagnostician is Ray Bradbury. We all remember Fahrenheit 451 for the fire. We remember the woman who 'immolates herself and her home', a terrifying, final act to protect her 'thoughts, her very life', and her 'fundamental right to share knowledge'. She embodies the human drive to protect truth from an overt, raging fire. But Bradbury's deeper warning, the one for our time, was the 'subtler, perhaps even more terrifying, form of censorship'. What if the books are never burned? What if they are 'simply rewritten'? What if their facts are 'expertly distorted, until public understanding itself becomes malleable'? This is the world AI perfects. The censorship we face is that 'quiet, constant hum within our minds' of the algorithm, endlessly rewriting reality.

The Method: George Orwell

If Bradbury diagnosed the environment, George Orwell diagnosed the method. In 1984, the Party insists that 'two plus two equals five'. How? Because authority and the echo chamber reinforce it. The AI-driven echo chamber is this method, perfected. It algorithmically reinforces the lie until it becomes the only fact you see.

The Political Goal: Hannah Arendt

However, it's Hannah Arendt who provides the most terrifying and accurate diagnosis of the political goal. This is the sharpest point I can make today: the real horror of the deepfake is not to make you believe a lie. It is to make you believe nothing. It is the 'systematic destruction of a fact-based reality'. The goal is to create a populace so exhausted, so cynical, so disoriented that it 'believes everything and nothing'. A populace that has lost its shared, fact-based world also loses the ability to govern itself. It can 'only react, not reason'. The endgame is to erode our 'shared epistemology' so that democratic argument itself becomes impossible.

PROVOCATIONS FOR TODAY'S SPEAKERS

So, this is the lens for today. As the opening keynote speaker, I want to offer a provocation for the incredible ideas I see on the programme. When you hear talks on 'digital harm' or 'copyrighting the self', I want you to ask: how can we 'copyright' a self that is infinitely reproducible? How do we define 'harm' when the goal is to destroy the concept of truth itself? When you hear talks on 'critical literacy', ask: how do we teach students to critique a text that is designed to bypass the brain and hit the gut? When you hear talks on 'bioethics' and the 'colonisation of reality', this is the heart of it. A deepfake is the ultimate 'colonisation of the self'. What 'relational ethics' can we possibly have with a 'synthetic self'? I've shown you the 'why': the humanist elements in crisis. My colleague, Dr. Wasim Ahmed, is next. He will show you the 'how' and the 'where'. He will show you the maps of this new reality.
BE THE HUMAN FIREWALL

The solution to this humanist crisis will not be an algorithm. It is us. Our academic process of verification, critique, and rigorous doubt is the antidote. My final provocation is this: our job is no longer to study this; it is to act on it. Our job is to be the Human Firewall: in our teaching, in our research, and in our public life. Thank you.

TopCat is an elderly, 18-year-old female moggy who is having the time of her life in Scotland. TopCat has been self-tracking for 18 months, with her owner curious about why she had gained weight (a lady of a certain age?), as well as where she went all day…
TopCat’s tufty area is short and fluffy, her floof jutting v-shaped under her more flexible spine. Her cat frown rises outward from twin creases above a snub nose (she’s a Himalayan Persian), and her pale strawberry blonde fur pushes down from her high, flat temple to pick up the V-motif once more. She has the appearance of a blonde cloud. And she’s a regular killer of mice. I’ve been interested in self-tracking since my late father bought a pedometer for our Guildford council house when I was eight years old and we tracked the steps from there to the local bakery for fresh doughnuts (3,477 steps). Counting steps has always represented a personal resonance with my surroundings, an interest in health, and a celebration of technological innovation. This is why I began writing my book Household Self-Tracking During a Global Health Crisis in 2020. The goal was to consider how the commercialisation of health promotion through self-tracking technologies is symptomatic of a larger social and cultural health change, marked by increased individual investment in, and image construction of, fit and healthy living. What I hadn’t anticipated was the same level of investment and interest in self-tracking with (not just for) pets. Viewing the GPS tracking data from a Fitbit attached to her collar revealed that TopCat had three additional ‘homes’ and four ‘owners’ who sought to cater to her every blonde furry whim (a sketch of how that kind of discovery falls out of the data follows below). Perhaps tracking your dog’s daily steps or your cat’s sleep patterns is ridiculous, but I’ve discovered that understanding the more ridiculous forms of household tracking provides better insight into health practices as a way of living in a world that is both in crisis and promoting breakthrough after breakthrough in health technologies. During the course of writing the book, I became aware of how household health data practices extended care routines and opened intimacies in such a way that members (especially pets) could motivate and sustain healthy changes. And, while I passed up the opportunity to conduct direct interviews with pets (for the next book), my research discovered that tracking with pets provided care and affective forces that were important in household relationships. Such absurdity may allow us to investigate new health connections made not only between people, but also between people and their digital devices, pets, and, in TopCat’s case, multiple homes. What emerges is an ever-attentiveness to health, describing the caring intimacies and responsibilities deployed in health tracking in households with people and animals. Because tracking is viewed as an analytical category within the home rather than something exclusive to humans, health-related identities mean different things to different generations (human and pet), and focusing on interconnected health narratives allows us to unpack contextualised meanings. Pets, like technological confidence, class, generational, or gender relations, can be used in sociological health studies to understand household dynamics and the implications for other types of tracking and a sense of social responsibility. My observation of self-tracking extending to furry members of households can be summarised as follows: tracking practices increased support and contributed to the flourishing of happiness, even for animals. Pet health data may be considered novel or less important than people’s health data, but it reveals a strong positive association with tracking, as well as an interest in and preservation of intimate data.
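As an aside for the technically curious: dwell-location analysis of collar data is less mysterious than it sounds. The short Python sketch below is purely illustrative (invented coordinates, an arbitrary grid size and visit threshold, and emphatically not the method used in the book); it simply counts where GPS fixes pile up, which is roughly how multiple 'homes' reveal themselves.

# Illustrative sketch only: recovering candidate "homes" from a pet's
# collar GPS log by counting where fixes accumulate. All coordinates,
# the grid size, and the visit threshold are invented for this example.
from collections import Counter

# Hypothetical (lat, lon) fixes logged by a collar tracker over a week.
gps_fixes = [
    (54.7768, -1.5757), (54.7769, -1.5755), (54.7768, -1.5756),  # house one
    (54.7810, -1.5698), (54.7811, -1.5699),                      # house two
    (54.7768, -1.5757), (54.7841, -1.5632),                      # house one, roaming
    (54.7810, -1.5699), (54.7811, -1.5698),                      # house two again
]

GRID = 0.0005        # coarse cells (~50 m): nearby fixes share a cell
MIN_VISITS = 3       # a cell this busy counts as a candidate "home"

def cell(lat: float, lon: float) -> tuple[int, int]:
    """Snap a GPS fix to a coarse grid cell so nearby fixes group together."""
    return (round(lat / GRID), round(lon / GRID))

visit_counts = Counter(cell(lat, lon) for lat, lon in gps_fixes)
homes = {c: n for c, n in visit_counts.items() if n >= MIN_VISITS}

for (clat, clon), n in sorted(homes.items(), key=lambda kv: -kv[1]):
    print(f"candidate home near ({clat * GRID:.4f}, {clon * GRID:.4f}): {n} fixes")

On real data the grid trick is crude (fixes can straddle cell boundaries), and a proper analysis would use a clustering method such as DBSCAN, but the logic is the same: the places a cat returns to, day after day, surface as the densest cells.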
The novelty of tracking activities such as TopCat’s is a strong motivator for the household to begin pet health tracking; however, this belies the serious point that maintaining such tracking with pets contributes to clear health outcomes and preventative actions, reinforcing the benefits and continuation of such activities. In response to my general question ‘Do you like tracking?’ there was a strong emphasis on pet welfare. These aspects of my study revealed that households were just as interested in tracking with pets as they were in monitoring general health interests, symbolising an attachment to informal digital health practices and an extension of responsive, caring approaches to the enactment of health monitoring, whether for people or animals. In reading this book, I hope you gain a sense of the different aspects of health data and how tracking behaviours combine. There is so much to untangle in household tracking, from the commercial organisations seeking to profit from health data, to the policymakers closely reviewing and analysing social uses of health data, to the education required to fully understand the legacy of self-tracking data in our lives. Writing the book, in a state of global uncertainty around health, during long periods when we were confined to our homes, was terrifying, empowering, overwhelming, informative, and confusing all at the same time. Talking about household tracking with others immediately raised concerns about when not to track, especially when governments and global health policies are involved and trying to persuade us to adopt health tracking, if not impose it on us. Despite growing policy initiatives, health tracking is a personal choice. There are very active communities focused on user data and privacy rights, patient record access, and open data that can help raise awareness of the different ways people can understand health data. In writing this book, the tension was clear between health data as a commercial asset for profit by some organisations, the role of public health providers such as the NHS in the United Kingdom, investment by government agencies, and the level of control retained by users themselves. Households, or ‘bubbles’ as they were termed in the pandemic, provided an appealing and comforting narrative in the context of growing health uncertainties, such as those associated with vaccination risk, the need to shield and protect extremely clinically vulnerable groups, and increased apprehension about policy-led decisions. The same bubble helped me feel a sense of protection: that I could provide for my family myself. What is striking, having reflected back over the book and the pets featured in the last chapter, is how passionately each of the households believed and invested in personal health responsibility. Growing fears about the pandemic translated into increased household tracking practices across generations, people, and even animals. I find myself thinking about how positive associations with tracking may obscure the recognition of emerging health anxieties, and an intolerance toward people who behaved differently from the household, people from whom other household members sought to differentiate themselves. The health tracking narratives reveal that households serve as a focal point of meaning for perceptions of responsibility and expected behaviour.
This may seem obvious, but research into household health dynamics has led to the expansion of reciprocal care, with adjustments in how commitments to the needs of dependent members were met within the home. What is viewed as ‘risky’ health behaviour is also being modified within households, while connecting outwards to new social movements and various forms of single-issue and identity politics (e.g. ‘fitspo’, food sustainability, anti-racism and gender politics), with health tracking helping to create new identities and challenge normative health images. My sense of self is being remade because of my health tracking. I believe that a future of household tracking that allows access to and understanding of personal data is now an essential part of people’s social identities and the prevention of life-threatening diseases. For my part, being immersed in a home environment of household tracking has begun to untangle some of the complexity surrounding the treatment of those who are temporarily or permanently dependent on others for care. Care is a crucial domain that starkly reveals the tensions between ill health and dominant societal values and roles, especially for women. The reader will quickly realise I am not happy with the increasing tendency to encourage profit from commercial health products. And readers will make their own judgements here. A version of this article was published on Medium.

I've been asked to talk about how to "enhance a global reputation" for this professional skills workshop.
Immediately, imposter syndrome shouted in my ears: why are they asking you? So here is some advice, for myself and others, on building a global reputation around your research, the projects you are passionate about, or anything else you wish to do well and gain prominence in, while imposter syndrome shouts loudly (and often convincingly) at you. My work focuses on identities in tech communities. For example, I've written extensively about the mislabelling of "women in tech". The BBC has featured my research, including Laurie Taylor's BBC Radio 4 programme Thinking Allowed, and articles in The Guardian, The Independent, and many other international media publications. I try to embody the notion that self-promotion is as much a promotion of scholarly work, including the communities I research, as it is an opportunity to enhance my own professional reputation. Unfortunately, this gem about self-promotion and other possible pearls of wisdom are lost to subsequent self-doubt. So, in acknowledging what channels to use for optimum reputation enhancement, we need first to recognise our capacity to feel that we are worthy of sharing our ideas. In terms of self-promotion (especially on social media), I have buckled under the nasty criticism of anonymous trolls who throw rebukes laced in misogyny and personal attacks. Self-promotion means being prepared to be vulnerable, or open to public attack, in very different ways from the defence of academic knowledge we are used to at conferences. Different perspectives and disagreements about research are exhilarating. Cyberstalking is terrifying. In the past, I have let systems and processes bury me into silence, temporarily at least. One example is asking for support from journalists and marketing teams who had published my research when a social media pile-on directed at me critiqued 'women in tech' as 'bitches' or 'catty'. There was very little support. I found myself, like the communities I research, once again silenced and singled out for attack. In the process of recovering my voice, I have had to face the reality that speaking out (or not) is just as much about me as it is about the communities I research and belong to. Being silenced as a scholar feels unjust. One way I have found to cope is to remind myself that silence is a strong theme in my research. In thinking about overcoming being silenced, turning to multiple channels to self-promote and engage with different groups has allowed me to connect with others and build interest in my work.

About Impostor Syndrome

Self-doubt is not unique to scholars. Nevertheless, for working-class scholars, disabled scholars, women scholars, immigrant and international scholars, our bouts with impostor syndrome (feeling as though we do not belong or are not as good as our colleagues) remind me of the importance of finding networks of support. Some of the best networks have been internal to my institution. For example, I've found solace in the MAMs (Mothers and Mothers-to-be) University of Durham network and other groups that operate around the academy. I am also a member of different supercomputing and women in tech communities who help support and promote research and women in leadership positions. These communities are deliberately closely allied with my research. In terms of building content and targeting channels, be aware that this is a personal decision as much as a professional one. Social media content occupies your personal space.
You create and respond to this content today in your home, alongside your loved ones. I encourage my fellow scholars to make this realisation a crucial part of their professional consciousness and to think about how to protect themselves from possible unwelcome intrusions or comments about their work, professional image, and even personal life. In building a public-facing professional brand, I have worked with journalists across the board and spent much personal time creating unique content on my website and social media. One comforting thought is that journalists do not care about imposter syndrome. Effective treatments for impostor syndrome, then, should entail raising one's consciousness and, ideally, engaging with and asking about institutional norms and policies. One method could be as simple as asking about the university's social media policy and strategies to protect your public profile. As an advocate and researcher of women in tech communities, of course, I follow Sheryl Sandberg - Facebook's COO. Sandberg speaks on the "lean in" philosophy. While I do not entirely agree with her conceit, I know for sure that my newfound consciousness, including linking the promotion of my professional work with the enhancement of the communities I belong to, has become a way to build a reputation.

Self-Promotion and Community-Promotion

Beyond recognising self-doubt, I force myself to accept invitations (if my schedule allows) as a powerful means of overcoming that initial hesitation. For example, I have just been featured as part of the SC21 (supercomputing) conference in a pre-recorded interview. The sole reason I accepted the invitation was that I forced myself to do it, ignoring the internal voice that pointed out that there are more successful and visible experts. Why would I push myself in the face of intense self-doubt? I push myself because the impostor syndrome I suffer from is the same pathology that limits and casts doubt in the minds of other scholars. I push myself because every time I decline an invitation, there is a good chance that another person like me will not be invited or will decline the invitation in my place. This is especially true for some of the large commercial tech events I attend, which lack diverse speakers or fail to make their events fully accessible. I push myself because this job will never be easy; academia is a demanding profession by design.

Concluding Thoughts

If you are already feeling self-doubt, and the twinge of guilt for turning requests down, with the stress of being overburdened by new demands, then the knowledge that your actions directly affect your communities is more pressure still. Notwithstanding, there is a positive flip side: promoting your scholarship and perspective helps promote your communities. Having this thought in the back of your mind will help alleviate self-doubt and suggests which channels to target for self-promotion. This is the remedy that is working for me.

Top Ten tips
What comes next? I'd like to see an ally skills workshop focused on advocating for one another and moving beyond the concept of 'virtuous rescue.' I don't require rescue. I require empowerment. We require empowerment.

Despite continued efforts to pretend otherwise, the new reality for many is work-from-home.
During the period of January to December 2019, 5.1% of the UK workforce mainly worked from home, compared to 4.3% in 2015, as reported by the Office for National Statistics (ONS). The sector with the highest proportion of homeworkers was information and communication, with 14% mainly working from home in 2019, and more than half of its workers having worked from home at some point. In terms of homeworking patterns reported by the ONS, those who occupied the most senior roles, such as managers, directors, and senior officials, were most likely to work from home (10%), followed by those in associate professional and technical occupations (8%) and administrative roles (6%). Before COVID-19, we might have speculated that women would form the majority of the home workforce; however, according to the ONS, men (11%) were more than twice as likely to work from home as women (5%). With home working now the ‘norm’ for many professionals, does this mean a radical shake-up concerning industry initiatives to support a more accessible and inclusive workforce? Or are separate professional conditions continuing to prevail in the home? As first reported by Forbes, Google, Facebook, Amazon, Apple, Slack, Microsoft, and the newly familiar Zoom have implemented new work regimes to allow employees to continue to work from home for the remainder of 2020. For those in the tech sector, remote working methods are more familiar than in other industries that are less confident in, or less invested in, software to support digital interactions. Prior to COVID-19, remote working practices within the tech sector could have been seen as innovations for other organisations and industries to adopt. However, today we risk conflating ‘remote working’ with the present condition of being forced to work from home, which is entirely different in terms of support, accessibility, and skills. My long-term research investigates the challenges of remote working within the tech sector and the opportunities for companies to implement change for a more inclusive workforce. Part of what the tech sector, and others, must deal with is the form that workplace support takes. In my most recent interviews with workers, there is anger about support that consists mostly of self-directed online learning to identify stress areas or take workers through basic meditation exercises. Speaking to workers in the civil service, I heard of similar methods of support introduced through lockdown. There is a significant tension here between top-down ‘support’ and what is really needed on the ground. Though it is early days, there are some clearly identifiable themes coming out of the current work-from-home conditions:

Proper investment in staff training that is accessible to all workers. One senior manager shared his experience of being the primary carer for his daughter. He was unable to attend online training days when he lacked childcare support at home. Obstacles to attending events are not new where caring roles are involved. One positive spin-out of the pandemic is the opening up of previously locked-down events/meetings/conferences. Through digital tech, I’ve been able to attend a virtual parliamentary civic briefing; participate in a conference previously out of my reach due to cost/travel/caring responsibilities; benefit from cybersecurity online training; and vote in union elections.
What is frustrating is that it has taken the restriction of workers to force this opening up, when the same level of accessibility could have been championed and supported a long time ago. What I hope is that, post-COVID-19, the same level of access will remain.

Agile professionals need support. By necessity, we are spending a substantial proportion of our day in front of screens. Mental health ‘check-ins’ and wellbeing tools are predominantly conducted through the screen. While technology enables immediate contact, it does not allow periods of rest or disconnection unless the user is able to put these in place. Speaking to an HR director, she shared her technology fatigue. Where the company had invested in a staff wellbeing app, this meant more time in front of a screen sharing personal details about sleep patterns and sense of self-worth. Such investment attends to some of the needs of the workforce – if they are interested in sleep tracking. However, it does very little to support new work patterns, roles, and fatigue.

Households are the new workforce. Where organisations have a contract with an individual concerning their duties and responsibilities, this does not translate easily into households. Inevitably, different burdens of care and ways of working become entrenched in the home. What is clear from recent media reports, and from speaking to individuals across sectors in the UK, is the difficulty of finding routines, especially when there are caring responsibilities for loved ones within the home. A not uncommon experience is feeling overwhelmed by professional tasks while the ways of sustaining relationships in the home spiral out of control. My own experience echoes that of many: caring for my four-year-old daughter, working full-time, and contributing to and running a household, without the time and space to perform properly in any of these areas, let alone download an app and record my sleep tracking. The acknowledgement here is that while individuals are employed by an organisation, it is the household that configures how we can conduct our professional roles at home.

Different career enhancement pathways. One of the main challenges now is dealing with the ‘unknown’. One area of growing unease and concern is the new set of barriers to career progression. This is particularly the case where workers are being asked to prioritise new areas of work, such as the generation of online content, over and above all other tasks. And while online training can provide a great deal of information about ‘how things work’, it is very difficult for those tools to positively enhance different ways of working, especially if those duties are not formally recognised within career pathways and promotion criteria. The push to ‘get online’ takes time and new skills, and requires proper recognition of what the end product should look and feel like.

[not] Taking time off. Simply put, stating that workers should use their annual leave won’t alone change the conditions of stress, fatigue, and fear. In short, while the required ‘leave’ can be recorded on an Excel spreadsheet, this does not reflect a period of rest for the individual. For others, it will not be possible to use their leave allowance. This is not about giving people ‘special treatment’, but about acknowledging that the current conditions are difficult and, in acknowledging this, understanding each other: these are hard times for all.

During the crisis, I continue to research the impact of remote working.
Yet this is with a growing unease, as I recognise and share the same challenges as those I interview. However, in deepening this narrative, we can underscore the sharp divide between work-from-home and remote working. Easing the burden of remote work, and enabling new and innovative ways of working in the future, will require a plethora of change, investment, and support beyond the household.