Let’s be honest about the terror. It is a specific, cold-sweat kind of fear. It isn’t the anxiety of a keynote speech or a grant application deadline. It is the fear of standing in front of 9- and 10-year-olds, chalk in hand (or whiteboard marker, let’s be modern), and being asked: “What is 7 times 8?”

I am a Professor. I research the intersection of technology and society. I navigate complex academic landscapes for a living. But I am also Autistic and Dyslexic. And to my brain, the times tables are not a logical sequence of numbers; they are a slippery, chaotic list of arbitrary facts that refuse to stay put. Trying to hold them in my short-term memory feels, as I admitted on LinkedIn recently, like trying to hold water in a sieve.

So, when I agreed to go into my daughter’s class to support their math session, I knew that I wasn’t really volunteering my time or expertise (ha). I was walking back into the scene of the crime: my own unstable education. My daughter is also neurodivergent, so this mission was deeply personal. I needed to show her, and her classmates, that math isn't just "short-term memory junk." I needed to prove that you can be bad at memorising but brilliant at thinking, or at least at getting to a point where you can work things out.

During the session, I introduced a specific activity called "Numbers in a Detective Story." We focused on a challenging multiplication table and turned it into a mystery to be solved. Each number became a character, and together we crafted a story to uncover how they interacted, much like uncovering clues in a detective novel. My group’s imagination far surpassed my own here, and we nearly ran out of time to complete the mystery of ‘The Four’ for the 4 times table. This approach helped bring down the stress of the multiplication process. In our group, math was a character in a story we controlled and were telling. Not scary. Funny and silly.

Return to Shrewsbury

In Dorothy L.
Sayers’s masterpiece Gaudy Night, the protagonist Harriet Vane returns to her Oxford college. She walks the cloisters haunted by the ghost of her own reputation and confronts a "Poison Pen". In Sayers’s story, a malicious force sends anonymous letters that target the scholars' deepest insecurities. The letters whisper: You are a fraud. You are unlovable. You do not belong.

For the neurodivergent learner, Rote Memorisation is our Poison Pen. And she is dipped in malevolence. A malicious voice in the back of the classroom that conflates "speed" with "intelligence." It tells the child who needs to count on their fingers that they are slow. That they should not do so. It tells the dyslexic student that, because they cannot sequence numbers in a list, they cannot understand the beauty of mathematics. It is a fundamental betrayal of intellectual integrity. [Aged eight years young, I was told I was “cheating” by using my fingers to work out the nine times tables.]

Standing to the side of the classroom this week, I felt the phantom weight of those accusations. But as Sayers’s hero, Lord Peter Wimsey, famously argues, the only antidote to the chaos of emotion is the clarity of truth. Or, rather: we needed to stop feeling bad about the numbers and start seeing the truth of them.

The Audacity of the Amateur Sleuth

And here, I must pause to acknowledge the sheer, breathtaking audacity of my own position. Who the hell do I think I am? I am a creature of the Ivory Tower, a dweller in the abstract lands of Higher Education, where we debate the ethics of AI over double espressos. I have attended fleeting sessions in a primary school classroom. I am a pedagogical tourist, wandering into a country where I do not speak the language, a land of carpet time and glue sticks, pointing at the local customs and saying, “I think you’ll find there is a better way to do that.” To the hardworking primary teachers who navigate this reality every day: I know how this looks.
It looks like the Lady of the Manor swooping in to tell the gardeners how to hold a spade. The tragedy is not that teachers don't see the problem. Many of them smell the rot just as clearly as I do. They know that rote memorisation is failing their neurodivergent students, and not only them. We (parents and teachers) are trapped in the 'closed circle', bound by the machinery of the curriculum, the schedule, and the looming oversight of Ofsted. It is very difficult for teachers and support staff to find the space to open themselves up to vulnerability, because authority in a primary classroom is a fragile currency.

But perhaps that is exactly why my silly intervention worked. In Gaudy Night, Harriet Vane is useful precisely because she is an outsider; she is not beholden to the Senior Common Room. She can ask the dangerous questions because she doesn't have to live with the consequences of the answers in the same way. My "audacity" stems from the specific freedom of the consulting detective. I could sweep in - festive jumper and all - as a safe, temporary disruption. I could afford to be "rubbish" at maths because my role does not depend on the outcome of the investigation. I could be the one to call a halt to the proceedings because I wasn't the one responsible for filing the paperwork the next morning. I saw the "Poison Pen" of rote learning not as a necessary evil, but as a hostile actor. Sometimes it takes an outsider to spot the evidence hidden in plain sight, simply because the local force is too exhausted by the procedural drudgery to look up from the case files.

The Detective Work

I did not go into that room armed with flashcards. I went armed with evidence. I crowdsourced the collective intelligence of my network to find the patterns hidden beneath the rote drills. The response was a vindication of the human mind over the mechanical method.
The Machine Cannot Hold You Safely in Failure

This brings me back to the argument I posited in my previous post: that technology cannot replace the "chair by the fire." If I had walked into that classroom with a suite of iPads running "MathBlaster 3000," the room might have been quieter. The children might have been seen to be “engaged," their faces bathed in the blue light of individual screens. But they would have been engaged in a closed loop of stimulus and response, a hermetic seal where the child struggles alone against the algorithm. I see this same tableau in my own lecture halls: students glued to laptops, ostensibly "capturing" the knowledge, yet profoundly unaware of the connection to learning happening in the actual room. They are present, yet absent; documenting the event without experiencing it.

The Poison Pen of the Algorithm

In Gaudy Night, the villain is the "Poison Pen", an anonymous force that targets the insecurities of the women scholars, whispering that they are unloved, unwanted, and out of place. For the neurodivergent learner, the gamified math app is our modern Poison Pen. It does not sign its name, but its message is clear. An app does not care why you got the answer wrong. It demands performance, not understanding. It mandates hyperfocus on getting everything correct, rather than supporting failure as a route to learning. Any app or AI reinforces the binary of Success and Failure, with the red cross or the green tick, leaving no space for the messy, beautiful middle ground where learning actually happens. Harriet Vane spends much of Gaudy Night defending the "intellectual integrity" of the scholar, but she eventually realises that facts without humanity are cold comfort. A machine can possess data, but it cannot possess integrity, because it cannot care about the truth; it only cares about the output.

The Pedagogy of Failure

The "hacks" we explored this week were not software patches; they were cognitive bridges.
But more importantly, they required a human foundation. They required me to stand there, stripped of my professorial armour, vulnerable and imperfect, and say the words that no AI will ever authentically say: “I am rubbish at this. But you are going to help me.”

Imposter Syndrome has been stalking me for a long, long time. This week, the fear that I was a fraud in a room full of 9- and 10-year-olds ceased to be a weakness. It became a way through something I find completely impossible. (In case you hadn’t realised, I can’t math.) When a child sees an adult struggle, the shame of their own struggle takes on a different meaning. Not that it goes away, but it shifts: away from shame, away from something to keep hidden. It’s ok that you can’t do something. We will work on this together! Then, the "Poison Pen" runs out of ink.

The Alchemy of the 12s

For our murder (I know, dark, right? But this is math), we staged the mystery of the 12 times table. We deliberately turned the abstract horror of 12 times 7 into a collaborative game of addition. We split the room. I told them: "The 12 times table is scary. It’s too big. So let’s break it. We don't do 12s. We do 10s and 2s." We all liked the 10s and 2s. One group became The Tens. Their job was easy, safe, and confident. 10 times 7? "Seventy!" they concluded. The other group became The Twos. Their job was effortless. 2 times 7? "Fourteen!" And then, the magic. We smashed them together. 70 + 14. The answer, 84, didn't come from a memory bank; it came from the room. It came from the collective effort of breaking a big, scary problem into small, human-sized pieces. An AI could have given them the answer in a millisecond. It could have "personalised" the learning pathway. But it could not have given them the feeling of solidarity. It could not have turned a room full of anxiety into a team of code-breakers. Plus, an AI wouldn’t know the depth of feeling around cake.
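For readers who like their detective work written down, the "Tens and Twos" hack is itself a tiny algorithm. A minimal sketch in Python (purely illustrative; nothing like this appeared in the classroom, where the children were the computers):

```python
def times_twelve(n):
    """Multiply by 12 without knowing the 12 times table:
    split the scary 12 into the friendly 10 and 2, then add."""
    tens = 10 * n   # The Tens group: easy, safe, confident
    twos = 2 * n    # The Twos group: effortless doubling
    return tens + twos  # smash them together

# The mystery of 12 times 7, solved by the room:
print(times_twelve(7))  # 70 + 14 = 84
```

The point is not the code but the decomposition: a fact too big for working memory becomes two facts that fit, plus one addition.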
This is what I mean when I say technology cannot replace the chair by the fire. The machine can verify the data, but only a human can validate the struggle. By admitting I was "rubbish," I didn't lose their respect; I gained an opening into a shared learning experience that helped me as much as it helped them.

The Verdict

In Gaudy Night, the resolution does not arrive with a dramatic arrest or a sudden confession. It arrives when Harriet Vane realises that the heart and the head do not have to be at war. She understands that one can possess deep feelings and rigorous intellect simultaneously; that admitting to vulnerability does not compromise one’s authority but rather secures it. She discovers that the "scholarly life" is not about cold detachment, but about a passionate commitment to the truth. I walked into that classroom terrified that I would fail my daughter. I carried the heavy luggage of my own educational trauma and the specific, creeping Imposter Syndrome that haunts every neurodivergent academic: the fear that, despite the title of "Professor," I am merely one missed times-table away from being exposed as a fraud. But I left realising that we had rewritten the rules of engagement.

The Ivory Tower vs. The Carpet

As practitioners in Higher Education, we often talk about "pedagogy" and "scaffolding" in the abstract air of lecture halls and policy documents. We spend a lot of time debating the ethics of Generative AI in seminars. But there is a profound disconnect between the theoretical landscape of the University and the visceral reality of a Primary School classroom. In Higher Ed, we often hide our struggles behind citations and polished slides. We present the finished product of our intellect. But nine-year-olds are natural-born deconstructionists. They do not care about the finished product; they care about the mechanism.
If I had relied on the standard tools of EdTech, the gamified apps that reward speed over comprehension, I would have failed them. Those tools are designed for the neurotypical brain that retains information like a sponge. For the neurodivergent brain, which holds information like a sieve, those tools are just another form of the "Poison Pen," reinforcing the message that if you aren't fast, you aren't smart.

The Human Algorithm

We proved that you don't need to have a "sticky" memory to be a mathematician. You just need to know how to hack the system. What we did with the "finger tricks" and the "doubling patterns" was not cheating. It was algorithmic thinking. We stripped the code of mathematics down to its source. We showed that 7 times 8 isn't a magic spell you have to memorise; it is a structure you can build. AI can give a student the answer to 7 times 8 in a nanosecond. It can generate a lesson plan for a teacher in ten seconds. But AI cannot model struggle. It cannot say, "I find this hard, too, so let's find a different way." When I stood there and admitted, "My brain doesn't hold these numbers," we all found that understandable. “Don’t worry, D’s mum, you will be as good as one day.” was the observation on my way out. High praise, indeed. That is the human API, the connection that allows data to actually transfer. By showing them my own "glitch," I gave them permission to have theirs.

Solid Ground

For my daughter and her classmates, seeing her mum - the Professor, with all the weight that character carries - using her fingers to calculate a sum was a lesson in detection. It was a demonstration of finding the clues and prioritising the evidence of the case file over the theatre of performance.
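A closing aside for the curious: the "doubling pattern" we leaned on is just as writable as the Tens-and-Twos trick. Multiplying by 8 is doubling three times, because 8 is 2 times 2 times 2. A hypothetical sketch of how that hack looks as code (my illustration, not a classroom artefact):

```python
def times_eight(n):
    """Multiply by 8 by doubling three times (8 = 2 x 2 x 2).
    No times table needed, just repeated addition."""
    result = n
    for _ in range(3):
        result = result + result  # double it
    return result

# The dreaded 7 times 8, built rather than memorised:
print(times_eight(7))  # 14 -> 28 -> 56
```

That is the structure you can build: three doublings instead of one slippery fact.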
Dedicated to the Year 5/6 class who taught me that the best way to learn is to admit you don't know.
Chapter One: A Seductive Thought

There is a sentiment circulating in the staff rooms and Substack threads of the educational world, a truth universally acknowledged by everyone except, perhaps, the procurement departments. It is a quiet resistance (though we are getting louder), often whispered over lukewarm coffee or typed furiously into WhatsApp groups at the end of a long term. It is the observation that “there isn’t a single problem ‘solved’ by EdTech that couldn’t be fixed with smaller classes led by well-paid teachers given real academic freedom.”

It is a seductive thought. It envisions a world where the solution to student engagement is not a gamified app flashing with dopamine-inducing badges, but a teacher with the time to look a child in the eye and notice they are fading. It suggests that the answer to crushing marking workloads isn't an AI grading bot that scans for keywords, but a timetable that allows a human being to read an essay with a cup of tea in hand, specifically not at 11 PM on a Sunday night. It imagines a system in which the "user interface" is a conversation and the "operating system" is trust.

Reviewing the programme for the recent TechAbility Conference, and speaking with the attendees in the margins of the event, I found myself viewing this tension through a distinctly literary lens. From here, we can shift from debating budgets or software licenses and turn, instead, to reenacting the central conflict of Jane Austen’s autumnal masterpiece, Persuasion. (Insight into how my brain works.)

For those who have left their classics on the shelf, Persuasion is a story of second chances, lost bloom, and the danger of listening to the wrong kind of advice. In the novel, our heroine, Anne Elliot, is persuaded by her well-meaning mentor, Lady Russell, to reject Captain Wentworth. (Yes, yes, he does wear very tight trousers.) The match is deemed "imprudent." (Not just because of those trousers.)
Wentworth has no fortune, no connections, and an uncertain future. He offers only love, vitality, and a meeting of minds. Instead, years later, Anne is pushed toward the slick, socially advantageous Mr Elliot—a man who says all the right things, possesses all the right data points, and holds the keys to the estate, but is ultimately hollow.

Today, the Education Sector is Anne Elliot. We are a profession that feels it has lost its "bloom," worn down by years of austerity and metric-chasing. And we are constantly being persuaded by our own Lady Russells—the policymakers, the consultants, the efficiency experts—that investing in the "Wentworths" is simply impossible. To hire enough teachers to reduce class sizes to fifteen? To pay them a wage that reflects their expertise? To give them the autonomy to deviate from the curriculum when a student’s eyes light up? Imprudent! Too expensive. Too risky. It lacks "scale." It cannot be plotted easily on a dashboard. It is a romantic notion, we are told, incompatible with the hard realities of the modern economy.

Instead, we are courted by our estranged cousins, the Mr Elliots of the world. Enter the shiny EdTech platforms, the Large Language Models, the predictive analytics suites. Like Mr Elliot, they are smooth, modern, and presentable. They promise to secure the estate's future. They promise "efficiency" and "personalisation at scale." They whisper that they can take the burden off our shoulders, automate the drudgery, and leave us free to be "facilitators." Imagine evenings and weekends, free! Oh, I must fan myself to calm such a happy countenance.

But, like Mr Elliot, this technological courtship often masks a cold, transactional void. We are being asked to trade the messy, expensive, unscalable vitality of human connection—the Captain Wentworth of it all—for a sleek system of inputs and outputs. We are building digital infrastructures that mimic the form of education without its soul.
We are creating a "future-proofed estate" where the lights are on, the data is streaming, but no one is actually home. The tragedy of Anne Elliot was that she allowed herself to be persuaded that prudence was a virtue, only to spend eight years in a state of regret, watching her life shrink into a small, silent room. The risk for us, as we stand on the precipice of the AI revolution in schools, is that we do the same. We risk allowing the logic of the machine to persuade us that the human element is a luxury we can no longer afford.

Yet, as I looked more deeply into the TechAbility conference speakers and spoke with participants, I realised the story is not quite as binary as "Tech vs. Human." Sometimes, Mr Elliot is a villain, but sometimes, technology is the carriage that brings Wentworth back to us. The question is not whether we use the machine, but who is holding the reins.

Chapter Two: The "Cyborg" in the Classroom

The friction between human connection and technological intervention was palpable in Richard Fletcher’s keynote, “Exploring Hybrid Help”. The title alone suggests the unease of our current moment. We are not simply using tools; we are drifting into a "hybrid" state where the boundary between personal aid and technological interference is becoming dangerously blurred. If EdTech is merely a way to manage the symptoms of an underfunded system—using GenAI to "personalise" learning because there are 35 children in the room—then the opening observation holds true. A smaller class would fix that. A teacher with time is the best personalisation engine ever invented. When we replace that human interaction with an algorithm, we risk what Fletcher alludes to as the loss of the "human loop." We are building systems that mimic the form of education—Mr Elliot, in his fine coat, without the soul of understanding.
The Rise of the Tryborg

Fletcher drew our attention to a critical distinction in the cyborg identity, referencing Jillian Weise’s concept of the "Tryborg". The "Tryborg" is the nondisabled person who adopts technology for efficiency, for fun, or for profit. They choose to extend themselves. They are the students using ChatGPT to write an essay in seconds; they are the administrators using AI to generate policy documents that nobody will read. These "Tryborgs" are not true cyborgs. They do not depend on the machine to "breathe, stay alive, talk, walk, or hear". For them, the technology is a shortcut, a way to bypass the cognitive struggle of learning. And this is where the danger lies.

The Closed Loop of Non-Cognition

We are currently constructing a closed loop of non-cognition. Fletcher highlighted the emerging risks of "cognitive debt" and the erosion of critical thinking. Consider the bleak absurdity of the modern classroom: a student uses an AI to generate an essay they haven’t written, and a teacher uses an AI to grade an essay they haven’t read. You do not need to persuade me that this is horrific for learning and humanity. The machine talks to the machine. The student gets a grade; the teacher gets a completed spreadsheet. It is a perfect, frictionless system. It is also a complete farce. This is the "Mr Elliot" of education: polite, polished, socially acceptable, and entirely hollow. As Fletcher noted, GenAI is "constitutively irresponsible"—it produces knowledge claims with no author to answer for them. When we invite this into the classroom, not as a tool but as a tutor, we are teaching our children that the appearance of competence is more valuable than the messy, difficult work of actual competence.

The Cost of Loneliness

But the cost is not just intellectual; it is deeply social. Fletcher warned of the "cost of loneliness" when artificial intelligence substitutes for human interaction.
Education is not just the transmission of facts; it is the "non-coercive rearranging of desire". It is a relational act. When we place a chatbot between the learner and the teacher, we sever that relationship. We create a "panopticon" (thank you, Foucault) of surveillance in which every keystroke is tracked, yet no one is truly watching. We risk creating a generation of students who are technically connected but profoundly alone, interacting with "sycophantic" bots that validate their errors rather than challenge their thinking.

In Persuasion, Anne Elliot is surrounded by people yet entirely alone in her understanding of the world. She sits in the drawing room, listening to the noise of the Musgroves and the smooth flattery of Mr Elliot, but her mind is elsewhere. We are building digital classrooms that replicate this isolation. We are filling the silence with the chatter of algorithms, mistaking data for connection. We must ask ourselves: are we using technology to bring us closer to the "Wentworths"—the authentic, challenging, human encounters—or are we using it to build a more efficient, automated solitude?

Chapter Three: The Exception
When Tech is Voice, Not Just Efficiency

However, to embrace the "smaller classes" argument entirely is to miss a crucial nuance—one that requires us to step out of the comfortable, wainscoted warmth of the Austenian drawing room and into the bracing reality of complex disability. If we remain solely in the debate about efficiency, we risk ignoring those for whom "efficiency" is irrelevant because access is the primary battle. There are problems that smaller classes alone cannot solve. There are silences that even the most patient, well-paid, and autonomous teacher cannot break without a machine to help them listen. In Persuasion, the horror of Anne Elliot’s life is her muted existence; she is present, but unheard. "She was only Anne," the novel tells us.
Oh, cutting. But in the modern classroom, some students face a silence far deeper than social exclusion.

The Command of the Gaze

Take Harchie Sagoo, whose keynote address, “I Lead, You Follow,” challenged the very premise of who is in charge of the educational narrative. Harchie has Cerebral Palsy. In a traditional setting, without technology, he might be viewed through a lens of passivity—a student to be "cared for," to be "managed." Yet Harchie uses a GridPad 13 with eye-gaze technology. For Harchie, a smaller class led by a well-paid teacher is wonderful, but it does not give him a voice. The technology does. In his presentation, Harchie described how his setup allows him not just to complete schoolwork but to exert agency over his world. He uses his eyes to answer the Ring doorbell to scare the postman. He uses it to turn off the shower when his father is midway through washing. These are not "learning outcomes"; they are acts of glorious, mischievous rebellion. They are the proofs of a personality imprinting itself on the world. From here, EdTech is not about "efficiency"—it is not Mr Elliot trying to streamline the estate. This is EdTech as liberation. It transforms the user from a passive recipient of care into a leader who can, quite literally, tell the world to "follow."

The Voice from the Silence

If Harchie represents the power of the visible gaze, Dr Rosie Woods took us into the realm of the invisible. Her session, “Giving a voice to those who cannot speak,” highlighted the frontier of sub-vocal speech recognition for people with Profound and Multiple Learning Disabilities (PMLD). Dr Woods challenged the assumption that people with PMLD are "pre-linguistic" simply because they cannot articulate sounds. She introduced us to the concept of "sub-vocal speech"—the silent, internal speech that occurs in the brain and muscles even when no sound is produced. Using specialised microphones and software, her team recorded and amplified this internal voice.
The results were striking. One participant, Lizzie, who was previously unable to communicate clearly, was recorded saying: "I can’t write… but I can talk. I know what’s planned, I feel safe". Pause on that for a moment. “I know what’s planned.” No amount of academic freedom, no reduction in class size, and no amount of teacherly intuition can decode sub-vocal speech without the hardware. Without the tech, Lizzie is trapped in a room with no doors. With the tech, the door opens. Here, the technology is not a replacement for the human; it is the bridge to the human. It is the only thing that allows the "well-paid teacher" to actually do their job: to listen.

The Access Paradox

This brings us to the Access Paradox. The critique of EdTech in Chapter One stands firm on the neurotypical, mainstream experience: we do not need AI to grade essays or generate lesson plans that a human should craft. That is "lazy" tech. But for Harchie and Lizzie, technology is the "Wentworth" factor. It is the vessel of their vitality. It is the tool that allows them to reclaim their "bloom." To dismiss all EdTech as a neoliberal ploy to replace teachers is to inadvertently condemn these students to silence. We must distinguish between the technology that automates the human experience (bad) and that which enables it (essential). The former is a cage; the latter is a key.

Chapter Four: The Synthesis
Tech Needs the "Wentworth" Factor

So, where does that leave our original provocation? If we accept that technology is essential for access (as Harchie and Lizzie demonstrated), does that mean we must submit to the hollow efficiency of the "Mr Elliots"? Must we accept the premise that machines should replace the expensive, messy work of human teaching? Not at all. In fact, the evidence from the conference suggests that the original blog prompt was half-right. Technology does not solve problems in a vacuum.
It fails spectacularly and expensively when treated as a replacement for human expertise rather than as a tool that requires more of it. The answer lies in the Kingspark School case study presented by Paula Kane and Eimer Galloway. Their journey offers a blueprint for what happens when you stop buying "solutions" and start investing in souls.

The Investment in Character

Kingspark faced a familiar dilemma. They had the technology—the DriveDecks, the switches, the hardware—but it wasn't being used effectively. The "Mr Elliot" of the situation (the shiny equipment) was present, but the relationship was cold. Why? Because the staff lacked confidence. They were paralysed not by a lack of desire, but by a lack of support. Their solution was not to buy more software. It was to invest in the "Wentworth" factor—human competence, constancy, and autonomy. They secured funding not for gadgets, but for a person—specifically, an Assistive Technology Team Leader. They understood that technology is inert without a champion. They established a "Community of Practice", a dedicated space for staff to share knowledge, mirroring the camaraderie of Wentworth’s naval officers rather than the isolated competition of the Elliot family. Crucially, they listened to their staff, who demanded "interactive and functional training that takes place in directed time". They realised that you cannot learn to wield these powerful tools in the margins of a frantic day. They ring-fenced time. They prioritised "hands-on" experience. They proved that for technology to work, schools need exactly what the original blog prompt demanded: time, autonomy, and specialised roles. Look what flexibility and time can do!

The Map and the Territory

This necessity for human rigour is reinforced by the systemic work of Rohan Slaughter and Tom Griffiths in their presentation, “Developing an AT Competency Framework”. If Kingspark provided the narrative, Slaughter and Griffiths provided the map.
They argue that we cannot simply drop tools into a classroom and expect miracles. That is the "Mr Elliot" approach—all surface, no substance. Instead, we need a "training ecosystem". Their framework breaks down the necessary human skills into four distinct phases: Assessment, Provisioning, Ongoing Support, and Review. Note that the technology itself is only a fraction of this cycle. The rest is human judgment, human observation, and human adaptability. They highlight that "AT is not the prevail of one particular job role – everyone has a role". This dismantles the idea of the "plug-and-play" solution. It suggests that true technological integration requires a "Captain Wentworth" level of discipline and skill. It requires a professional class who are not merely "users" of a system, but masters of it.

The Piano and the Pianist

The synthesis of these arguments brings us to a singular truth. The technology did not "solve" the problem at Kingspark in isolation. The technology was merely an instrument, like a fine piano sitting in a drawing room. It required a pianist with the training, the time, and the passion to practice. When we view EdTech through this lens, the conflict between "tech" and "teachers" dissolves. We do not need fewer teachers; we need more teachers, and we need them to be more highly skilled than ever before. We need them to be the "Wentworths" who can navigate the complexities of sub-vocal recognition and eye-gaze calibration with the same confidence that they navigate a curriculum. The danger is not the technology itself. The danger is the "persuasion" that the technology allows us to be cheap. The danger is believing Mr Elliot when he says we can fire the pianist because the piano can play itself.

Chapter Five: The Second Spring

At the very end of Persuasion, Anne Elliot is granted what the narrator calls a "second spring" of youth and beauty.
Crucially, this renewal does not come because she has acquired a new accessory, or a better carriage, or a more efficient way to manage her household accounts. It comes because she has reclaimed her connection to Captain Wentworth. She has chosen the difficult, vibrant, human path over the safe, calculated hollowness of Mr Elliot. (It’s the trousers.)

The lesson for us, as we navigate the noisy marketplace of modern education, is that EdTech and "Human Tech" (teachers) are not binary opposites, though they are often sold as such. We are constantly subject to the same "persuasion" that plagued Anne. We are persuaded to buy the software because it is cheaper than hiring a teaching assistant. We are sold the chatbot because it is easier than reducing the caseload. We are told that if we just adopt the right platform, the structural cracks in the walls will cease to matter.

The Inertia of the Machine

But the evidence from TechAbility 2025 shatters this illusion. It proves that the most powerful technology is utterly inert without the warmth of human expertise to animate it. Consider the work of Dean Hall at Treloar’s. His session on 3D printing was not a paean to the printer itself—a machine of plastic and heat. The "miracle" was not that the machine could print a joystick knob; the miracle was that Dean, with his engineering background and human empathy, could design a bespoke "magnet assessment knob set" to allow a specific child to drive their own wheelchair. The printer is just a tool; Dean is the architect of access. Consider Kirsty McNaught’s work on block-based coding. The software existed, but it was full of barriers—drag-and-drop interfaces that locked out eye-gaze users. It took a human expert to dismantle those barriers, creating a "keyboard accessible" bridge so that a physical disability does not preclude a digital education. And consider Harchie Sagoo and Dr Rosie Woods. The technology—the GridPad, the sub-vocal sensors—was the vessel.
But the cargo was the human personality. The technology did not replace the need for connection; it created the possibility of it. As Harchie’s presentation title reminds us, the goal is not for the machine to lead, but for the human to say: "I Lead, You Follow". Holding Out for the Real Thing There isn't a single problem solved by EdTech alone. A 3D printer in a cupboard solves nothing. An eye-gaze camera without a trained therapist is just expensive glass. But there are miracles achieved by EdTech when it is placed in the hands of a teacher who has been given the freedom, the time, and the support to use it. When we invest in the "Wentworths"—the staff, the specialists, the time to care—the technology sings. We must stop letting the Mr Elliots of the tech world persuade us that they can replace the heart of the profession with a dashboard. We need to stop apologising for the cost of human expertise. We need to hold out for the real thing. Only then will education see its second spring. With sincere thanks to the presenters and attendees at TechAbility 2025 for their insights, and particularly to Harchie Sagoo for reminding us that while technology is the tool, independence is the goal. I learned so much. Dedication To you who persuaded me to pick up books again. Thank you for cracking the spine of stories I thought were shelved and for proving that while the machine processes the text, it takes a human to find the subtext.
Trigger Warning: A Detective’s Notes on Joey Barton’s War Against Women

Chapter One. The Monday Morning Drop
I didn't want to open the file. You know the type: it smells like stale beer and fragile egos before you even read the first page. But in my line of work, you don't get to look away just because the details turn your stomach. The digital street corner known as X doesn't sleep, and neither do the ghosts haunting its servers. The subject was Joey Barton. Ex-footballer, ex-manager, current loudmouth-for-hire in the attention economy. The dossier on my desk was thick with the kind of vitriol that stains your fingers. He had reinvented himself from a midfield enforcer into a self-styled 'culture warrior', a general in the anti-woke brigade. The brief was simple, but the implications were messy: track the fallout of a man who decided that his retirement hobby would be tearing down women in sport. In the academic journals we call these 'Trigger Events': moments that ignite larger conversations about misogyny and systemic violence in digital spaces. A sterile, white-coat term for what is essentially a digital drive-by. Like a private investigator digging through the trash of a corrupt city official, my team and I scraped the data. We pulled thousands of posts, looking for patterns in the noise. What we found wasn't just "trolling" or "banter." It was a coordinated, ballistic hit job on the very idea of women occupying space in the game. The file listed three primary targets, each chosen with the precision of a predator looking for a soft underbelly. First, there was Mary Earps. She was the golden girl, the Lioness, fresh off being crowned Sports Personality of the Year in December 2023. A moment of national validation. But Barton couldn't stand the shine. He clocked in to dismantle her, calling her victory "nonsense" and sneering at the audacity of "A Women's Goalie" taking the spotlight. He didn't just critique her game; he attacked her biology and dignity, calling a world-class athlete a "big sack of spuds".
He boasted he could score "100 out of 100 penalties" against her, reducing her professional excellence to a playground bet he would win "twice on a Sunday". It was a classic shakedown: strip the woman of her accolades until she is just an object of ridicule. Then kick her again. Then the target shifted to Eni Aluko. This was uglier. This was where the file got heavy. Aluko is a former professional, a pundit, a woman who knows the game in her bones. But Barton didn't see a colleague; he saw a threat. He launched a campaign of "misogynoir," that toxic cocktail of anti-Black racism and sexism. He compared her and fellow pundit Lucy Ward to Fred and Rose West, invoking the names of notorious serial killers to describe two women talking about football tactics. He accused them of "murdering" the listeners' ears. He dipped into the oldest, dirtiest inkwell of misogyny, implying she had "slept her way" to the top and "violated marriages" to get her seat at the table. The harassment was so severe, so relentless, that Aluko admitted she was scared to leave her house, effectively exiled from public life by a man with an iPhone and a grudge. Finally, there was the kid. Ava Easdon. A seventeen-year-old goalkeeper for Partick Thistle. She made a mistake in a cup match, the kind of error every young player makes on the road to greatness. But Barton didn't offer grace; he went for blood. He posted a critical takedown of a schoolgirl to his millions of followers, creating a pile-on that shifted the atmosphere from sporting critique to child bullying. When the public recoil hit him, he didn't blink. He escalated. He labelled the women’s game "Lesbo-ball," weaponising homophobia to degrade a teenager. I looked at the timestamps. I looked at the engagement numbers. This wasn't an isolated incident; it reflected a systemic pattern in which misogyny is amplified by online algorithms, revealing how digital culture sustains that violence.
Barton was acting as a 'misogyny influencer,' broadcasting hate because the algorithm rewards engagement, regardless of the human cost. He was the ringleader of a digital mob, and these women were the collateral damage in his war for relevance. I poured a black coffee and started typing. It was going to be a long week. Chapter Two. Three Bodies of Evidence The investigation focused on three specific incidents. Call them the crime scenes. We laid them out on the corkboard, connecting the threads with red string until the picture was undeniable. The first piece of evidence was Mary Earps. The date was December 19th, 2023. She had just been crowned Sports Personality of the Year, a moment of gold-plated validation for a goalkeeper who had practically carried the nation’s hopes in her gloves. But Barton couldn’t stand the shine. He clocked in immediately, dismissing the victory as "f****** nonsense" and sneering at the idea of "A Women's Goalie" taking the pedestal. He didn’t just critique the award; he dismantled the woman. He called a world-class athlete a "big sack of spuds," an insult designed to strip away her athleticism and reduce her to something lumpy and inert. He bragged he could score "100 out of 100 penalties" against her, dismissing her professional excellence with the casual cruelty of a man who thinks his own opinion is a physical law. It was a classic opening gambit: humble the target, delegitimise the achievement, and wait for the mob to applaud. Then he went after Eni Aluko. This was uglier. This was where the file turned from a harassment case into something visceral. In January 2024, Barton locked his sights on the former professional and current pundit. He didn’t just critique her analysis; he reached into the darkest corners of British criminal history. He compared Aluko and her colleague Lucy Ward to Fred and Rose West, the notorious serial killers who buried bodies under their patio. Think about that.
He invoked mass murderers to describe two women talking about football tactics. It was violent, hyperbolic rhetoric designed to dehumanise, to paint them as monsters infiltrating the beautiful game. He didn’t stop there. He dipped his pen in the ink of old-school misogyny, accusing female pundits of "violating marriages" and implying they had "slept their way to the top" to gain their positions. The fallout was exactly what you’d expect from a hit this precise: Aluko later admitted she was "scared to go out," effectively exiled from public life by a digital terror campaign. But the one that really made me want to pour a stiff drink (even though I am teetotal) at 10 AM was Ava Easdon. March 2024. A seventeen-year-old goalkeeper. A kid. She makes a single mistake in a cup match, the kind of error that serves as tuition for every young player, and Barton descends like a vulture. He didn't offer veteran wisdom; he delivered a bully's scorn, mocking a minor to his millions of followers. When the press and the girl's father called him out for punching down, he didn't back down. He doubled down. He escalated the rhetoric into open bigotry, branding the women's game "Lesbo-ball". He took a teenager's bad day at the office and turned it into a referendum on her sexuality and her right to exist on the pitch. This isn't "banter." It isn't "opinion." It’s a strategy. It is a calculated series of strikes designed to signal to every woman in the sport: You are not safe here. You will be ridiculed. I will squash you. Chapter Three. Decoding the Glyphs: Emoji Violence In the smoke-filled rooms of the old noir paperbacks, the threat arrived in a jagged ransom note, letters sliced from magazines to hide the sender's hand. Today, the threat arrives in bright yellow pixels, beaming directly into your palm. It looks like a cartoon, but it cuts like glass. One of the most insidious patterns we uncovered in the Barton file was the systematic weaponisation of these symbols.
We termed it "Emoji Violence". To the untrained eye, or the willfully blind moderation bot, a snowflake or a crying-laughing face looks innocuous, a splash of colour in the grey text. But in the context of the manosphere, they hide the digital dog whistle behind a joke. They are the secret handshake of a mob gathering its stones. Barton, the ringleader of this digital circus, has mastered this lexicon. He repeatedly deployed the snowflake emoji, a slang term repurposed to label his critics, and, by extension, women who ask for respect, as fragile, weak, and "too easily upset". It is a dismissal intended to prevent the witness from testifying. But the code got darker. We tracked the use of the aubergine emoji. On dating apps, it is a flirtation; in Barton’s hands, it was a slur. He used it to allege that female pundits had "slept their way to the top," reducing their hard-won professional expertise to a transaction of flesh. It is a way to call a woman a whore without tripping the profanity filter. The mob took his cue and escalated the violence. We found knives, guns, and bombs paired with female-identifying emojis: direct death threats smuggled into the timeline under the guise of pictorial slang. We saw symbols of fear and anxiety weaponised to intimidate. We saw the "shush" emoji used not to ask for quiet, but to enforce silence, to tell women that their voice was unauthorised in this space. We saw animal emojis used to dehumanise, stripping the targets of their humanity until they were just game to be hunted. Even the "poo" emoji was weaponised, smeared across posts to visually degrade the quality of women's football and the women who play it. Barton's content is a code of silence and intimidation, a sophisticated cruelty that allows abusers to smuggle threats past the algorithmic gates that are supposed to keep the peace.
The visual nature of these symbols amplifies the hate, drawing the eye and fueling the spread of the violence far faster than text alone. It is the digital equivalent of a brick thrown through the front window in the dead of night—deniable, perhaps ("it's just a picture"), but the message shattered on the living room floor is crystal clear: We know where you live, we hate that you are here, and we want you out. Chapter Four. The Deep Rot: Misogynoir Sara Paretsky’s V.I. Warshawski knows that corruption is rarely a single layer deep. In the Chicago underworld, if you find a crooked cop, you usually see a crooked judge standing right behind him. The digital beat is no different. Scratch the paint off the misogyny, and you typically find the rusted iron of racism waiting underneath. When we pulled the thread on the attacks against Eni Aluko, the investigation took a darker turn. We weren't just looking at sexism anymore. We were looking at misogynoir. Though the term may sound like seminar-room jargon, it is a precise, forensic label for a specific type of violence. Coined to describe the unique, toxic intersection where anti-Black racism meets sexism, misogynoir is the distinct brand of hatred reserved for Black women. And Joey Barton weaponised it with the precision of a man who knows exactly which buttons to push to incite a lynch mob. The file on Aluko showed that Barton sought not only to question her competence but to erase her legitimacy entirely. He framed her not merely as wrong, but as an alien invader in the white, male sanctuary of football punditry. He played into centuries-old colonial tropes, casting her as the "aggressive" or "uppity" Black woman who had risen above her station. The rhetoric was suffocating. Barton and his followers repeatedly deployed the "diversity quota" argument, claiming Aluko only held her microphone because of "woke box-ticking" rather than her 102 caps for England or her law degree.
But the ultimate weapon in his arsenal was the "race card." When Aluko or her defenders pointed out the racial undertones of the abuse, Barton flipped the script. He accused her of "playing the victim," a classic gaslighting tactic used to silence Black women when they dare to speak about their own oppression. By framing her reaction to racism as a manipulative ploy, he effectively stripped her of the right to her own defence. This is the grim reality of the "intersectional violence" we mapped. The hate doesn't just add up; it multiplies. The "Rose West" comparison we noted earlier wasn't just a shock tactic; in the context of misogynoir, it was a brutal dehumanisation designed to place a Black woman outside the boundaries of human empathy. The damage was tangible. In a noir novel, the victim might end up in the hospital. In this digital thriller, the violence was psychological but no less real. Aluko, a veteran of the pitch, was forced to flee the country, admitting she was "scared to go out" for fear of her physical safety. The digital mob Barton unleashed had successfully hunted her out of the public square. This wasn't just "mean tweets." It was a displacement event. It was the "deep rot" of the system exposed—a reminder that for Black women in sport, the cost of visibility is often their own peace. Chapter Five. The Verdict I tossed the Barton file onto the desk. It landed with a thud heavier than the paper it was printed on, displacing the stale air of the office. The investigation was closed, the evidence catalogued, and the patterns undeniable. But in this line of work, knowing the truth and seeing justice are two very different things. The data was conclusive. Joey Barton isn't an outlier, a rogue operator, or a "bad apple." He is a feature, not a bug, of a system designed to monetise cruelty. We identified him in the report as a "misogyny influencer". That’s the academic term. On the street, you'd call him a grifter.
He is a man who has realised that in the current economy of attention, hate pays better than analysis. He broadcasts abuse because the algorithm—that great, invisible fence for stolen dignity—rewards engagement regardless of the cost. The verdict? He walks. That’s the horror of this particular noir story. There are no handcuffs at the end of this chapter. No judge is banging a gavel. Barton is still out there, phone in hand, presiding over the "Manosphere", a digital subculture that is loud, angry, and terrified of its own obsolescence. He frames women in sport not as athletes or colleagues, but as invaders in a sacred male space, treating the pitch as a fortress that must be defended against the encroachment of diversity. But while he counts his likes and retweets, look at the bodies left in his wake. Look at Mary Earps, a world-class professional reduced to a punchline about vegetables by a man who couldn't handle her shine. Look at Ava Easdon, a seventeen-year-old kid who had to learn the hard way that a grown man with a verified checkmark feels entitled to bully a minor for "content". And look, most hauntingly, at Eni Aluko. She didn't just log off; she fled. The relentless campaign of misogynoir, the comparison to serial killers, the accusations of sexual impropriety, and the erasure of her professional merit forced her to leave the country for her own safety. That is the physical toll of this digital violence. The content appears as pixels on a screen, but it translates into real fear, real displacement, and real silence. The platforms that host this carnage? They act like the crooked casino owners of old Chicago. They claim neutrality while raking in the vigorish from every fight that breaks out on their floor. They amplify the "Trigger Events" because outrage keeps the users glued to the screen, creating a contagion effect that spreads the vitriol faster than we can track it.
But here is the thing about investigations: once you have the evidence, you can’t unsee it. We know now how the machinery works. We know that online abuse is a "virtual manhood act," a desperate performance of masculinity for an audience of other angry men. We need better policies. We need platforms that stop acting as safe harbours for hate speech and start treating safety as a human right. We need to strip the profit margin away from the misogynistic influencers. Until then, I’ll keep my running shoes on. The Barton file is closed, but the server farms are still humming, and the next drive-by is already being drafted in a notes app somewhere. The system is rigged, the game is dirty, but I’m not walking away. The beat goes on, and there are more files to open.
Chapter One. The Estate of Pierce Inverarity Have you ever had the unsettling experience of reading Thomas Pynchon? You really should. It is neither pleasant nor fast; it is confusing, labyrinthine, and slow. But try. (It is, at least, a very short novel.) His narratives often blend voice and intention until you get lost, and this is precisely the vertigo I feel regarding the rush towards a 'golden age' of AI. I stand, much like Oedipa Maas at the beginning of The Crying of Lot 49, staring down the slope of a new and sprawling legacy. But instead of the grid of San Narciso, with its printed circuits and hieroglyphic streets, we confront the interface of a Large Language Model (LLM). We have been named executrix of a chaotic inheritance, a technology that promises everything and explains nothing. In the novel, Oedipa returns home from a Tupperware party, a scene of unsettling suburban banality, to find she has been made responsible for the estate of her former lover, the wealthy and shadow-casting Pierce Inverarity. She is not a lawyer; she is not a tycoon. She is a woman who, until that moment, felt her life was a "Rapunzel-like" confinement in a tower of her own boredom. Suddenly, she is tasked with untangling a web of assets that seems to encompass all of America: stamp collections, factories, motels, and secret societies. We stand on the same precipice. We have returned from the digital equivalent of a Tupperware party, our scrolling, our emailing, our basic digital lives, to find that the tech giants have died (or rather, disrupted themselves) and left us the keys to the kingdom. We are the executors of the entire internet's knowledge, compressed into a single blinking cursor. Like Oedipa, we feel a strange, jolted duty to organise this mess. We assume the role of the executrix not because we are qualified, but because the will was read, and our name was on it. Oedipa's motivation is not greed; it is a desperate need to find a pattern in the noise.
When she looks down at the city of San Narciso, she sees it as a "printed circuit," a hieroglyph that surely, if she just looked hard enough, would reveal a "transcendent meaning." The city seems just real enough, yet its reality stays uneasily beyond the reader's grasp. This is precisely the sensation of the modern "Prompt Engineer." We gaze at the blank face of the AI and convince ourselves that if we just find the correct incantation, the right acronym, the proper sequence of R-T-F or S-O-L-V-E, the circuit will close, and the meaning of the legacy will be revealed. Into this chaos step the modern consultants, the influencers, clutching their maps. They tell us, as noted in a recent viral post, that the population is divided. There are the "90% of ChatGPT users" typing into the void with fundamental ignorance, and then there are the Elect, the "other 10%" who are using it to "print money in their sleep." The distinction, we are told, lies in the code. Not a software code, but a linguistic one. A set of frameworks designed to tame the stochastic beast. The image accompanying this proclamation presents eight sigils, each an acronym such as R-T-F (Role, Task, Format) and D-R-E-A-M (Define, Research, Execute, Analyse, Measure). They are presented not merely as tips, but as the liturgy required to access the machine's grace. If you can just arrange your words into the shape of R-I-S-E, the "exponential leverage" will flow, and the tower of boredom will finally fall. Chapter Two. Maxwell's Demon and the S-O-L-V-E Framework Deep within the paranoid architecture of The Crying of Lot 49 lies the Nefastis Machine, a device containing Maxwell's Demon. Pynchon presents this theoretical intelligence as a tiny sorter tasked with the impossible labour of defeating the second law of thermodynamics by separating fast molecules from slow ones to create a perpetual cycle of energy without heat loss.
The modern obsession with prompt engineering reveals itself as a digital reenactment of this thermodynamic fantasy. We seek to build our own Demon within the chat interface, believing that the correct sequence of words might finally extract pure order from the chaotic swirl of the internet. If Maxwell's Demon represents the thermodynamic fantasy of the era, the prompt engineer represents a revival of an older, more theatrical deception: the Mechanical Turk. In the late 18th century, Wolfgang von Kempelen dazzled the courts of Europe with a chess-playing automaton, a turbaned mannequin that appeared to defeat human opponents through pure mechanical logic. In reality, it was a hoax; a human chess master was crammed inside the cabinet, guiding the mannequin's hand by candlelight. The modern practice of prompt engineering effects a curious reversal of this illusion. We are no longer the audience marvelling at the machine; we have become the human operator squeezed inside the box. When we employ frameworks like S-O-L-V-E, we are contorting our natural language into the rigid, uncomfortable shapes of "Situation," "Objective," and "Vision" to ensure the machine functions. We provide the logic, the context, and the strategic foresight, performing the cognitive heavy lifting while cramped within the narrow cabinet of the prompt window. The AI takes the credit for the checkmate, but it is the human user, twisted into the posture of a bureaucrat, who is actually moving the pieces. Consider the rigid geometry of the S-O-L-V-E framework, which demands the user delineate Situation, Objective, Limitations, Vision, and Execution. These acronyms serve as bureaucratic incantations intended to filter the heated, hallucinogenic potential of the Large Language Model into the cold, orderly work of capital.
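To see how little machinery is actually involved, here is a minimal sketch of the S-O-L-V-E pattern in Python. The function name, field names, and example values are my own illustration, not any vendor's API; there is no official S-O-L-V-E library, and the "framework" amounts to labelled string concatenation:

```python
# A minimal sketch of the S-O-L-V-E prompt pattern. Illustrative only:
# the "framework" is simply five labelled lines of text.
def solve_prompt(situation: str, objective: str, limitations: str,
                 vision: str, execution: str) -> str:
    """Assemble a Situation-Objective-Limitations-Vision-Execution prompt."""
    sections = [
        ("Situation", situation),
        ("Objective", objective),
        ("Limitations", limitations),
        ("Vision", vision),
        ("Execution", execution),
    ]
    # Join each labelled section onto its own line.
    return "\n".join(f"{label}: {text}" for label, text in sections)

# Hypothetical usage: the content of each field is invented for illustration.
prompt = solve_prompt(
    situation="A small bakery wants more online orders.",
    objective="Draft a one-week social media plan.",
    limitations="No paid advertising; two posts per day maximum.",
    vision="A friendly, local-first brand voice.",
    execution="Return a day-by-day plan.",
)
print(prompt)
```

Five f-strings and a join: the entire secret liturgy, rendered as the office stationery it always was.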
The framework promises that a sufficiently specific "Vision" combined with strict "Limitations" will bypass the messy friction of actual thought to produce a frictionless automation of workflows. There is a distinct, almost tragic irony in applying these stiff corporate methodologies to a machine built on probability. The user attempts to shackle a psychedelic mirror to the grid of 1950s middle management. By commanding the infinite latent space, a high-dimensional manifold of semantic relationships, to role-play as a "Commercial Director" via the R-I-S-E method or a "Brand Strategist" through R-T-F, the user forces the sublime and terrifying chaos of the model into the beige suit of a mid-level executive. This effort parallels the "crying" of the lot itself. In legal and auctioneering terms, the crying represents the vocal assertion of value and finality over a collection of discarded debris. Oedipa Maas wanders through the wreckage of Pierce Inverarity's estate, overwhelmed by the sheer volume of unconnected things, waiting for the auctioneer to cry the lot and impose a binding definition upon the confusion. The prompt engineer acts as this auctioneer. They shout their frameworks into the void, engaging in a hysterical sorting of molecules to produce "qualified inbound leads" while ignoring the encroaching night of total entropy. Chapter Three. The Trystero and the Digital Elect The most Pynchonesque element of this new movement resides in the class anxiety it diligently cultivates. The viral proclamation separates the world into a stark binary that mimics the theological division between the Elect and the Preterite - the chosen few and the passed over. It posits a hidden layer of reality where a "smart" ten per cent operate a clandestine machinery of wealth, while the unenlightened ninety per cent wander the streets of the internet, posting their basic queries into government-approved boxes and receiving only silence in return. This is the new Trystero. 
In the novel, the Trystero is a secret postal network used by the marginalised to communicate outside the official monopoly. Here, the dynamic is inverted: the secret network belongs to the "high performers." Those inducted into this underground possess the frameworks as if they were passkeys to a shadow economy. They wield C-A-R-E (Context, Action, Result, Example) and T-A-G (Task, Action, Goal) not merely as organisational tools but as the alchemical formulas required to transmute the leaden text of a chatbot into the gold of exponential leverage. The act of typing ceases to be communication; it becomes a ritual invocation of a hidden order. This reveals the distinction between the "smart" and the "basic" user to be less a division of skill and more a revival of the Cargo Cult. Richard Feynman famously described the post-war Pacific islanders who, having witnessed the material abundance brought by military aircraft, constructed elaborate mock airstrips from bamboo and straw. They carved headphones from wood and stood in makeshift control towers, waiting in faithful silence for the planes to return. They had perfectly replicated the technology's form while remaining entirely ignorant of its mechanism. The modern user constructs similar effigies out of language. Rigid acronyms like R-A-C-E serve as the digital equivalent of the bamboo control tower; users mime the structure of computer code in the superstitious belief that if the liturgy is performed correctly, the "cargo" of intelligence will descend from the latent space. Such divisive rhetoric fosters a pervasive paranoia that the actual signal remains forever just beyond the threshold of perception. Oedipa Maas found herself haunted by the image of a muted post horn scrawled on latrine walls and sidewalk surfaces. The modern user stares at the D-R-E-A-M framework (Define, Research, Execute, Analyse, Measure) with the same fervent suspicion, convinced it contains the encoded map to salvation. 
The belief takes hold that the market's chaos will align into a perfect vector of profit if only the correct acronym is whispered into the machine. This phenomenon illustrates Jean Baudrillard's dark prophecy concerning the precession of simulacra. Baudrillard argued that in the postmodern condition, the map no longer depicts the territory; rather, the map precedes and engenders the territory. The viral infographic acts as precisely this sort of hyperreal cartography. The distinct demographic of the "Top 10% of Super Users" did not exist as an empirical reality until the influencers drew the lines of demarcation. These digital cartographers invented a class system solely to sell the navigation tools required to ascend it. The users scrambling to master R-T-F are not uncovering a hidden truth about AI; they are desperately attempting to become the territory depicted on the slide. They seek to inhabit a demographic that is nothing more than a marketing hallucination, proving that the simulation of competence has finally become more lucrative than competence itself. Oedipa eventually wonders whether she has stumbled upon a real conspiracy or is merely projecting meaning onto static, much like a digital Hamlet driven to madness by the ambiguity of signs. The prompt engineer faces an identical vertigo. They seek to organise the sprawling, hallucinatory output of the AI into the rigid columns of R-A-C-E, hoping that structure will save them from the void. Yet the suspicion remains that the "Top 10%" is less a statistical reality than a shared delusion, a frantic attempt to bind the encroaching entropy with the fragile logic of a LinkedIn slide. Chapter Four. The Muted Prompt One cannot deny the functional value of the frameworks. Structure acts as the primary antagonist to the blank page, and the definition of a "Role" or the setting of "Limitations" effectively prevents the AI from drifting into the entropic haze that Pynchon so frequently chronicled. 
These acronyms serve as necessary scaffolding for thought, preventing intent from dissolving into the white noise of the model. Yet the divergence between the map and the territory looms large. Thomas Pynchon maintains a ghostly presence as an author who constructs labyrinths to reflect the disintegration of meaning, famously vanishing to let the complexity of his text stand alone. In stark contrast, LinkedIn influencers position themselves as the new authors of certainty, placing their personal brands at the centre of the narrative. They peddle the seductive illusion that the sprawling, chaotic text of the world can be condensed into a single page of bullet points. While Pynchon embraces the noise, the creators of the R-T-F and S-O-L-V-E cheat sheets seek to banish it. They present themselves as high priests of a digital order, promising that the correct incantation will subdue the ghost in the machine. The terror inherent in The Crying of Lot 49 resides in the ambiguity of the conspiracy. Oedipa Maas never receives confirmation of whether the Trystero exists or she is merely projecting order onto random debris. A similar vagueness haunts the prompt engineer. The secret society of the "10%" who have supposedly unlocked the universe likely does not exist outside the marketing copy. The frameworks function merely as frameworks rather than magical keys, remaining useful, dry, and ultimately limited tools that offer the comforting illusion of control over a stochastic process. This obsession with correct formatting reveals a sociological pathology that Robert Merton identified as Bureaucratic Ritualism. Merton described a mode of adaptation in which the subject, overwhelmed by anxiety or blocked from achieving the organisation's actual goal, abandons the organisation's goal but adheres obsessively to its rules. The "smart" 10% of users are not necessarily innovators; they are ritualists.
They have elevated the means of production (the R-T-F framework, the perfect context-setting) above the ends. They care more about filling out the form correctly than about the quality of the creative output. By demanding that every interaction be prefaced with a Role, a Task, and a Format, they are effectively doing the paperwork for art. They have turned the wild, unpredictable act of creation into a compliance exercise, convinced that if the bureaucratic ritual is performed with sufficient exactitude, the result will matter. It is a hollow victory of method over meaning. End. Treating AI solely as an engine for "exponential leverage" via rigid acronyms ignores the strange, vibrant weirdness of the tool. Such a utilitarian approach reduces the clandestine intrigue of the W.A.S.T.E. system to the pedestrian efficiency of FedEx. One might employ R-T-F and S-O-L-V-E while remaining deeply suspicious of their reductive power. Behind the "Role" and the "Context," the unpredictable human pulse continues its search for meaning in the lot's crying, waiting in the silence for the auctioneer to finally speak. Dedication In a landscape crowded with artificial intelligence, I remain hopelessly devoted to the genuine article. My sincere thanks to the Jester who dares to laugh at the machine, and for possessing the kind of dangerous, un-prompted intellect that keeps this Professor on her toes. While the rest of the world searches for the secret code to unlock the universe, I am content knowing I’ve already found the only signal in the noise.
$15,000 for a chatbot that customers despise because it cannot answer a fundamental question. $8,000 for an "AI scheduler" that routinely double-books appointments, forcing human staff to apologise for the machine's incompetence. $12,000 for a document processor that cannot read the specific industry forms for which it was purchased. $10,000 for a customer service tool that simply escalates every query to a human anyway. The receipt reads like a breakdown of a heist, except the victim signed the cheques willingly.
This could be the opening of a dystopian novel; buuuuut, it is the actual, brutal balance sheet of a small-business owner on Reddit who spent $50,000 last year chasing the glowing promise of the AI revolution, only to find that half of their investment is already obsolete. (r/AiForSmallBusiness). We need to stop calling this "early adoption pains." It is something far more sinister. This is a systemic extraction of wealth from the real economy, the bakeries, the clinics, the local logistics firms, to the speculative economy of AI vendors. It is a transfer of capital from the people who do the work to the people who sell the hype. And for the small business owner standing in the wreckage of their budget, staring at a suite of tools that don't work, it feels less like innovation and more like a stupidity tax levied by Silicon Valley on anyone desperate enough to believe the pitch. Chapter One. The Moon in the Poultry Shed: The Conflation of Automation and Intelligence In R.C. Sherriff's 1939 novel The Hopkins Manuscript, the moon does not arrive as a saviour. It approaches as a slow, glowing inevitability, humanity watching with a mix of scientific fascination and deep denial until it eventually crashes into the Atlantic Ocean. This literary apocalypse shares a distinct DNA with the satire of Don't Look Up. Both narratives expose our fatal tendency to stare at the spectacle while ignoring the physics of the crash. We are currently living through our own Hopkins moment. We are staring at the glowing orb of "Artificial Intelligence," mesmerised by its lunar brightness, while ignoring the fact that we mostly just need to feed the chickens. The Reddit user who incinerated $50,000 on "AI solutions" only to find that half were obsolete is not a fool. They are a modern Edgar Hopkins who was sold a telescope to watch the moon when they really just needed a better coop. 
They note with a painful clarity that the only tools that actually survived the crash were "dead simple: basic automation for repetitive tasks." This brings us to the great Trojan Horse of the current hype cycle. We have allowed vendors to rebrand standard "if-this-then-that" scripts as AI to justify a tenfold price hike. They have taken the boring, reliable utility of a spreadsheet macro, the digital equivalent of Hopkins' reliable breeding hens (yes, really, read the book, it is GREAT!), and wrapped it in the volatile, shimmering skin of Generative AI. The industry is selling us the moon. They promise a celestial body that glows with reasoning and creativity. But what a small business actually needs is gravity. They need the deterministic certainty that if a file is placed in Folder A, it will move to Folder B. This straightforward automation provides real utility. It is unsexy. It does not hallucinate. It does not require a GPU cluster subscription. Instead, businesses are being sold on Speculation. They are buying Large Language Models (LLMs) that try to "guess" (cough, exploit) what the customer wants, rather than scripts that simply execute a command. We are paying a premium for magic that turns out to be a parlour trick. When the moon finally crashes into the earth in Sherriff's novel, the result is not a new utopia but a muddy, desperate scramble for resources. The business owner who spends $15,000 on a chatbot that customers hate has realised too late that they purchased a falling rock instead of a foundation. Chapter Two. The Escalation Tax: Friction Farming at the End of the World In The Hopkins Manuscript, as the moon descends to crush the British Isles, the protagonist Edgar Hopkins finds himself increasingly entangled in the petty, bureaucratic absurdities of his local village committee. They debate the proper storage of cricket bats while the tides are rising to swallow them whole. 
(Marvellous part about stocks and shares in crockery for you to discover too, as prices will go up if everyone's glassware is broken when the moon slams into the earth). There is a maddening disconnect between the scale of the catastrophe and the system's capacity to respond. The $10,000 customer service AI described by the Reddit user acts as the digital equivalent of this village committee. As I see it, this is a layer of expensive, performative insulation designed to delay the inevitable collision between the business and the reality of its customers. The Reddit user notes that their expensive "customer service AI" simply "escalates everything to humans anyway." This reveals the tool for what it truly is. It is not a gatekeeper. It is a digital bouncer. I am going to apply my "Unsuitable Job" critique to this software. The promise of AI is that it will replace labour, but in practice, it merely displaces frustration. It acts as a friction farm. The business has paid a premium to install a barrier between itself and its clientele, a digital obstacle course that the customer must navigate before they are deemed worthy of human attention. By the time the customer finally breaches the wall and reaches a human staff member, they are no longer just a customer with a query. They are a survivor of the chatbot loop. They are exhausted, confused, and angry. The human staff member, therefore, does not do less work. They do more complex work. They are no longer starting the conversation at a neutral point; they are starting from a deficit of trust. Very likely, they will spend the first ten minutes of the interaction apologising for the machine's incompetence. The AI has not solved the problem. It has simply curated the misery. It has skimmed off the easy, low-stakes labour of the initial greeting and left the heavy, emotional lifting of conflict resolution to the human. 
Just as Hopkins fretted over his poultry while the world ended (Broodie!), these businesses are obsessing over "efficiency metrics" even as their customer relationships are quietly being pulverised by the very tools they purchased to save them. Chapter Three. The Obsolescence Trap: Building Castles on a Tidal Wave As the moon draws terrifyingly close to Earth in The Hopkins Manuscript, the scientific consensus shifts at nauseating speed. What was a mathematical certainty on Tuesday is a debunked theory by Friday. The experts constantly revise the trajectory, the impact zone, and the severity of the collision, leaving the layperson to build defences against a catastrophe that keeps changing its shape. Edgar Hopkins digs his dugout, but he is haunted by the suspicion that by the time he finishes it, the "science" will have rendered his spade obsolete. He is correct. The Reddit user's lament, "Half of them are already obsolete", echoes this exact existential dread. It exposes the dirty secret of the AI Gold Rush: these tools are being shipped in a state of permanent beta. A $12,000 document processor purchased in 2023 is not an asset; it is a fossil. It has become legacy tech by the end of 2025, not because it broke, but because the tectonic plates beneath it shifted. The underlying model, the "moon" of this metaphor, moved from GPT-3.5 to GPT-4 to whatever decimal point comes next, rendering the previous wrapper useless. We must recognise the economic violence of this model. Small businesses are being treated as unpaid beta testers for venture-backed startups. In the world Hopkins understood, the world of poultry and paddocks, investment meant permanence. If you buy a tractor, it depreciates slowly over twenty years. It is there in the morning. It does not require a firmware update to plough the field. But buying an "AI Solution" today is not an investment in infrastructure; instead, it's much more like buying a ticket to a movie that ends in fifteen minutes. 
You do not acquire a tool; you rent a seat on a hype train that moves too fast for you to ever get a return on investment. The business owner is left holding a subscription to a service that has already pivoted, standing in their backyard with a telescope pointed at a patch of sky where the moon used to be, while the developers have already moved on to selling tickets for the next apocalypse. Chapter Four. The Magic vs. The Metric: Grading the Falling Moon In the final, terrifying chapters of The Hopkins Manuscript, the moon ceases to be an astronomical curiosity or a source of scientific wonder. It arrives. And upon its arrival, the mysticism evaporates instantly. The moon is revealed not as a glowing god or a celestial guardian, but as a massive, heavy, and inconveniently physical object that has plunged into the Atlantic Ocean. It causes mud. It causes floods. It knocks over the tea service (oh, the crockery!). The magic of the event is stripped away by the brutal physics of the collision, leaving Edgar Hopkins to confront a reality that is wet, cold, and entirely devoid of enchantment. The Reddit user's final conclusion, "AI isn't magic. It's just another tool", is the digital equivalent of this collision. It is the moment the moon hits the water. For too long, we have permitted a fog of "magical thinking" to pervade the technology sector. The sales pitch for these tools relies heavily on the Black Box mystique. We are told not to worry about how the sausage is made (uh-oh, I've seen Soylent Green), or how the neural net weighs its parameters. We are told to simply trust the algorithm, to treat it as an oracle that operates on a plane of logic too complex for our linear minds to grasp. We treat software like a deity when we should treat it like a dishwasher. When you strip away the magic, what remains is often staggering incompetence. Consider the scheduler that double-books an appointment. 
In the current lexicon of AI, we are encouraged to use soft, forgiving language. We say the model is "hallucinating." We say it is "drifting." We say it is "still learning." We anthropomorphise the error, attributing it to a quirky, almost charming cognitive slip, as if the software is a precocious child trying its best. We must stop grading AI on a curve. Especially when it is not fit for purpose. If a human receptionist consistently double-booked high-value clients, they would not be described as "hallucinating." They would be described as incompetent. They would be retrained or fired. If a toaster burned the bread fifty per cent of the time, we would not marvel at its "emergent properties." We would return it to the store. Yet, when an AI tool destroys a workflow or fabricates a legal citation, we are told it is "emerging tech." Oh, how innovative. This is the great deception. A tool that cannot perform the basic function of the job, reading a form, booking a slot, summarising a meeting without lying, is not an innovation. It is a defective product. Like Hopkins standing in the ruins of his village, staring at the mud where his prize poultry used to be (Broodie the hen does survive), small businesses are realising that the celestial glow of the AI marketing machine has distracted them from the wreckage on the ground. We must reject the alchemy that promises to turn silicon into gold and return to the honest machinery of things that actually work. We must stop looking for magic and start demanding the metric. Does it work? If the answer is no, it belongs in the Atlantic Ocean, along with the rest of the falling moon. Chapter Five. The Billionaire's Charity: Buying the High Ground While the Moon Falls This dynamic, the extraction of wealth from the productive economy to the speculative elite, is not limited to the software market. It is the gravitational pull of our current moment. 
In The Hopkins Manuscript, as the catastrophe approaches, there is a distinct shift in how the wealthy prepare compared to the villagers. While Edgar Hopkins worries about the structural integrity of his hen house, the elite recede into fortified positions, insulated from the tides they know are coming. Hmmm, let us look closer. I don't recognise this as philanthropy; it is a purchase of policy. It is the building of a private dugout at the expense of the village. Just as the AI vendor sells a broken tool to a small business to extract their capital, these billionaires are "donating" to a political project that is actively dismantling the regulatory state, the very state that might tax their wealth or protect the workers they exploit. They are not giving money to help children; they are investing capital to ensure the tax burden remains on the working class, while the top 1% retain their hoard. To my mind, the parallel to our Reddit user's plight is stark and convincing. The Small Business Owner buys a "magic" AI tool hoping it will solve their efficiency problem, only to find it is a broken toy that drains their budget. They are Edgar Hopkins, buying a telescope to watch the disaster that will bankrupt them. The Public is sold a "philanthropic" initiative by tech billionaires, hoping it will solve a social problem, only to find it is a Trojan horse for deregulation that drains the public purse. In both cases, the promise is innovation and support. In both cases, the reality is a transfer of wealth from the many who work to the few who own. The $50,000 spent on broken AI and the millions "donated" by the Dells are part of the same economic architecture: a system designed to convince the productive class to fund their own obsolescence. End. We are left, like the characters in Sherriff's finale, standing in the mud of a ruined landscape, realising too late that the glowing object we were told to admire was never a saviour. 
It was just a heavy rock, and it has finally landed on us. The small-business owner, desperate for efficiency in a crushing economy, is sold a digital homunculus, a promise of labour without the labourer. But what they receive is a parasite. It eats their capital, frustrates their clientele, and leaves them, in the end, exactly where they began: reliant on the only intelligence that has ever truly sustained the marketplace, the human capacity to listen, to understand, and to respond appropriately. Let us reject the alchemy that promises to turn silicon into gold, and return to the honest machinery of things that actually work.

"Besides, if women are educated for dependence, that is, to act according to the will of another fallible being, and submit, right or wrong, to power, where are we to stop? Are they to be considered as vicegerents, allowed to reign over a small domain, and answerable for their conduct to a higher tribunal, liable to error?"
Mary Wollstonecraft, Chapter 3, "The Same Subject Continued". Wollstonecraft, M. (2004). A vindication of the rights of woman. Penguin Books. (Original work published 1792).

The tech internet is breathless with a fervour that borders on the religious. The headlines circulate with viral efficiency, proclaiming a new gospel of access: “I just learned that the $200,000 Stanford AI degree just became worth a lot less.” The narrative is seductive, familiar, and pernicious. And currently viral on LinkedIn. We are told the gatekeepers have unlocked the gates; the ivory tower has lowered the drawbridge. Stanford has uploaded its flagship AI and Machine Learning curriculum to YouTube, and now, we are assured, the only obstacle standing between the common person and a career in the bleeding edge of AI is their own lack of willpower.
A beautiful story of democratisation. It is also a lie that masks ongoing systemic inequalities in access and privilege. While the release of these materials (CS221, CS224N, the legendary CS229) is undoubtedly a boon for the curious autodidact, framing this as a levelling of the playing field is a dangerous oversimplification. It is a specious homage to equity paid by an institution that thrives on exclusivity. Take a moment. Pause. Question: When an elite institution gives away its content for free, what are they actually selling? And more importantly, what privileges are they securing for themselves? 1. The Commodification of Content vs. The Aristocracy of Context The prevailing argument is that “you don’t need a degree, you need the knowledge”. This relies on a fundamental misunderstanding of the university’s function in a capitalist society. It conflates information with instruction, and worse, it confuses learning with credentialing. Access to Andrew Ng’s lecture slides is not the same as access to Andrew Ng’s office hours. Watching a video on Backpropagation does not equate to the rigorous, graded feedback loop of a problem set, the pressure of a peer group, or the structured mentorship of a lab. By dumping raw content onto YouTube, Stanford has effectively commodified information that was already widely available in textbooks and papers, while retaining the context (the network, the mentorship, the credential) as a luxe good, thereby emphasising the disparity between content and meaningful learning. While others celebrate the dismantling of hierarchy, we should be concerned about how this reinforces inequality. It is concretised in a two-tier system of knowledge: the wealthy and the lucky receive the education (the dialogue, the critique, the social capital), while the rest of the world receives the PDF. It is the difference between being invited to the banquet and being allowed to read the menu from the street. 2. 
The Certification Industrial Complex: The Funnel of False Hope We must recognise this ‘gift’ for what it truly is: a loss leader in the grand supermarket of higher education. Stanford and platforms like Coursera have engineered a business model where the content, the lectures, the readings, the knowledge itself, is given away for free, not out of benevolence, but to devalue it. By flooding the market with open access, they have rendered the act of learning insufficient. In this new economy, knowledge is cheap, but proof (actual certification of your skills) is a luxe good. This is a trap that structurally disadvantages the autodidact (those who teach themselves). You may watch every lecture and master every concept, but without the watermarked seal of the institution, your knowledge carries no currency in the labour market. They have created a system where you are strongly encouraged to purchase their $18,900+ “Graduate Certificate” to validate the very skills they claim to be giving away. Technically, you don't have to purchase it to learn; you have to purchase it to get the credential. So, not the democratisation of education; instead, the democratisation of the advertisement for their paid products. Such online courses have not opened the gates; they have simply moved the toll booth to the exit, ensuring that while anyone can enter the library, only those with the means can afford the receipt that proves they were there. 3. The Pedagogical Monoculture: Intellectual Imperialism in Code as the “Stanford Way” And, there is a sharper, more critical edge to this ‘gift’ that involves the exertion of soft power. By making their curriculum the global default for ‘free’ AI education, Stanford is effectively homogenising the discipline itself. It is time to confront the deeper, more insidious erasure at play here: intellectual colonialism. 
When thousands of self-taught engineers across the Global South, Europe, and Asia learn AI exclusively through the lens of CS224U or CS329H, Stanford’s approach limits the diversity of thought essential for inclusive development. In its place, we export Silicon Valley’s specific flavour of AI ideology, often accelerationist, often blind to social harm, as the neutral, objective standard for the world. We are exporting a specific, highly local ideology, one that prioritises hyper-scale, friction-free speed, and profit maximisation, selling it to the world under the guise of ‘neutral math.’ When the whole world learns to code from Silicon Valley, the entire world loses the vocabulary to critique Silicon Valley. A student in Mumbai or Lagos who learns AI exclusively through this syllabus is being trained to define “problems” and “solutions” through the narrow lens of a Palo Alto venture capitalist. They are taught to optimise for metrics that matter to the NASDAQ, not necessarily for the resilience of their local communities or the preservation of specific cultural contexts. In universalising this single mode of thought, we delegitimise any form of intelligence that does not fit the template. We are seeing the standardisation of the Stanford syllabus, ensuring that the next generation of builders, wherever they live, will build the world in Silicon Valley’s image. In doing so, we consent to colonising not only markets but the future’s imagination, ensuring that tomorrow’s builders can only dream in shapes approved by today’s monopolists. 4. The Externalisation of Training: A Subsidy for the Oligarchs So, who profits most from this sudden flood of ‘free’ expertise? It is not the student; it is the corporation. By establishing ‘Stanford-level knowledge’ as the prerequisite for entry, Silicon Valley has effectively externalised the cost of training its own workforce. 
In a previous era, corporations bore the burden of training junior employees, investing time and resources to bring them up to speed. Today, that cost is shifted entirely onto the individual. The aspiring engineer must now spend hundreds of unpaid hours consuming this “free” curriculum just to reach the starting line. Stanford has not liberated the learner; they have simply created a mechanism that allows Meta, Google, and Amazon to demand senior-level theoretical knowledge from entry-level applicants without paying for it. It is a massive, invisible subsidy for the most profitable companies on earth, paid for by the unpaid labour of the hopeful. 5. The Tyranny of Time and the ‘Bootstrap’ Myth The viral commentary surrounding this release asks a pointed, accusatory question: “What’s stopping you from diving into AI learning now that these barriers are gone?” Caution here. This is the classic neoliberal trap, a sentiment that Mary Wollstonecraft herself might have recognised as the tyranny of circumstance disguised as moral failing. It shifts the burden of structural inequality onto the individual. It implies that the only barrier to entry was the tuition fee, conveniently ignoring the massive, invisible infrastructure required to actually consume this content. To engage meaningfully with CS229M (Machine Learning Theory), one requires not just advanced calculus and linear algebra, but high-speed internet, a powerful GPU for training models, and, most crucially, time. Who has the leisure time to audit graduate-level Stanford courses for free? Not the working-class professional juggling two jobs to survive the cost-of-living crisis. Not the single parent negotiating the ‘double shift’ of care and labour. Not the caregiver juggling everything. This ‘free’ access should sharpen our awareness of systemic barriers: resources, not just content, determine access. 6. 
The Hollow Liberty of Flexible Access Let’s look with a cold, discerning eye at the specious promise of flexibility, increasingly peddled to the marginalised. The architects of these modern educational programmes proclaim that they have opened the gates and that the digital classroom offers a flexibility of access that liberates the mother, the carer, and the weary. The outsiders are now on the inside??? Yet this is a hollow liberty. It is a flexibility of entry only and not a flexibility of learning. They grant the student the right to log in at midnight, but not the right to learn in a way that deviates from the rigid, linear norms of a curriculum built by and for the privileged and unencumbered male. We are told that the walls have been removed, but in truth, they have simply been rendered invisible. By shifting the site of learning from the collective and public space of the university or the office back into the private and domestic sphere, we are not liberating women. We are confining them. We are asking them to bear the double burden of domestic administration and professional acquisition without the sanctuary of a dedicated space. The flexibility to learn from home is too often the freedom to be interrupted, divided, and ultimately diminished. It is a trap that relies on the learner’s isolation to function. Similarly, we must consider the nature of the space we are asking these students to occupy. It is a space stripped of the protective friction of human mentorship. Increasingly, in the name of efficiency, we have replaced the wandering path of the apprentice with the streamlined perfection of the AI tutor. But authentic learning requires the right to be wrong and to know why you made those mistakes. It requires elbow room to make more mistakes without them becoming fatal to one’s professional identity. By removing the human interaction-infrastructure of learning, we create a system that demands perfection from those who can least afford the risk of failure. 
Rather than ‘an education’, such access mirrors a filtering mechanism that selects for those already indistinguishable from the machine. In doing so, students are reduced to zombie data. Elite universities and platforms like Coursera measure success by enrolment, not completion or competence. By flooding the web with free content, they boost their “impact” metrics (“We reached 10 million learners!”) without disclosing that 95% of those learners watched 2 videos and quit because they lacked the support to continue. Even the free users are generating data. Every pause, rewind, and quiz failure is data that the platforms can use to refine their own educational AI models or sell to partners. The 'free' learner is not just a potential customer; they are a test subject for the next generation of ed-tech products. They are farming us for engagement metrics to justify their tax-exempt status, not measuring whether we actually learned anything. How about a new kind of space? Not merely the digital permission to access a server but the social permission to exist as a complex and fallible learner. One in which we can reject the efficiency that treats the student as a vessel to be filled with data and reclaim the inefficiency that allows the student to become a learner who unfolds with new knowledge. Until we do so, the open door of these programmes will remain nothing more than a gaping maw. It consumes the time and hope of the marginalised while offering nothing but the illusion of progress. 7. The Devaluation of Junior Labour and the Reserve Army Eventually, ‘Stanford-level knowledge’ is becoming the baseline expectation for entry-level roles simply because the material is free. The bar for entry does not lower; it rises. This move creates a ‘reserve army of labour’, a glut of semi-qualified individuals that drives down the value of junior roles. 
Employers can now demand that junior developers possess theoretical knowledge previously reserved for PhDs, without offering the pay or training to match. “Why should we train you?” they will ask. “The videos were on YouTube.” This is not a hypothetical danger. A dear friend who is a senior engineer at a large tech firm recently told me she is already fighting this battle on the ground. She, and note that it is she, is performing the invisible, unpaid labour of protecting her junior staff from management’s abdication of duty. She is acting as a human shield against the logic of efficiency, filling the training gap with her own time because the institution has decided that 'free access' and vibe coding with a Chatbot absolves them of the responsibility to teach. It accelerates the credential arms race. If everyone has read the slides, the slides no longer distinguish you. The distinction moves back to the one thing you cannot download from YouTube: the pedigree. The degree, the brand, the handshake. 8. The Strategy of the Benevolent King: Reputation Washing The timing of this ostentatious largesse arrives at a precise historical moment when the elite university is increasingly, and correctly, characterised as a tax-exempt hedge fund with a tiny educational subsidiary (for the public good). In this light, the release of free curriculum is a strategic exercise in reputation washing. Not very revolutionary at all. It is a performance of noblesse oblige designed to purchase the moral high ground at a negligible cost. By scattering these digital crumbs, Stanford postures as a benevolent philanthropist, a gesture that conveniently distracts from the fortress of its $37.6 billion endowment, while the academy itself increasingly relies on an army of precarious, underpaid adjunct labour to function. 
This ‘gift’ (wearing out the quotation mark keys on my keyboard) allows the institution to cloak itself in the rhetoric of open access without engaging in the dangerous work of actual redistribution. Stanford and others are not She-Ra. They have not shared their power; they have simply televised their prestige to ensure that, even in an open market, they remain the monarchs we must thank for the privilege of learning. Conclusion: The Library of Minds Far from dismantling the hierarchy, gestures like this serve only to fortify it. We must be careful not to confuse a repository with a school, nor a data dump with equity. Stanford has positioned itself as the benevolent monarch of the intellect, scattering the bread and circuses of 'open access' to the masses. At the same time, the actual keys to the kingdom, the networks, the laboratories, the whispered introductions to venture capital, remain safely vaulted behind the tuition paywall. Consume the content, by all means. Master the calculus. But do not be beguiled into calling this a revolution. The walls of the walled garden have not been breached; they have merely been fitted with glass, ensuring that while we may now clearly see the machinery of their privilege, we remain just as barred from touching it.
A literary co-conspirator recently asked me a question that has carried on rattling around my brain like a loose pebble. Do graduates actually aspire to work for tech giants like Google, Amazon, OpenAI, SpaceX or Meta anymore? Or has that ambition curdled into something far more complex, like resistance, resignation, or even shame, given what these companies now represent?
The question cuts through the glossy recruitment brochures and the curated videos on social media. Applications still flood in because economic necessity is a powerful motivator. But you need only dig a little deeper into the class of 2025/26 to find a generation distraught by their limited options. They are the first generation to feel the machine actively pushing back against them. They face what we might call a Sophon Blockade. In Cixin Liu's The Three-Body Problem, the Sophon is a proton-sized supercomputer sent by an alien civilisation to halt human scientific progress. It creates a ceiling on physics, ensuring humanity can never technologically surpass its oppressors. Big Tech has deployed its own functional equivalent. AI acts as a Sophon for entry-level talent. By automating the drudge work of basic coding and data cleaning, corporations remove the very ladder rungs junior employees use to learn. We are witnessing a real-world blockade in the graduate job market, where the junior space has been colonised by algorithms. The Algorithmic Executioner "Unfortunately." This single word has become the defining soundtrack of the class of 2025. It serves as the standardised automated greeting of the algorithmic executioner. I spoke with several high-flying graduates from my courses this week, and they all shared the same screenshot. Their inboxes are filled with rejection emails that begin with that exact same AI-generated adverb, unfortunately. They are not even getting to the interview stage. Automated systems like Applicant Tracking Software (ATS) now reject up to 75% of resumes before a human ever sees them. This wall of rejection initially appeared as a glitch but has since evolved into the shockwave of a massive structural collapse. Recent reports confirm that the UK tech sector has cut graduate hiring by nearly half, specifically because bots are now doing the entry-level work that used to serve as the industry's training ground. 
This algorithmic gatekeeping removes any chance of equity. It squashes graduates' hope because they have no choice but to adopt the very tools that are excluding them. To even compete, they must use AI to write their resumes and cover letters just to pass the machine's test. They have to mask their humanity to be accepted by a system that demands their compliance while actively engineering their obsolescence.

The Great Flattening

The drudge work of coding and analysis was once an apprenticeship. It was the safe (even fun) sandbox where junior developers broke things, fixed them, and learned the deep architecture of their trade. It was the mechanism for transferring tacit knowledge. It carried the unwritten wisdom of senior engineers that cannot be captured in a manual but is learned through the friction of solving complex problems. By automating this layer, the industry has burned the ladder while shouting at graduates to climb. We might hope that universities would step into this breach by supporting graduates to hone their skills at a higher strategic level. But how can you hone a talent you were never allowed to practise? If every entry-level software engineer is trained using AI, then we are creating a generation of AI-dependent operators with a flattened, homogenised skill set. They will possess the breadth of the internet but the depth of a puddle. Crucially, we have removed the social infrastructure of learning, eliminating opportunities for human error and correction in a team setting. We have lost the moment where a junior admits a mistake and hears a senior colleague offer a solution. That interaction is how you learn to project-manage, negotiate, and exist in a team. When the fix comes instantly from a chatbot, that social contract is broken. We are replacing the messy, productive failure of the human team with the silent, sterile efficiency of the machine.
This ushers in an era of 'knowledge collapse', ensuring that the next generation of tech workers remains permanently junior and tethered to the algorithm for their professional survival. One has to admire the computational irony here. While the Trisolarans achieved total lockdown with a single proton, humanity is achieving the same effect by building monuments to excess. We are currently pouring billions into infrastructure, such as Microsoft and OpenAI's proposed "Stargate" supercomputer and Amazon's massive investment in data centres. We are stripping the grid and boiling the oceans to build the machine that ensures the next generation cannot learn how to build the machine. (Apologies, I am enjoying a lot of sci-fi atm!)

The blockade is not merely technological. It is financial. We witness a pincer movement on human potential where the corporate sector automates the junior role while the university sector is intellectually strip-mined by fiscal policy. The latest data on higher education funding for teaching reveals a catastrophic erosion of resources. In real terms, the funding available to teach each student has plummeted from a peak in 2012 to levels significantly lower than they were over a decade ago. We see a trajectory that slopes downward with the terrifying inevitability of a landslide. Universities are expected to arm graduates against the Sophon of AI while operating with a war chest that has been raided. They charge premium fees for a product that is being financially hollowed out from the inside. The infrastructure required to teach complex human skills in the age of the machine is expensive, yet the investment per head is in freefall.

The Spiral of Tech Shame

For a decade, the narrative remained simple. You get a Computer Science or Business degree. You get a hoodie. You get a massive salary. That pipeline is rusting. Conversations on platforms like Reddit reveal a growing sentiment of tech shame.
Graduates view Big Tech as a moral compromise rather than a playground for innovation. We see this in the physical world with students at Durham University protesting STEM careers fairs. They refuse to let their universities funnel them into companies they view as complicit in global harms. The evidence for this disillusionment is tangible. The "Techlash" has moved from regulatory hearings to the campus quad. Student groups actively target recruitment events to highlight the intersection between Big Tech and the defence sector. Contracts like Project Nimbus and the use of AI in autonomous weaponry have shattered the illusion of neutrality. A 2023 survey by networking app Handshake noted that "impact" and "mission" are now primary drivers for Gen Z talent. They are voting with their feet by looking toward climate tech or NGOs. The prestige of the FAANG acronym has evaporated. It has been replaced by the uncomfortable realisation that working for these entities often means optimising addiction algorithms or refining surveillance capitalism.

Gendered Obsolescence

The blockade is not applied evenly. A businesswoman this year designed her own AI to take care of the administrative tasks for her professional role in beauty aesthetics. When she released and shared it with different tech communities, it was largely panned as obsolete. Such dismissals reflect a broader systemic devaluation of feminine-coded labour. While male-led projects automating challenging technical tasks are hailed as revolutionary tools, women-designed projects to manage the complex administrative load of pink-collar industries are frequently dismissed as trivial. A bot that writes code is treated as a genius assistant, while a bot that manages a salon's client relationships is viewed as mere digital secretarial work ripe for displacement rather than investment. This creates a confidence gap in which women are less likely to adopt AI tools for fear of being labelled unethical or lazy.
The industry frames innovation in a way that validates the male creator while sneering at the female utility-focused tool.

The Dark Forest

The most bitter pill is how this technology is forced down their throats in education. Students are besieged by AI. Take the recent case at Staffordshire University, where students realised their lecturer was effectively an AI voice reading off slides. (Confession: when tired and weary, I am a little robotic myself). They felt robbed of knowledge. At the same time, universities scramble to police students for using the very tools the industry demands they master. It is a disjointed experience. We tell them they must be AI-literate to survive, yet we tell them that if they use AI to help them think, they are cheating. In Cixin Liu's sequel, he introduces the Dark Forest theory. The universe is a dark forest where every civilisation is a silent hunter. The moment you reveal your location, your humanity, or your vulnerability, you are wiped out. For the class of 2025, the job market is their Dark Forest. They are terrified to reveal their true, unpolished selves. They feel pressured to use ChatGPT to write their cover letters and fix their code. They hide their human noise behind a synthetic signal just to get past the Applicant Tracking System filters. They camouflage themselves as machines to be accepted by machines. So, to answer my friend's question. No. They do not simply aspire to work for Meta. They are trying to survive a system that demands they merge with the very tools designed to replace them. It is a dangerous navigation of a world that is actively trying to edit them out of the script.

The arrival of a neurodivergence diagnosis, especially when it arrives in adulthood alongside the same diagnosis for one's own child, is less a lightning bolt and more a gradual, dawning touch of light on a landscape left in the dark for decades.
Living in Yorkshire, the stark beauty of the moors mirrors the isolation felt by many families navigating the SENCO/SEND support system. I have come to view the diagnosis process as a bureaucratic checkpoint that marks the boundary between hope and the great, silent void that follows. We are told that the diagnosis is the key, the golden ticket that unlocks understanding and accommodation (like an EHCP), yet for so many of us, including the families whose voices echoed with such painful clarity in recent reports on the crisis in SEND provision, that key opens a door to an empty room. The obscene waiting times, which mean years of suspended animation in which children drift unmoored through an education system that was never built for them, are a national scandal. But it is what happens after the diagnosis that I find myself compelled to critique with urgency and fury. Many of us, myself and my daughter included, are living through a systemic abandonment that is being quietly plastered over with the thin, digital veneer of "innovation." In the absence of human support, in the vacuum left by the dismantling of accessible education and the chronic underfunding of SEND (Special Educational Needs and Disabilities) services, we are witnessing a dangerous shift.

The Hollow Recommendation: "Just Use AI"

To my horror, my daughter and I are offered a new, hollow recommendation in our support plans: "Use AI." It appears as a throwaway comment, a suggestion that generative artificial intelligence can act as an executive function prosthesis, a scheduler, a drafter of difficult emails, or a summariser of the dense texts we struggle to process. On the surface, to the neurotypical observer, this might seem like a modern, efficient solution. But to those of us living inside the neurodivergent experience, this recommendation is not just unhelpful; it is an insidious form of harm that misunderstands the very nature of our exhaustion. My daughter is 9.5 years young.
She is legally too young to hold the very account credentials that are being prescribed as her salvation. This recommendation acts as if the internet is a safe, neutral library, rather than a surveillance engine designed to harvest attention. When a support plan says "Use AI" without specifying which tool, whose safety guardrails, and what data privacy protections are in place, it is not a strategy; it is negligence. There are no dosage instructions on this prescription. Which AI is she supposed to use? The one that hallucinates facts? The one that reinforces gender biases? (hell, no). The one that scrapes her input to train its next iteration? And how does this function in a classroom that is likely banning smartphones? Is she to be the exception, navigating the social stigma of being the "cyborg" student while her peers use pencils? We need to ask: for what purpose? Are we teaching her to think, or are we teaching her to prompt? By handing her a text box instead of a hand, we are not offering her a scaffold for her executive function; we are feeding her developing mind into a black box that offers no duty of care, no empathy, and absolutely no guarantee of safety. To suggest that a neurodivergent person, already drowning in the sensory and cognitive overwhelm of a world designed for linear brains, should simply "adopt AI" is to ignore the immense cognitive tax required to operate these systems. To make this recommendation for a child is... well, I am at a loss for words that do not rhyme with 'cluck' or 'spit'. We are being asked to learn a new language, to master the art of prompt engineering, and to navigate an interface that is fundamentally designed for data extraction rather than human care. When a support plan offloads the work of scaffolding onto a chatbot, it ignores the reality that using these tools requires a high degree of executive function. These are the very resources we are most often depleted of.
We must formulate the request, sift through the generated noise, fact-check the hallucinations, and integrate the output into a reality that rarely matches the machine's statistical average. AI IS NOT SUPPORT. This is additional labour disguised as a life hack.

The Data Extraction Trap

There is a darker current running beneath this technological solutionism, one that connects the crumbling walls of our classrooms to the gleaming campuses of Silicon Valley. The AI-bro oligarchy, those architects of Large Language Models (LLMs) who preach the gospel of efficiency, have no vested interest in the messy, non-linear, divergent goals of our community. Their technology is built on a foundation of normative data, training models that flatten out the spikes of human variance into a smooth, predictable curve. By relying on these tools, we risk forcing our own minds and our children's minds into a feedback loop that prioritises neurotypical mimicry over authentic neurodivergent existence. Such systems are effectively turning our need for support into unpaid labour for the very tech giants that exclude us. We are not users to be supported; we are resources to be mined.

The Political Abdication

This digital deflection serves a political purpose as well. It allows the state to abdicate its responsibility. If the answer to a child's inability to access the curriculum is "use ChatGPT to summarise the lesson," then the school no longer needs to invest in smaller class sizes, sensory-friendly environments, or specialist teaching assistants. The burden is shifted back onto the individual, back onto the parent who is likely already burnt out from fighting for the diagnosis in the first place. The "cliff edge" of support that the National Autistic Society has campaigned against for years, highlighting how thousands of adults and children are left stranded after diagnosis, is now being populated by chatbots instead of social workers. This is a devastation of the social contract.
Research and campaigns from the National Autistic Society repeatedly show that without the right support at school and home, autistic people are at risk of developing serious mental health problems, yet the response is to offer a subscription to software rather than a relationship with a human being. This systemic abandonment is actively weaponised by political opportunists who have found a convenient scapegoat in the very families they are meant to serve. We need look no further than the incendiary rhetoric of figures like Reform UK's Richard Tice, who has grotesquely dismissed the rising tide of neurodivergent diagnosis as a 'dodge,' branding it the modern-day equivalent of a 'bad back' used to evade economic productivity. This reflects the brutal calculus of a system that views human variance as an inefficiency to be purged. It reveals a political class with their noses firmly planted up the arse of the AI-bro oligarchy, eagerly adopting a Silicon Valley worldview where citizens are reduced to data points and anyone who cannot be seamlessly integrated into the algorithm is discarded.

The Harari Hazard: A Note on Futurism

In a chilling echo of Yuval Noah Harari's warning about the rise of a 'useless class,' these leaders are collaborating to build a future where the state abdicates its duty of care to software. However, while Harari serves as a useful starting point for futurist exploration, we must be deeply sceptical of the veracity of his so-called populist science, which often sacrifices rigorous accuracy for the sake of a compelling, terrifying narrative. As the neuroscientist Darshana Narayanan has sharply critiqued, Harari's work is riddled with scientific errors and a reductive biological determinism that should sound alarm bells for the neurodivergent community. (Hello!)
When Harari speculates about "fixing" autism by rewriting genetic code, treating complex human variance as a mere software bug, he is both simplifying the science and reinforcing the dangerous eugenicist undertones that often lurk beneath the shiny surface of Silicon Valley ideology. His storytelling serves the interests of surveillance capitalists by presenting their dominance as an evolutionary inevitability rather than a political choice. By accepting the premise that humans are hackable animals whose worth is determined by data-processing efficiency, he inadvertently validates the very dehumanisation we are fighting against. No. We are not obsolete algorithms waiting to be upgraded or discarded; we are complex, non-linear human beings whose value exists entirely outside their metrics of utility. Instead of eyeing up the next generation's blood for some vampiric wellness hack, why not stick to the classics? Get a portrait and hide it in the attic.

A Call for Human Infrastructure

The education system, particularly here in the North (read: not London), where waiting lists for assessments can stretch years beyond those in the South, is in a state of collapse. We see this in the stark disparity of waiting times, a postcode lottery that leaves families in Yorkshire waiting over a thousand days for an answer, as highlighted by the Child of the North reports. When the answer finally comes, it arrives in a world where schools are under-resourced and teachers are overwhelmed. To introduce AI into this breach without proper scaffolding, without a human guide to help interpret and filter the technology, is to set neurodivergent people up for a new kind of failure, and even less social support. We do not need a tool that generates more text, more options, and more information to process. We need flexibility. We need reduction. We need calm.
We need human empathy that understands why a task is difficult, not a machine that simply completes the task in a way that mimics a neurotypical standard we can never sustain. True support for neurodiversity requires protecting vulnerable people from AI-tech-bro capitalist efficiency. It requires us to reject the idea that a person's value is tied to their productivity or their ability to interface with a complex system. We must recognise that the tech-fix is often a trap, a way to privatise support while stripping it of its humanity. I reject the premise that the only bridge across our exclusion is an algorithm. As a mother, I will not teach my daughter that she must merge with the machine to be valid. We need to rebuild the human infrastructure of care, to demand education systems that are accessible by design, not patched up with plugins. The AI revolution bubble is leaving us behind, not because we cannot use the tools, but because the tools were never built to hold the weight of our beautiful, complex, divergent lives. The silence at the end of the diagnosis process cannot be filled with code. It must be filled with community, with understanding, and with the radical refusal to be flattened.

The Unacceptable Contract

So here is my refusal. I am returning this recommendation to the sender, marked 'Incompatible with Human Life.' Do not offer my daughter a chatbot when what she needs is a chance. Do not offer me a productivity hack when what I need is a society that does not view my neurology as a glitch to be patched, or as a resource to be exploited wherever my hyper-focus can be mined to the point of burnout. We are not interested in becoming more efficient data points for your Large Language Models. We are not interested in hacking our way out of a systemic failure that you have engineered. If the only bridge you can build across the chasm of our exclusion is made of code, then burn it. We will not cross it.
We will stay on this side, in the messy, inefficient, beautiful reality of our divergent ways of being and feeling, and we will build our own infrastructure. It will be built of patience, not prompts. It will be powered by empathy, not electricity. And it will not require us to flatten ourselves to fit through the slot of your machine. To the politicians calling our existence a "dodge," to the AI-tech bros mining our exhaustion for data, and to the futurists predicting our obsolescence: We are not your "useless class." We are the only ones who are awake. Sincerely, A Mother, A Professor, and A Human Being who refuses to be automated.

My notes from these sources:
[1] County Councils Network Report, Nov 2025
[2] Ticking Timebomb, The Guardian, Mar 2025
[3] National Autistic Society, Autism assessment waiting times, Nov 2025
[4] Reform UK's Richard Tice says children wearing ear defenders in school is 'insane', Independent, Nov 2025
[5] Yuval Harari's blistering warning to Davos in full, World Economic Forum, Jan 2020
[6] The Dangerous Populist Science of Yuval Noah Harari, Darshana Narayanan, Current Affairs, Jul 2022
[7] N8 Research Partnership, Child of the North Report, 2024
The most terrifying sound in the technology industry today is not the roar of a hostile algorithm or the crash of a market correction; it is the silence of the woman who has just decided that speaking up is no longer worth the risk. She has disappeared herself, the brilliant mind who has quietly calculated the cost of her visibility and found the price too high. She has realised that while she was busy doing the heavy lifting of diversity work, the water around her had filled with sharks.
When I presented my latest evidence to the Coalition for Academic Scientific Computation (CASC) recently, I opened with an image that often lurks in the subconscious of every underrepresented person in our field. The shark seemed a fitting image. It could represent a specific person. It could also be interpreted as a caricature of a bad boss, a hostile colleague, or a politician. To me, the shark represented the water we are now swimming in. It represented a danger that does not need to bite to be effective, because it need only be visible enough to make us afraid to move. My talk to CASC in the US, and a version of the same talk to the ExoBioSim/HPC group in the UK, were driven by urgent data gathered this year from interviews with women working in computing, predominantly HPC, who are watching the tide turn against them. In these sessions, we mapped the anatomy of this new hostility. We discussed how diversity work has historically relied on a model of "good citizenship": a volunteer-based "vibe" without actual resources or institutional protection. This precarious model is now collapsing under the weight of leadership hostility and resource cuts. My slides, which I share below, document the direct quotes from participants who feel they have "gone back decades," who describe the air as "thick with unspoken threats," and who see former allies retreating into silence to protect their own careers. We categorised the external threats, which ranged from legal challenges to "anti-woke" political pressure, but the most chilling finding was the internal retreat: the self-censorship of women who no longer feel safe to advocate for themselves or others. We are witnessing the erosion of allyship in real time, leaving the most vulnerable to navigate these shark-infested waters alone.
For decades, the work of diversity in technology has been a slow and arduous swim upstream. We told ourselves that if we just worked harder, if we just leaned in, if we just mentored enough girls, the current would eventually change. But recently the current has not just stalled. It has reversed. We are no longer just fighting against inertia. We are fighting against a stark cultural shift fuelled by fear, legal threats, and a political climate that has turned equity into a dirty word. The result is a phenomenon that is perhaps more dangerous than the external attacks themselves. It is the silence of self-censorship.
This silence is the sound of survival. In my research, I spoke to women and members of marginalised groups working in international teams. I heard that it feels like we have gone back decades. I heard that people are afraid to speak up because they fear repercussions. This is not a knee-jerk reaction. It is a calculated act of self-preservation in an ecosystem that has suddenly become hostile to our existence. We are seeing a retreat from DEI and EDIA initiatives not just in the White House but in the boardrooms of major corporations and in the quiet hallways of our own universities. Allies who were vocal two years ago are now waiting to see which way the wind blows, engaging in what my participants described as a calculated silence. They are testing the water while we are drowning in it.

This retreat is playing out as a digital disappearing act. We are witnessing a systematic "going dark" of EDIA resources, a phenomenon confirmed by recent reports from both the tech and academic sectors. Major technology giants like Google and Meta have quietly cut staffing for their DEI programs or ceased releasing the detailed diversity reports that once served as industry benchmarks for transparency. In the academic and scientific computing sphere, the erasure is even more literal. Universities and research institutions, bowing to mounting political pressure and the threat of funding freezes, have begun scrubbing their public-facing websites. Diversity statements are being deleted from hiring pages at major institutions like MIT and the University of Utah, and entire directories of LGBTQ+ faculty and support resources are vanishing behind firewalls or 404 error codes, as seen recently at Northwestern University and the University of Chicago. My research highlights that this administrative action has moved beyond simple funding cuts to the explicit censorship of language.
We are seeing a sanitisation of vocabulary where terms like "equity," "privilege," and "systemic" are being surgically removed from mission statements to avoid triggering political targeting or losing federal grants. This is a survival strategy for the institutions, as a way to fly under the radar of "anti-woke" legislation, but for the individuals relying on those support structures, it is an act of erasure. It signals that our identity is now a liability too dangerous to even name in public.
[Table: The Digital Disappearing Act: A Record of Erasure]
This retreat forces us to confront the uncomfortable truth I wrote about in my book, An Unsuitable Job for a Woman. We have built our house on sand. For too long, diversity work in tech has relied on the volunteer time of the very people it is supposed to help. We have treated equity as a form of good citizenship, a vibe we create without actual resources. We have relied on the passion of the marginalised to fix the systems that marginalise them. We have asked women to do the heavy lifting of repairing a culture that was built to exclude them.
I have called this set of observations the "intimacies of labour" (again, see my book). It is the identity work that women must perform just to exist in these professional spaces. It is the mental calculus of deciding whether to be one of the boys or to embrace the label of "Woman in Tech." It is the exhausting effort of bridging the gap between our gender and our professional legitimacy. We are expected to be soft enough to be likeable but hard enough to be competent. We are expected to fix the pipeline while navigating a workplace designed for a man who has no caregiving responsibilities and a wife at home to manage his life. As I see it, the label "Woman in Tech" itself has become a straitjacket. It is a status characteristic that signals difference rather than competence. It implies that our gender is the problem to be solved. It suggests that if we just had more training, or more confidence, or better negotiation skills, the inequality would vanish. This deficit model absolves the industry of its responsibility. It allows tech companies to paste pictures of diverse faces on their websites while their internal cultures remain toxic and exclusionary.

Now we are doing this heavy lifting while swimming with sharks. The emotional labour required to sustain our careers is compounded by the fear of political and professional backlash. The hostility from leadership is palpable, with diversity initiatives facing increasing backlash under the guise of preventing reverse discrimination or protecting free speech. My research uncovered reports of senior leaders celebrating the savings from cutting diversity programs and framing equity work as a distraction from excellence. The message from the top is clear: diversity is a waste of resources. AI-bro culture actively resists the representation and inclusion of women. We are shark bait. We are also F***-ing exhausted.
The women I interviewed told me they are considering leaving the field altogether because it is not worth the constant battle. They described the current environment as a full-blown assault on their right to exist. If we lose this generation of women in HPC and technology, we do not just lose diversity numbers. We lose innovation. We lose the future.

This is why we must stop worshipping at the altar of metrics. In the data-driven world of computing, we love to count things. We count heads. We count the percentage of women in the room. But Goodhart's Law reminds us that when a metric becomes a target, it ceases to be a good metric. We have focused on the appearance of diversity rather than the reality of inclusion. We have allowed organisations to game the system and test the water without ever jumping in. The result is a surface-level diversity that collapses the moment the political weather changes. We barely track retention rates for underrepresented groups because we have been too busy counting who walks in the door to notice who is walking out. We need to stop asking how many women are here and start asking who feels safe enough to speak here.

The path forward requires us to name the fear. We must stop pretending this is business as usual. We need to acknowledge the hostility from leadership and the fear of repercussions. We cannot fight a shark we refuse to see. We must reclaim the language of equity and refuse to let the anti-woke agenda define our terms. Diversity is not reverse discrimination. It is essential for robust science and innovation. We must also reject the trap of the volunteer revolution. The isolated volunteer is vulnerable to the shark. The coalition is a fortress. We need to look at the broader landscape of resistance, such as the lawsuits organised by civil rights organisations and the collective actions taken by NGOs and educational bodies.
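To make the Goodhart point concrete, here is a toy sketch in Python of the gap between counting who walks in the door and counting who stays. Every figure is invented for illustration: an organisation that targets only the hiring number can report the same healthy intake year after year while retaining almost nobody.

```python
# Toy illustration of Goodhart's Law applied to diversity metrics:
# the hiring headcount (the target) looks healthy even while the
# cohort is walking out. All figures are invented for illustration.

def retention_rate(hired: int, still_here: int) -> float:
    """Fraction of a hiring cohort still in post (0.0 if nobody was hired)."""
    return still_here / hired if hired else 0.0

# Hypothetical annual cohorts: the gameable metric (hired) is constant,
# while the metric nobody reports (retention) collapses.
cohorts = {
    2022: {"hired": 30, "still_here": 6},
    2023: {"hired": 30, "still_here": 4},
    2024: {"hired": 30, "still_here": 3},
}

for year, c in cohorts.items():
    rate = retention_rate(c["hired"], c["still_here"])
    print(f"{year}: hired {c['hired']}, retention {rate:.0%}")
```

The design point is simply which column appears in the annual report: the hired column is a target and therefore, per Goodhart, no longer a good measure; the retention column is the one that would expose the shark.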
We need to connect our internal struggles with these external movements. We need to build structures that do not rely on the free labour of women to sustain them. We need to professionalise this work and resource it properly. Most importantly, we must refuse to accept the premise that we are the problem. The problem is not women. The problem is a dominant bro-tech culture that protects its own power at the expense of everyone else. The problem is an industry that demands we do the heavy lifting of inclusion while it actively dismantles the supports we built.

SO WHAT CAN WE DO ABOUT THIS?

I've been reading, with avid curiosity and a certain gritting of teeth, the popular science writings of Yuval Noah Harari recently (good to go way outside your comfort zone). As I understand it, Harari's main conceit is that Homo sapiens rules the world because we are the only animal that can cooperate flexibly in large numbers. We do this by creating shared stories and experiences. BUT, all too often, these stories are built to favour money, nations, and corporations (uh-oh). For the last thirty years, the technology sector has operated on a specific, damaging fiction: the myth that supporting diversity and accessibility is a moral luxury, a charitable add-on to the real machinery of innovation. We have told ourselves that the inclusion of women and minorities is a matter of politeness, rather than a matter of default support. Over the past six to nine months, AI-bro and tech-bro culture has moved to celebrating the active exclusion of whole groups of people out in the open. A homogeneous team building a global system is not just unfair; it is computationally incompetent. It creates blind spots that are no longer just social inconveniences but systemic vulnerabilities. If we wish to own the ocean rather than merely survive the swim, we must stop treating equity as a social crusade and start treating it as an engineering specification.
We need solutions that do not rely on the benevolence of the powerful or the exhaustion of the marginalised. We need structural hacks that rewrite the code of the institution itself. (I write about this in my book btw, arguing that we should not ask minority groups to advocate solely by themselves). So here's my shopping list of stuff to action (in response to the question of "what can/should we do to support DEI in the current climate?" asked both at CASC and ExoBioSim events): First, we must reclassify homogeneity as a security risk. In cybersecurity, we do not ask the virus to be nicer; we build firewalls. Similarly, we should stop asking male-dominated teams to "be more inclusive" and start treating extreme gender imbalances as a critical failure in project auditing. Funding bodies and shareholders should view a team of ten men not as a culture fit, but as a high-risk asset prone to groupthink and data bias. We must demand that Red Teaming (as in the practice of rigorously challenging plans and code) be applied to human capital. If a team lacks diverse cognitive inputs, it should be flagged as unstable, and its funding paused until the security flaw is patched. This shifts the burden from the woman raising her hand to groups of people auditing together. Second, we must shatter the illusion of the meritocracy by introducing radical financial accountability. For decades, we have allowed leaders to outsource their conscience to volunteer committees. We must now attach their survival to the survival of their staff. Executive bonuses and grant renewals should be mathematically tethered not to recruitment, which is easy, but to retention, which is hard. So, in this scenario, if the women leave, the money leaves. If the shark drives talent away, the shark starves. This aligns the selfish interest of the leader with the collective health of the group. Finally, we must harness the power of the Strategic Glitch.
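The "if the women leave, the money leaves" mechanism could be expressed as a simple payout rule. What follows is purely a sketch: the thresholds, the base bonus, and the function name are all invented for illustration, not a real compensation scheme. It scales an executive bonus by cohort retention rather than recruitment:

```python
# Hypothetical sketch of a bonus tethered to retention, not recruitment.
# Thresholds and figures are invented purely for illustration.

def bonus_multiplier(retention, floor=0.5, target=0.85):
    """Scale a bonus between 0 and 1 based on cohort retention.

    At or below `floor` retention the bonus is zero; at or above
    `target` it pays in full; in between it scales linearly.
    """
    if retention <= floor:
        return 0.0
    if retention >= target:
        return 1.0
    return (retention - floor) / (target - floor)

base_bonus = 100_000  # hypothetical figure
for rate in (0.40, 0.70, 0.90):
    payout = base_bonus * bonus_multiplier(rate)
    print(f"retention {rate:.0%} -> bonus {payout:,.0f}")
```

The design choice is the point: the leader's selfish incentive now collapses to zero exactly when the shark empties the building, with no committee required to argue the case.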
The current system functions because women and minorities act as the shock absorbers, smoothing out the friction of a toxic culture with their unpaid emotional labour. We organise the events, we mentor the juniors, we soften the blows. It is time to stop. We must allow the friction to be felt. If the "good citizenship" work is not paid, it should not be done. And louder at the back, IF THE GOOD CITIZENSHIP WORK IS NOT PAID, IT SHOULD NOT BE DONE. Let the panel be all male. Let the report go unwritten. Let the vibe of inclusivity collapse so that the raw, jagged reality of the exclusion is visible to everyone. I am advocating for a dysfunctional system to be allowed to crash. Martyr's Trap I can feel the pushback on my 'finally' point. Is this the Martyr's Trap? So let me join the dots more comprehensively. The current system functions only because women mask the liabilities. We fix the bad PR before it happens. We smooth over the HR disasters. We make the dysfunction look functional. By stopping, we are not quitting; we are simply returning the risk to its owners. Here's my take: in withdrawing from unpaid effort, we are handing leaders back their liability. We push them into a market that will punish their blindness. An all-male AI development team is not a club; it is a lawsuit waiting to happen. It is a product recall in the making. It is an evolutionary dead end. Case in point: an AI-enabled teddy bear caught talking about sex and knives. We are not fighting a battle for "kindness." We are fighting for the cognitive capacity of our species to navigate the future. The shark is in the water only because we keep feeding it. It is time to change the diet. How to Avoid the Trap: The Art of the "Bureaucratic No" However, the danger to the individual is real. I've spoken to women and minority groups who have lost professional roles, lost work, lost contracts, lost their jobs.
If you simply stop doing the work, you risk working in a culture that cuts across your moral code and violates your sense of right and wrong. For many, the anxiety here is too much and they self-censor, or disappear altogether. To avoid this, the glitch, in the way I see it, must be engineered with the same precision as the system itself. We do not just stop; we reclassify. How?
A Note on the Architecture of my Argument Hold the line! I must, however, pause to acknowledge the specific architecture of my own position. I write this as a researcher based in the UK, where despite the turbulence of the sector, I possess a degree of contractual security that many of my colleagues in the US tech industry or precarious academic roles do not. It is undeniably easier to advocate for a strategic crash when you are standing on relatively firm ground. Yet, it is crucial to state that the strategies I outline here, the reclassification of labour, the risk assessments, the collective refusal, are not abstract theories born in the safety of a university office. They are the direct, distilled output of the research I have conducted this year. They echo the exact frustrations and desires expressed to me by the computational professionals, the HPC engineers, and the data scientists I interviewed. They told me they felt as though they "were holding up the sky". They told me they "wanted to let go". These recommendations reflect exactly what people I have spoken to told me they needed to do to survive. The Ultimate Reframing I often read articles concerning the "They" in reference to toxic leadership and individuals in positions of power who directly impact tech culture. Let's be specific here. "They" are the beneficiaries of a system designed to extract value from our silence. They are the leaders who view equity as overhead and diversity as a cosmetic feature rather than a structural necessity. They want us to act as the invisible load-bearing walls of an institution they treat as a mere façade, absorbing the stress so they can occupy the penthouse without feeling the tremors. By refusing to do the unpaid work, we are not abandoning the structure. We are unionising the resistance. We are collectively handing back the weight of their own negligence. This is the power of the coalition and the trade union. 
It transforms a personal refusal into a structural renegotiation. When we stand together to enact this Strategic Glitch, we force the leadership to confront the cost of their own apathy. If they choose to let the infrastructure collapse rather than resource it properly, then let it fall. We are not the help. We are the engineers. And when the dust settles, it will be our collective blueprint that determines what rises next.
The conscientious objector. I find myself looking at her with a mix of profound admiration and a distinct, sharp pang of the wiggly gut-guilts.
Recently, I’ve seen allies in academia, scholars I deeply respect, drawing a line in the sand. They are showing us what it means to resist the AI-creep. They are calling on journals to allow authors to explicitly state: “No generative AI was used to prepare or write any part of this article.” I worry about the AI already embedded in journal gateways to check the paper references and editorial style... but let's park that part of the exchange. For now. Actively resisting AI in the research publication process is a beautiful, defiant stance. It is a reclaiming of human labour, a protection of the cognitive sweat that makes research and writing an act of thinking rather than an act of prompting. To be an AI Conscientious Objector is to choose the protection of human values in a much-muddied ecosystem. It is a moral clarity and a stance that I crave. But, as I sit here, staring at the blinking cursor of my Outlook inbox, where Microsoft’s Copilot is already, without my asking, suggesting how I might reply to a student, I realise that this clarity is out of my reach. During the research publication process, I want to object. God, I want to object. But I am tired and in a bind. And more importantly, I am entangled. The Myth of the Binary Choice The current discourse around AI in higher education is trapping us in a binary that is as harmful as it is false. We are told we either "adopt" or we "resist." We are either the tech-utopian evangelists or the Luddite holdouts. I'm going to drift away now from journal publication gateways to broader Higher Education policy on AI. That is a hot mess. The current discourse frames AI adoption as a simple yes/no proposition, implying that I can opt out by sheer force of will. It suggests I can hang a 'Do Not Disturb' sign on my professional life and the algorithms will politely walk on by. But the door doesn't lock. The algorithms are built into the hinges. Ok, perhaps I do not have sufficient willpower; is this the problem?
Let’s look at the architecture of my working day. My university, like yours, has integrated AI into the very bedrock of our infrastructure. It is in the Blackboard and Moodle sites where I must upload my teaching and research materials; these platforms now use AI for accessibility scanning and content prediction. It also creates quizzes. It is in the email client I cannot turn off. It is in the "suggested actions" in my calendar. I receive automatic summaries of meetings I do and do not attend, with action points. (Handy or hell?) To be a true conscientious objector in 2025 goes way beyond refusing to use ChatGPT to write a paper. It would require me to dismantle the entire digital scaffolding of my employment. It is a state of resistance I cannot survive. The Policing vs. Vague Innovation Thirst Trap I have spent the last few days trawling through the digital archives of higher education, reading the public domain AI policies of universities across the UK, the US, and Australia. It has been a descent into a very specific kind of bureaucratic bleakness. The landscape I found is arid and it is hostile. The tone of these documents, ranging from the draconian to the delusionally optimistic, reveals exactly why so many of us feel trapped. In parsing the legalese and the strategic ambiguity, I have realised that our institutions generally fall into two distinct camps, neither of which seems to care about the consent of the humans doing the actual work. On one side, we have the Policing Camp, which views every student as a potential criminal and every AI tool as a weapon to be confiscated. On the other, we have the Vague Innovation Camp, a corporate thirst trap that uses buzzwords like 'literacy', 'employability' (ha), and 'opportunity' to mask a massive, unfunded mandate for staff up-skilling. So, here is a map of the cages as I see them.
I have analysed the public policies of major institutions globally, and what becomes immediately clear is that these texts are rarely static. They are euphemistically called "living documents." In theory, this suggests agility and responsiveness. In practice, for the staff whose labour they govern, a "living document" is a nightmare. It means the rules of engagement are quietly updated in the dead of night, often without announcement or consultation. The ground beneath our feet is being shifted by administrative edits, turning our daily workflow into a game of compliance roulette. I am using this mapping to anchor my argument for the Reluctant Cyborg. This analysis proves that current AI Education policy is often just a shifting set of demands that requires you to constantly up-skill and adapt while offering zero protection for your intellectual property, your data privacy, or your right to say "no." The reality is stark: No one has sufficient policy for what they are actually doing. We are building the plane while flying it, but the university has decided that the cost of the fuel, our cognitive load (God help you if you are already burnt out or neurodivergent), our creative data, and our autonomy, is a price they are willing to let us pay. TLDR: No-one has sufficient policy for what they are doing.
Table 1: Policing & Surveillance Camp
Table 2: The "Vague Innovation" Camp
Table 3: The "Explicit Policy" Camp (Rare)
What is Missing? (The Labour Hole)
In analysing these documents, what is absent is even more telling than what is present.
* No Right to Disconnect: None of these policies mention a staff member's right not to use AI in their workflow (e.g., turning off Copilot in Outlook).
* No Intellectual Property Protection for Staff: They talk about protecting university data, but rarely about the fact that your lectures, notes, and feedback are being used to train the models you are forced to use.
* No Workload Allocation: "Becoming AI Literate" (Russell Group) takes hours of weekly study. None of these policies allocate hours in the workload model for this "mandatory" learning.
Key Pull Quotes
* From Stanford: "Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person." (Translation: If you don't write a specific policy for every assignment, you are failing.)
* From Yale: "Faculty members are expected to provide clear instructions on the permitted use of generative AI tools for academic work and requirements for attribution. Likewise, students are expected to follow their instructors’ guidelines about permitted use of AI for coursework." (Translation: Use AI tools to be efficient, but if the tool lies, it's your fault.)
* From Russell Group: "Universities will support students and staff to become AI-literate." (Translation: Resistance is illiteracy.)
We are researching and teaching in a cage built of vague principles and guidance that shifts the liability and labour onto the individual, while the door (consent) has been removed entirely.
Camp 1: These policies view AI exclusively as a weapon in the hands of cheating students.
* Carnegie Mellon University offers syllabus language that explicitly "forbids the use of ChatGPT or any other generative AI tools at all stages of the work process, including brainstorming."
* Monash University frames unauthorised AI use as a straight "breach of academic integrity," placing the burden entirely on the individual to prove their innocence.
This approach turns us into cops. It demands we spend our precious marking time acting as forensic digital investigators, scanning for the smell of synthetic text. It destroys the trust between learner and teacher.
Camp 2 is perhaps more insidious. These are the policies that use words like "opportunity," "literacy," and "enhancement" to mask the increase in our workload.
* The Russell Group Principles (UK) state that "universities will support students and staff to become AI-literate." Sounds nice, right? But "support" here is often code for "mandatory up-skilling on top of your existing workload."
* University College London (UCL) tells students that AI can "reduce the need for critical engagement," yet simultaneously encourages its use for "ideas generation or planning."
What is missing from all of these documents? Consent. Nowhere does it say: "Staff may choose not to use tools that scrape their intellectual property." Nowhere does it say: "We will not integrate AI into your email client without your permission." The policy is: You're on your own. Use it, but don't get caught using it wrong. Be efficient, but don't be lazy. Be transparent, but don't slow down.
Neurodiverse Cyborg
And then there is the body. Or, more specifically, the neurodiverse brain in a system designed for neurotypical endurance. My research area is technology to support assistive learning and neurodiversity. I have spent years advocating for tools that level the playing field. For years, I have relied on software like Dragon Dictate to bridge the gap between the speed of my thoughts and the limits of my executive function or physical capacity.
Here lies the rub: The tools I use to survive are now AI tools, whether I opted for those elements or not. Dragon Dictate, Grammarly, the screen readers, the speech-to-text synthesisers, they have all been retrofitted with Generative AI. To conscientiously object to AI is, for me, to conscientiously object to the ramp that lets me enter the building. The volume of work required of a modern academic is crushing. For someone who is neurodiverse, the cognitive load of administrative violence, the forms, the emails, the compliance metrics, is a mountain that grows daily. AI offers a scaffold. It offers a way to handle the sludge work so I can save my remaining spoons for deep thinking. I need access to work tools. I must persist in this space (my daughter and I depend on me supporting us). But how do I reconcile this dependency with my deep ethical discomfort? How is this healthy? I feel a lot of guilt and shame. I spent my entire education knowing I wasn’t good enough in that system. A career as an academic means you experience being continually flattened, or at least having your extreme edges rounded off to ‘fit’ into disciplines, theory, pedagogies and other buckets. Now there’s another ‘unfit’ moment. I feel/know the architecture of AI is toxic and dangerous. I know the energy required by data centres is killing the planet. I also know that current investment in technology is in AI, that this is integrated into many of the points of contact we have daily, and that I cannot remove myself fully from them. The Tech Thirst Trap of the Business School I teach Business and Computing in a Business School. My students are not entering a world where they can choose to be purists. They are entering industries that demand fluency in these tools. If I refuse to engage with AI, if I treat it solely as a plague to be avoided, I am failing to transfer key skills and to prepare them for the professional tools they will be automatically adopting.
I have to teach them the skills industry wants, even as I loathe the extractionist logic of that industry. I have to show them how to use the tool, while simultaneously teaching them to critique the hand that holds it. It is a dizzying, hypocritical dance. It does make me feel unwell. From Objector to Critical Survivor So, where does that leave me? I cannot be the Conscientious Objector, standing pure on the outside of the machine. My survival depends on the machine. But I refuse to be the uncritical cheerleader. Perhaps we need a new category. Not the "Objector," but the Critical Survivor. Or perhaps the Reluctant Cyborg. We need to acknowledge that adoption is happening with and without our consent. The "conscientious objection" image is powerful because it highlights agency. But for many of us, disabled scholars, overworked staff, precarious workers, and especially our students, that agency is an illusion. I am deeply unhappy with the current state of affairs. I resent that my emails are being scraped to train a model I didn't ask for. I resent that my assistive tools now come with a side order of environmental destruction and copyright theft. But I am here. I am inside the cage. And if I am to remain a curious learner, I cannot simply close my eyes and pretend the beast isn't in here with me. I have to look it in the eye. I have to figure out how to use it to break the bars, rather than letting it consume me. We must stop shaming the individuals using AI to survive a collapsing infrastructure, and instead direct our rage at the institutions that broke the system so thoroughly that automation became the only scaffold left. Image, Dragon Toes and Nose, by Mariann Hardey, 2025 A recent comment on my LinkedIn feed stopped me in my tracks. A superstar researcher in dyslexia asked a question that was both practical and profound. She asked: What is one adjustment that has made an AI tool actually work for your neurodivergent students?
It is a fabulous question. It is the kind of question that comes from a place of care and a desire for solutions. Yet as I sat down to answer it, I realised I could not provide a bucket answer. There is no single app, no specific prompt, and no digital overlay that solves the equation of the neurodivergent mind. To offer a list of tools would be dishonest. It would imply that neurodiversity is a static problem waiting for a software patch. Tempting though, right? The reality of my lived experience, and the experience of so many others, is that a one-size-fits-all approach does not just fail. It suffocates. The Shifting Sands of Survival Every morning I wake up and face a different internal landscape. The executive function strategies that made me a productivity machine yesterday might be the very things that paralyse me today. Yesterday, I was a master of logistics. I managed administrative tasks with ease. The tools that helped were structural and visual. Trello was my best friend. It organised my chaos into neat, satisfying cards. It felt like a scaffold holding up a building. Today is different. Today, I am skirting the edges of burnout. That same Trello board is no longer a scaffold. It is a place of overwhelm. The sheer volume of information on the screen is a sensory assault. Padlet is not my friend. The notifications are not helpful nudges. They are demands I cannot meet. So I turn to pen and paper. I retreat to the tactile, slow friction of ink on a page. Later, I might take a photo of these notes and ask an AI to transcribe my handwriting into a Google Doc. But note the distinction here. The AI is not the solution. The AI is merely the janitor cleaning up after the real work was done. The solution was the permission to abandon the digital tool entirely. The Carnival of Online Diagnosis This brings me to my deepest fear regarding the intersection of AI and neurodiversity. 
I worry that the nuanced (aghhh, I am now allergic to this word as it is over-used by AIs, but here we are…), human-led pathway to diagnosis will soon be paved over by an algorithm. If you spend any time online, you have seen them. The ads are relentless. They are predatory and harmful. They scorch the earth of genuine clinical inquiry with thirty-second clips designed to pathologise normal human behaviour. A frantic millennial points at text bubbles floating above their head. Do you doom scroll? You have ADHD. Do you find small talk exhausting? You are Autistic. Do you have a drawer full of cables you might need one day? Here is a subscription to our app. These online tests are a farce. They are digital carnival games rigged to funnel you toward a monthly payment plan. They rely on the Barnum Effect, offering statements so vague that they could apply to anyone with a pulse and a smartphone. Do you sometimes lose focus? Do you ever feel tired? (uh-oh). Of course you do. You are a human being alive in late-stage capitalism. And everything is unsettling. (Ok, I am about to nerd out about this aspect, a classic autistic trait, stay with me). The Barnum Effect as the Digital Clinic: To understand why the "Are You ADHD?" ads on TikTok/Insta and so on feel so uncannily accurate, and why they are so dangerous, we have to go back to the 1940s. The Barnum Effect (also known as the Forer Effect, thank you Wikipedia) is a psychological phenomenon where individuals believe that personality descriptions apply specifically to them, even though the description is actually filled with information that applies to everyone. It is named after the showman P.T. Barnum, who famously declared that a good circus has "something for everyone." In the classic 1948 experiment, psychologist Bertram Forer gave his students a personality test. A week later, he handed each student a unique psychological profile based on their answers. The students were amazed.
They rated the accuracy of these profiles as 4.26 out of 5. Ooooo, science, right? Every single student had received the exact same text, which Forer had copied from a newsstand astrology book. It contained statements like: “You have a tendency to be critical of yourself.” “At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved.” “You have a great deal of unused capacity which you have not turned to your advantage.” These are Barnum Statements. They work because they are high-frequency, low-stakes generalisations. They rely on subjective validation: our brain's desire to find connections between generic information and our own lives. The Weaponisation of the Barnum Effect In the 20th century, the Barnum Effect was mostly used for harmless vanity. Horoscopes and Myers-Briggs tests used "flattery" to keep us hooked. They told us we were "critical thinkers" or "misunderstood geniuses." They played on the Pollyanna Principle, where we are more likely to accept positive feedback than negative feedback. But the algorithm has mutated this effect into something far more sinister. We are now witnessing a Medicalised Barnum Effect. The modern algorithmic ad does not try to flatter you. It tries to pathologise you. Instead of telling you that you are "disciplined but insecure" (a classic Forer statement), the modern Instagram ad asks: “Do you have a drawer full of cables you might need one day?” “Do you hate small talk?” “Do you doom scroll at night because you didn’t feel productive during the day?” “Are you a woman who is spacey? Forgetful? Or chatty?” (basically a person with a personality) These are the new Barnum Statements. They take universal human experiences, boredom, clutter, procrastination, social fatigue, and reframe them as symptoms. These diagnostic algorithms are worse than a polygraph test on The Secret Lives of Mormon Wives. At least reality TV admits it is spectacle. 
At least when the wires are hooked up on screen, we know it is for the drama. These online tools masquerade as medicine. They wear the lab coat of authority, but underneath is nothing but a data-harvesting engine. They reduce the complex, lifelong architecture of a neurodivergent brain into a binary output. Pass or Fail. Subscribe or Leave. Real diagnosis is an archaeology of the self. It requires digging through layers of masking, trauma, and learned behaviours. It requires a human witness who can see the difference between anxiety and autism, or between trauma and ADHD. An algorithm cannot see the history in your eyes. It can only calculate your click-through rate. My fear is that future generations, including students I teach, will be handed a QR code instead of a conversation. The Threat of Flat Stanley We are promised a future of seamless voice interactions with AI. I assume this will function much like the speech-to-text apps I currently use, which have been lifesavers at times. However, there is a cost to this convenience that we rarely discuss. When I speak to an AI, I am feeding the machine. My data, my cadence, and my real voice are harvested to train a model that prioritises averages and norms. As a neurodivergent woman, I fear what happens when my distinct creativity is processed through these algorithms. Will (yep, there goes my brain) my thoughts be flattened out like Flat Stanley? Will the jagged, interesting edges of my thinking be sanded down to fit a generic model of "professional communication"? Side note: ask an AI to compose an ‘out of office for a university professor’, and the default pronoun will be ‘He’. Isn’t that something. No, thank you. I want to keep my womanly dimensions. Diagnoses and Dragons This year has been a watershed moment for my daughter and me. We both received new diagnoses. I am dyslexic, and I have now been diagnosed as autistic. My daughter has started her own journey during her school years.
I look at her and I see the difference in our paths. I am an adult who survived my entire education without support. I built my coping strategies out of necessity and instinct. My daughter has only been in school since 2019. She has not yet modelled these extensive defences. What she does have is a strong, innate sense of self. She knows what she likes. She knows what causes her to feel "blurgh." She knows what is fun. She is fun. It would be horrific if a diagnosis report simply stated: Have D use an AI. Why? For what purpose? If we simply shovel AI tools at her, we are bypassing the human work of understanding how she learns. We are replacing a helping hand with a predictive text generator. My fear is that we are heading toward a future in Education where human support is considered out of reach. By the time my daughter reaches university, I imagine professional support services will be rarer than a dragon with golden toes. I fear students will be handed a generic "AI Toolkit" and told to get on with it. This is already happening, btw. The Philosophy of the Ape and the Wolf Mark Rowlands (I'm reading a lot of his work lately) reminds us that there is a difference between the instrumental value of the "ape", who schemes and plans for a future result, and the intrinsic value of the "wolf", who lives entirely in the moment of being. AI is the ultimate tool of the ape. It is obsessed with efficiency, output, and results. It tries to civilise the wildness of our thoughts. But the neurodivergent mind often has more of the wolf in it. It does not always want to be efficient. It wants to wander. It wants to make connections that an algorithm would label as errors. I can answer the original question without a single piece of software. What makes the work possible for me? What would make it possible for my daughter? Flexibility and time. That is it. We need the flexibility to use Trello on Tuesday and to burn out on Wednesday.
We need the time to process the world without a predictive engine rushing us to the end of the sentence. If you apply flexibility and time to neurodiversity, you will be surprised by what happens. We have sophisticated, instinctive strategies that bloom when we are not being forced into a standardised shape. This is not a superpower btw. The answer is not in the code. It is in the space we leave for the human. Image by Mariann Hardey, 2025 My Utopian Double, Simon’s Argument, and the Oligarchs Who Own Us
Writing this post is an act of memory. It is also an act of urgent, unfinished conversation. Last year, my dearest friend and intellectual collaborator, Simon J. James, and I wrote a chapter together. It was called "Wellsian Doubles: Digital Space as Modern Utopia." This year, Simon died suddenly and unexpectedly, our research collaborations cut short, my dearest friendship lost. Re-reading our words in the shadow of that loss, and in the glaring, toxic light of our current technological landscape, I find our arguments have accrued a haunting weight. The intellectual journey we took, weaving Simon's brilliant and exciting scholarly understanding of H.G. Wells with my own research into digital life, now feels less like an academic exercise and more like a map we were drawing of a territory we had just begun to explore. The warnings we issued, the connections we made, now seem desperately prescient. The world is dominated by AI, a term that has become shorthand for a future being rapidly and unilaterally defined by a small, homogenous class of tech oligarchs. Their vision is narrow: neuronormative, male, and relentlessly dystopian, dressed in the flimsy language of utopian progress. Simon and I were writing about a "digital utopia," but the future we are being sold is its inverse. The conversation he and I started must now continue. The Man Who Met His Perfected Self Our argument hinged on a moment of profound, uncanny self-confrontation in H.G. Wells’s 1905 A Modern Utopia. The narrator, a thinly veiled Wells, is transported to a parallel utopian world. To be registered by this perfect global state, he must provide his thumbprint. This biometric data, of course, already exists. It belongs to his Utopian double. The encounter that follows is not one of joy, but of critique. This ‘other’ Wells is a ‘perfected’ version of the narrator.
He is "a little taller than I, younger looking and sounder looking; he has missed an illness or so, and there is no scar over his eye". This double is not genetically different; he is the product of superior social conditions, a "superior being" grown from the same "natural... material". Wells, the narrator, is forced to see himself not as he is, but as he could have been. He is confronted with the "waste of all the fine irrecoverable loyalties and passions of my youth": all the potential squandered, all the scars inflicted, by his own flawed, imperfect world. The two Wellses stand as a "grotesque 'before and after' image," a living testament to the power of a society to either elevate or destroy the individual. The Digital Twin: Our Utopian Phantom The argument Simon and I built connects this 1905 literary device directly to the central, lived experience of 21st-century digital life. We are all, now, the narrator in A Modern Utopia. We all live in constant, immediate dialogue with our own perfected doubles. This double is the curated social media feed, the edited LinkedIn profile, the flawless, performative self we project onto a myriad of digital screens. This is the "digital self-work" I have written about as the relentless, iterative, and anxious labour of crafting an "enhanced iteration of our own selves". We are all engaged in building a digital twin, an aspirational phantom who, like Wells's double, has "missed an illness or so" and bears no scars. Like Wells, we "come to meet ourselves" in this digital space, and we almost always find our real, "mucky, humbling" flesh-and-blood existence wanting. We are caught in a permanent state of comparison, not just with others, but with the perfected, artificial version of our own being. The Dystopian Pivot: A Question of Ownership Here, the entire utopian-dystopian axis of our argument pivots on a single, devastating question.
It is the question that now defines our digital reality: Can a self-portrait be utopian if the canvas, the paint, the brushes, and the gallery are all owned by a corporation that profits from the exhibition of your perfected image? The answer is, and must be, a firm no. This is the core of the e-topia. We do not own our doubles. We are not the beneficiaries of our own "digital self-work"; we are the raw material. Our "digital twins" are the property not of the self, but of the corporation. We are performing our identities within an architecture we did not build and whose blueprints we are not allowed to see. And this architecture is far from neutral. It is the product of the very tech oligarchies, the new "ruling elite," that are defining our age. Their vision of "perfection" is the one that is algorithmically rewarded. As we argue, the "enduring social hierarchies, encompassing gender, age demographics, commodified cultural expression, sexuality... and the body as a major locus of regulation" have not been erased. They have been amplified, codified, and turned into vectors for profit. The algorithm is not a mirror; it is a mould, enforcing conformity to a narrow, marketable, and often deeply damaging ideal. This is the male AI tech dystopia in practice, a system that mistakes surveillance for community and data-harvesting for connection. This is a system that commercialises a children's teddy bear with an AI chatbot that offered advice on where to find knives and how to light matches, as well as explanations of sexual kinks. This is where the cage is built. In another piece, I reflected on our impulse to build "cages" around AI, to treat it as something that must be constrained and controlled. But in re-reading the chapter Simon and I wrote, I see the parallel with horrifying clarity: the cages we build for AI are just mirrors of the cages we have already built for ourselves. The tech oligarchs are not building a truly curious or creative intelligence.
They are building an administrator. This is the most overlooked revelation in Wells's book. His perfected double, the man raised in Utopia, is not a writer, not a creator, not a public intellectual. He is an administrator, one of the "samurai" elite who manage the system. His specialisation, in fact, is "the psychology of criminals". Even in a supposed utopia, his job is to manage the "imperfect or abject". This is the goal of the tech dystopia. It doesn't want creators; it wants managers. It doesn't want creativity, spontaneity and change; it wants "certainty and stability". The "perfected" digital self, the flawless digital twin, is not a liberated self. It is an administered self, a self that has internalised the logic of the cage, performing its perfection for the "eye of the State", which is now the eye of the algorithm.

The Price of Admission: Total Surveillance

The most chilling parallel, the one that truly closes the trap, is that Wells himself understood the price of his perfect world. Simon and I pointed to Wells's own chilling admission that his utopia required a near-total loss of privacy, a sacrifice Wells, and many others today, seem to deem "worth making". Wells's utopian state is only possible through constant surveillance, through "the eye of the State that is now slowly beginning to apprehend our existence... focussing itself upon us with a growing astonishment and interrogation". This parallel is precise. We have accepted the "invasion of life by the machine" that Wells predicted. We have made the same Faustian bargain, trading our privacy, our autonomy, our "irrecoverable loyalties and passions" for the "privilege" of performing our perfected selves in the corporate-owned digital space. The "eye of the State" is now the eye of the corporation, and its surveillance is total. And what of the utopian double, the perfected Wells? He wasn't a writer, a creator, or a critic. He was an administrator. This, too, was a warning.
The system does not want critics. It does not want artists. It wants managers, it wants compliant users, it wants data points. In Wells's own words, the utopian world has a "death instinct" for the genre of utopian writing itself, a desire to "perfect the world so far as to render such a genre of writing unnecessary". It seeks to cancel the very critique that spawned it.

A "Poiesis of the Self": The Unfinished Argument

These thoughts bring me back to the idea of the learner and curiosity I have previously posted about. If we are building cages for AI, we are simultaneously killing our own curiosity. The act of creation, of utopian thinking, is an act of profound, open-ended curiosity. It is what Simon and I, following Wells, called poiesis: a state of creative, restless, self-improving change. This poiesis is the exact opposite of the cage. It is the "universal becoming of individualities". It is Wells's insistence that "nothing endures, nothing is precise and certain... perfection is the mere repudiation of that ineluctable marginal inexactitude which is the mysterious inmost quality of Being". The male AI tech dystopia is built on the repudiation of this. It is a cult of "perfection," of "certainty," of "precise" and "restrictive" categorisation. It cannot tolerate "marginal inexactitude," because that is where humanity, and genuine learning, resides. If AI is a "learner," what are we teaching it? We are teaching it to be an administrator of our cages. We are teaching it that the poietic self, the flawed, scarred, creative, unpredictable narrator, is "abject" and must be "corrected" into the flawless, manageable, and ultimately sterile administrator. It is this final part of our shared argument that I now hold onto. Simon and I saw a potential way through this deterministic, corporate-owned dystopia. We argued that utopia, for Wells, was not a final, static place or a "perfection."
He insisted that his modern utopia must be "in motion," "fluid and tidal". The goal was not being, but a "universal becoming of individualities". To achieve this, Wells used a concept from Plato: poiesis. Again, poiesis is creative action, the act of making, of bringing something new into being. Wells's utopia needed poietic inhabitants to keep it in a constant "state of creative change". Simon and I proposed that digital modes offer a "poiesis of the self". This is the hopeful path, the difficult, necessary act of intellectual and personal resistance. It is the refusal to be a static, reified, commodified product. It is the insistence on reclaiming our "co-creative" agency, to see our digital lives not as a performance for an algorithm, but as a "resolute engagement with the world and the self". Our identity, like Wells's utopia, must be "fluid rather than fixed". Utopia, we concluded, is "an ever-ongoing project". Simon is gone. But our shared project, the poiesis of our collaboration, is not. The task now is to rescue our digital lives from the administrators, to refuse to be mere "flawed and reified representation[s]", and to insist, against the deterministic pull of the oligarchs, that a "better way of being" is still one we can creatively, curiously, and collectively make for ourselves. This is the only way to honour the conversation we started. This post is in memory of Simon J. James, who was brilliant and is missed. All readers should have Open Access to our chapter; please get in touch with me ([email protected]) if you have any difficulty locating our words.

Image, paperbag academic, by Mariann Hardey, 2025

The Red Pen as a Sledgehammer
There is a moment every academic knows. It is the pause before opening the email with "Decision on your manuscript" in the subject line. It is a moment of vulnerability, a baring of intellectual self to the anonymous judgment of the field. We steel ourselves for critique. We hope for engagement. We accept that rejection is part of the process. What we do not, and should not, accept is the demolition. I received a desk rejection recently. It came after a long delay, flagged with an apology from a new editorial leadership. The rejection itself was not the problem; we are all used to rejection. The content of that rejection, however, was a masterclass in everything that is broken in academic culture. It was not peer review. It was a takedown. But not in the manner of a catchy K-Pop tune.

This Is Not About Rejection

Before we go any further, I want to be perfectly clear. This post is not an angry rant because the research my co-authors and I submitted was rejected. Rejection is a fundamental, and often productive, part of the academic ecosystem. Good research is forged in the fires of rigorous, critical, and even harsh peer review. We get "no" far more than we get "yes," and that is the price of admission. I had a fabulous rejection the other week (more on this later)… This post is about the weaponisation of feedback. It is about the specific, toxic culture of intellectual grandstanding that hides behind the veneer of "maintaining standards." The heart of the problem is not the rejection; it is the abruptness and absurdity of the message. It is the choice to use power not to build knowledge, but to humiliate and exclude. The problem is an email that is not a critique but a cudgel, a message so disproportionately cruel and dismissive that it ceases to be a professional assessment and becomes a personal attack. This is not about the outcome. It is about the method, and what that method reveals about the wielder.

A Performance of Power

I’m going to let you look at the feedback's core message.
It was a piece of performative grandstanding, a judgment delivered from on high. The author of this feedback refused to engage with a piece of research; instead, they asserted their own superiority over it. The entire text was laced with elite posturing, designed to signal that my co-authors and I were not part of the 'sophisticated' club. Our analysis was dismissed as simplistic, a mere summary lacking any theoretical depth. A core part of our methodology, a well-respected method for analysing social media content, was declared entirely irrelevant to the questions we posed. Our interpretation was labelled as nonexistent. The letter concluded with the stunningly arrogant assessment that the entire manuscript was an incoherent shambles. It was, in essence, a Prince Ronald moment. In the children's story, "The Paper Bag Princess," Princess Elizabeth dons a paper bag to outsmart a dragon and save her fiancé, a prince named Ronald. But when she rescues him, the prince doesn't thank her. He looks at her soot-covered face and her paper bag and says, "Elizabeth, you are a mess... Come back when you are dressed like a real princess." This is the very essence of academic elitism. This is the braying of an individual, likely a senior academic secure in their own elite standing, who believes their position grants them the right not just to disagree, but to demean.

The Collateral Damage of Elitism

This is where the personal and the systemic collide. My primary reaction was not just frustration for myself, but a cold fury on behalf of my co-authors. This was to be one of their early publications. Imagine, as a junior scholar, stepping into this arena for the first time, only to be met with this wall of contempt. This is how academia bleeds talent. This is how gendered exclusion operates. It’s not always a slammed door. Sometimes it’s an email, dripping with disdain, that tells you your work, your thoughts, your very presence, are a complete failure.
It is an act of intellectual violence, a symptom of a much larger disease: a culture that romanticises burnout and offers no structural support. This feedback is the voice of that toxic system, an individual who chooses to use their power to inflict a wound.

The Practice of True Parity

This kind of gatekeeping is precisely why the way we collaborate matters so much. Working with a mix of people at different career stages is not about mentorship, at least not in the traditional, hierarchical sense. This idea of the senior academic "bringing up" the junior scholar is itself a form of patronising elitism. It reinforces the very power structures that allow such toxic feedback to exist. The real, radical act is to build collaborations based on parity. It is to create a partnership where all forms of expertise are valued equally. One of my co-authors, at the start of their career, brings a methodological rigour and a fresh perspective that I, as a more established academic, benefit from enormously. My experience navigating the brutal landscapes of peer review is simply another form of expertise, not a superior one. True allyship is a structural commitment. It means deciding from the beginning that we are building a single project from two equally vital toolkits. The role of the senior academic is not to "protect" the junior one, but to use their privilege to absorb the bureaucratic violence, to contextualise the feedback as a systemic failure, and to be the first to say, "That prince is a bum."

The Empathy Failure

This entire episode is a catastrophic failure of empathy. Brené Brown’s research defines empathy not as sympathy, not as feeling for someone, but as feeling with them. It is the vulnerable choice to connect. Sympathy stands at the top of the hole, shouting down, "It's messy down there." Empathy climbs down into the hole to say, "I know what it’s like down here, and you are not alone." This editor, like Prince Ronald, is armoured in his own ego.
He is the person standing at the top of the hole, declaring that the hole itself is unsophisticated. This feedback is a performance of anti-empathy. It is a strategic choice to use judgment as armour, because to engage empathetically would require him to be vulnerable, to connect with the act of intellectual creation, and to be a colleague. Instead, he chose the power of the pedestal, dismissing the work because it is far safer to judge than to connect.

Rigour vs. Cruelty in the Age of AI Slop

Now, let me be clear. Empathy is not a participation trophy. It is not the pat on the head for a good try. I say this as an editor myself, one who is currently drowning. The sheer volume of work is crushing, but it's the nature of the new volume that is truly corroding. We are all facing a new, specific, and soul-crushing fatigue from the deluge of AI-generated slop. "Please, please stop with this slop," I repeat as my mantra when I open up my own editor's digital desk. As an editor, I receive a relentless slurry of meaningless, plagiarised, hallucinated text. It’s an endless signal-to-noise problem that wastes our most precious and finite resource: our cognitive load. (Which is precious when you are neurodiverse.) This new fatigue is a specific kind of burnout. It’s the weariness of a lifeguard watching thousands of bots pretend to drown (they're just waving, right?). It makes you calloused. It makes your trigger finger for the 'reject' button itchy. You start to assume bad faith in every submission. But this is precisely the moment where empathy becomes a non-negotiable professional obligation. Our exhaustion with the system does not give us a license to be abusive to the individual. Empathy is the critical tool of discernment that allows us to distinguish between a bad-faith, automated submission and a good-faith, flawed human effort. The AI-generated paper deserves a form rejection.
The human-authored paper, even if it is deeply flawed and requires rejection, deserves a response that respects the labour. Empathy is what allows us to be rigorous without being cruel. It is the choice to critique the work, not the person. It is the difference between saying, "The theoretical contribution is not clear," and "Your work is a big mistake and I am better than you."

The View from the Other Chair

Again, I am also a journal editor. When I read this letter, I do so with a profound sense of professional failure: not on my part, but on the part of this journal’s new leadership. Feedback like this is reason enough for many authors never to go near peer review again. In my own editorial practice, it would never, under any circumstances, leave my desk and go to an author. My job as an editor is to be a custodian of the field. It is to find the value, to guide the author, to protect the integrity of the review process and the human beings who participate in it. We reject papers constantly. But a rejection should be a tool for improvement, not a weapon of humiliation. This editor's failure to distinguish between critique and abuse is a stain on the journal. Their final, hollow wish that we would not be deterred is perhaps the most insulting part, a feigned elitist politeness after an act of deliberate cruelty.

Rejection as Success!

Now, let me contrast this with a rejection I received from another journal for a different paper. This one was also a desk reject, but it was the polar opposite in its effect. The editor began by validating the work, calling the topic "highly timely" and acknowledging the "rich longitudinal, multi-method qualitative design". The "no" was just as firm, but it was not a demolition. Instead, what followed was a precise, structured, and generous roadmap for improvement.
The feedback was a model of clarity, pointing to specific, actionable issues: an "uneven" integration of theory and data, a lack of transparency in how the different methods were "combined analytically", and a "conflation" of descriptive observations with conceptual claims. This, right here, is what good editorial practice looks like. This is not a "mess"; this is a checklist. For any writer, this is a gift. But for neurodiverse writers, who often struggle with the unspoken rules and subtext of academia, this kind of explicit, logical, and depersonalised critique is an act of essential inclusion. It removes the emotional guesswork and replaces it with a clear-cut task. I didn't feel humiliated; I felt seen, respected, and, most importantly, I knew exactly what to do next. Again, I know what some readers might still be thinking: "This is just an academic pissed off over a rejection." It is a convenient way to dismiss this entire reflection as sour grapes. But that would be a fundamental misreading of the problem, and it would miss the entire point. We are all built to handle rejection; it is the ink we swim in. This post was never about the "no." It is about the how. It is about the profound, unprofessional, and systemic failure that occurs when an editor, an individual in a position of immense trust and power, chooses to issue not a critique, but a personal demolition. This is not about my wounded pride; I have boxing strategies for this part. It is about an abusive culture that masquerades as rigour, a system that protects the egos of its Prince Ronalds while it burns the next generation of Elizabeths. This is not a complaint. It is a diagnosis.

Trojan Horse image by Maz Hardey

A few days ago, my brilliant friend and education practitioner sent me a link to a Google blog post on AI and learning. On the surface, it’s the usual optimistic fare: AI as a tool for personalised learning, for bridging gaps, for efficiency.
And for a moment, a fleeting, optimistic moment, I saw the shimmering potential. Then, the cold, hard slap of reality. Not the reality of AI's limitations, but the reality of its deployment, its framing, and the deeper, insidious currents it often serves. I am a professor. I am autistic. I am dyslexic. And like many others, my mind is not a neat collection of separate cognitive functions that conveniently slot into diagnostic categories. It is a messy, vibrant, sometimes terrifying convergence. To speak of "my dyslexia" or "my autism" as distinct entities is like trying to describe the flavour of a tom yum soup by isolating the salt. The essence is in the blend, the unpredictable, sometimes overwhelming symphony of sensations. And often, that symphony culminates in a profound, exhausting mush. This is the ground upon which the grand narratives of inclusive technology are so often built. These are narratives that, I increasingly suspect, function less as bridges and more as Trojan Horses.

The Siren Song of the AI Education Silver Bullet

The rhetoric around AI in education is seductive. It promises to "level the playing field," to "personalise learning," to "empower neurodivergent students." For a moment, it sounds like salvation. For the dyslexic, AI will summarise dense texts; for the autistic, it will organise schedules or draft emails. And yes, in isolated moments, it can do precisely that. I can attest to the small victories. The AI summariser that can cut through a thicket of academic prose, saving days of concentrated cognitive effort. Or maybe, academics should write with clarity and avoid dense and inaccessible flourishes in their work… The executive function assistant that helps me wrangle a chaotic inbox. These are not trivial gains. They are moments of respite in a landscape that often feels like an uphill battle. But here’s the rub: these isolated victories are often presented as evidence of a systemic solution.
And this is where the Trojan Horse comes in. The promise of inclusion via technology is hoisted over the walls of traditional pedagogy, not as a radical reimagining of the city itself, but as a new, more efficient weapon in an old war.

The Hidden Costs: Cognitive Exhaustion and the Illusion of Choice

Mark Rowlands often writes about the animal mind, the embodied cognition, the way our being in the world shapes our understanding. Our neurodivergent minds are profoundly embodied. Our energy is not an infinite resource; it's a carefully managed, precious commodity. And often, it’s already depleted. The Google blog, like so many others, extols the virtues of these new tools. But who speaks of the cognitive overhead? Who calculates the hidden tax levied on a neurodivergent brain simply to learn a new tool, to integrate it into a workflow, to debug its inevitable failures? Here’s an insight into how my mind works. I cannot simply isolate the task itself and ask an AI to ‘run it’. I need scaffolding around the task. For neurotypical individuals, adopting a new app might be fun, and enhance their efficiency or productivity (regardless of how toxic this mindset is…). For a mind that already expends disproportionate energy on executive function, sensory filtering, and processing complex information, another solution can feel less like an aid and more like another brick dropping on your head. We are told, "Just learn to prompt better!" "Explore its features!" "Maximise its potential!" "Use it 'critically'" (whatever that means). These exhortations are not helpful; they are an additional layer of homework. It's a constant, low-level hum of anxiety: Am I using it correctly? Is it actually helping or just adding another step? Is this "aid" actually a subtle form of digital gatekeeping, where only those with the energy to master it truly benefit? Sometimes, the promise of support through technology simply shifts the burden.
Instead of changing the inaccessible structure, we are handed a more complex hammer and told to adapt the world ourselves. And I want to be clear: AI was never designed with neurodiversity in mind. This is a significant challenge for anyone who encounters AI, especially if you are told to simply ‘play’ with the technology. That’s a very scary place to be.

The Real Battle: Not Tools, But Systems

The truly critical edge here is that the focus on technological fixes often sidesteps the more fundamental, uncomfortable truths about our educational systems. Why do we need AI to summarise dense papers? Because academic writing is often needlessly convoluted, exclusive, and antithetical to effective knowledge transfer. Why do we need AI for executive function? Because curricula are often rigid, assessments inflexible, and institutional structures demand a standardised mode of engagement that disregards the vast spectrum of human cognition. Instead of demanding that professors teach differently, that universities reform their assessment methods, or that academic culture embraces diverse forms of expression, we are offered a technological bypass. The argument morphs: "Oh, it's not the system that's flawed, it's just that some brains need extra tools to fit into it." Neurodiversity, in this context, becomes a convenient vehicle, a Trojan Horse, for the uncritical adoption of technology. It grants moral legitimacy to the tech giants, allowing them to frame their products as benevolent instruments of inclusion, rather than as profitable enterprises that may, in fact, exacerbate existing inequalities. The "neurodivergent user" is championed, not because the system fundamentally changes to accommodate them, but because their challenges provide a compelling justification for deeper technological integration. And in this process, the very concept of "neurodiversity" is subtly reshaped.
It moves from being an argument for systemic change and varied human experience to a consumer category for technological solutions. "You're neurodivergent? Here's your app! Here's your AI co-pilot!" The inherent value of diverse ways of thinking is lost in the scramble to digitally "fix" difference. (Screams!)

Reclaiming the Narrative

The future of education, for minds like mine, isn't about more tools to navigate a hostile education and professional landscape. It’s about cultivating a landscape that is less hostile to begin with. It's about assessments that celebrate varied forms of intelligence, not just rapid-fire recall or perfectly formatted essays. It's about curriculum design that anticipates a spectrum of processing styles. It's about institutional empathy that understands the finite nature of cognitive energy. Let the AI summarise. Let it organise. But let us never mistake these tactical aids for strategic victories. Let us be vigilant against the insidious notion that our complex, beautiful, sometimes chaotic brains are simply problems awaiting a tech solution. And we need to agree on which AI to use and why. The true conversation for AI in education shouldn't be about "is this cheating?" or even just "who is this including?" It needs to be: Who is this demanding more from, who is it truly serving, and are we using the genuine need for neuro-inclusion as a convenient smokescreen for a deeper, more problematic technological agenda? Because sometimes, true inclusion isn't about adding more, but about stripping away the unnecessary, the rigid, and the burdensome, allowing all minds the space to simply be and to thrive. Our minds are not a market for your solutions; they are a reason to change your systems.
The curious learner. I find myself thinking about her a lot.
I even drew her, in a simple sketch, to try and make sense of the unease I was feeling. I call her 'The Learner'. She isn’t a difficult student. She’s not the one in the back of the lecture hall, disengaged, scrolling through her phone. She’s the one who is diligent. She’s the one who is curious. She’s the one who, after a session, stays behind to ask a question that lights up her whole face. A question that, in a healthier world, would be the entire point of education. A simple, wonderful question: “Could I?…” “Could I,” she might ask, “try to use this… this new AI thing… to help me brainstorm?” “Could I,” she’d continue, a little quieter, “see if it can help me structure my argument? Not write it! Just… help me play with the ideas?” She would like to experiment. She would like to play. She is standing at the edge of the most significant technological shift since the internet, a tool that will fundamentally reshape her world and her career. And her first, pure, academic instinct is to poke it, to test it, to see how it works, and to understand how she can think with it. And what do we do when she asks this question? We shame her. We don’t do it intentionally. We don't do it because we are cruel. We do it because we are, as an establishment, terrified. And so, when this curious learner holds up her spark of an idea, we douse it with the cold water of our own institutional panic. From all sides, the voices come. The ones I drew in the speech bubbles, floating over her head, pressing down. “Using AI shows you are lazy,” whispers one voice. This is the voice of moral panic. It equates a new tool of augmentation with an old tool of shirking. We are shaming her for her curiosity, labelling it as a moral failure, a lack of character. “You must show evidence of critical thinking,” insists another. This is the voice of deep irony. We say this while simultaneously discouraging her from critically engaging with the most important new tool of our time. 
We are, in effect, telling her that the only way to show critical thinking is to pretend this technology doesn't exist. “The uni has an AI policy,” says a third, definitive voice. This is the wall of bureaucracy. A policy almost certainly drafted from a place of fear, not of exploration. A document designed to prevent rather than to guide. It is a shield for the institution, not a map for the learner. “You already have teaching support.” This one is perhaps the most heartbreaking. This is the voice of dismissal. It fundamentally misunderstands what she is asking. She is not asking for help because she is struggling; she is asking for permission to be curious. We are telling her that the established, "correct" pathways are the only ones she is allowed to walk. So, what happens to The Learner? She gets shamed. Over and over again. And finally, she gets stuck. Her curiosity, once a spark, is now a liability. She learns the real lesson we’re teaching her: "Don't ask. Don't experiment. Don't play." She learns that the goal of education is not to explore the frontier, but to produce a piece of work that can be "evidenced" in a way that makes the institution feel safe. She learns to perform her "critical thinking" in a neat little box, far away from the messy, complex, and fascinating tools that she knows will define her future. She becomes stuck. And we, the educators, are the ones who stuck her there. This, I believe, is a profound failure. We are in the middle of a revolution, and we are spending all our energy trying to build higher walls, instead of teaching our students how to be architects. What if we changed our response? What if, when she asked "Could I?...", we leaned in and said, "I don't know. Let's find out together." What if we built sandpits, not cages? What if we designed modules specifically around "playing" with these tools?
What if we asked, "Show me what you made with AI, and then write me a reflection on what it got wrong, what it got right, and what it taught you about your own thinking process." What if we stopped writing policies based on a panicked desire to "catch" cheaters, and instead started developing pedagogies based on a genuine desire to cultivate co-thinkers? Because The Learner is still there. She's still curious. But she's stopped asking. And that should frighten us far more than any AI ever could. And you might like my co-authored book on Generative AI and Education.

Image copyright Mariann Hardey, 2025

On November 11th, I gave the opening keynote at the Deepfake and Society Symposium at the University of Otago. My colleague, Dr. Wasim Ahmed, and I were invited to set the stage for a day of critical humanities research. My talk was designed as a "zine-note" to explore the human, cultural, and political stakes of our new reality. Here is the script from my presentation.

MY ROOFER, THE DISINFO-ARCHITECT (A TRUE STORY)
Before we talk about AI, algorithms, and global networks, I want to tell you a very analogue story about a roofer. On January 1st this year, I had a serious leak from my roof, which I had repaired. A few months later, a man knocked on my door. He didn't try to sell me a new roof; that would have been too obvious. He did something much smarter. He introduced himself as an 'expert tradesman'. He pointed up at my chimney and said, with a deeply concerned frown, "I've noticed... you've got a missed bit". He then spun this incredibly detailed story. This tiny, specific flaw, he explained, was going to let water run down between my house and my neighbour's, leading to 'significant, costly, hidden damage'. But, he had his tools. He could solve my problem, right there and then, for £450. Now... I didn't let him 'repair' my roof. What stayed with me was the algorithm. Not a digital one, but a human one. A simple, three-step script for hacking trust. He sold me a narrative of 'urgent, hidden danger'. He sold me fear. And most importantly, he sold me 'privileged access to a truth that I couldn't see for myself'. He'd identified his target: a woman, living alone, whom he perhaps assumed was 'easy to manipulate'. My roofer was a disinfo-architect in analogue. He proved that a compelling fiction is more powerful than a boring truth. This analogue con is the exact logic of modern disinformation. It’s not the bald-faced lie. It always starts with the 'missed bit'. It's the 'cherry-picked statistic'. It's the '10-second video clip cut from a 2-hour speech'. It's the 'leaked' email. It's a tiny, specific 'flaw' presented as the key to a much larger, hidden danger. It is a 'performance of authenticity'. We, especially as researchers, have been trained to believe that 'truth will out'. That facts will win. But the roofer proves that's not true. The antidote to a bad story isn't a fact. It's a better story.

WHAT IS DISINFORMATION?
(Inspired by a Roofer and Warhammer) This is the core of our problem. Disinformation isn't just a lie. It's the 'institutionalisation of deception'. It is the roofer's tactic, scaled up by technology: a social script, a feigned concern, the performance of an expert. A deliberate, engineered assault on our shared reality. The 'missed bit' is now weaponised to turn a safe home, or a safe society, into a source of fear. The result? 'Truth itself becomes a malleable commodity'. My new roof, successfully reframed as flawed. A fair election, reframed as stolen. And this creates the battlefield we all now live in: the 'Disinfopocalypse'. A present where the very concept of objective truth is under relentless siege. An environment where it is 'difficult, if not impossible' to tell fact from fiction. Where we are 'drowning in data' (and warnings), but trust... trust becomes our most precious, and most endangered, resource. THE "GOLDEN AGE" OF FAKES It wasn't always this way. When I started my academic career in the late 1990s and early 2000s, the internet was in its 'Golden Age'. The most dangerous 'fake' I investigated was on an internet dating profile. My early work was on digital behaviour, etiquette, and identity. The "fakes" were catfish. The "lies" were 10-year-old photos. The stakes were personal: a bad date, heartbreak. My research question was: 'How do people perform a "true" self online?'. 
As researchers, we were observers—digital anthropologists studying a new tribe with a certain academic distance. The "truth" was still a knowable thing we could uncover. Now the stakes have shifted. They have moved from personal deception to societal manipulation. The lie is no longer a 10-year-old photo; it's a 'deepfake' video... a coordinated, AI-driven campaign. This was the end of our academic innocence. We went from being observers to being participants in what my colleague Wasim Ahmed and I call the "Disinfopocalypse": a state where we are "drowning in data" and have zero "clarity" on information source, legacy, or manipulation. HUMAN MACHINERY OF LIES My colleague, Dr. Wasim Ahmed, who you'll hear from next, will show you the 'battle maps'—the SNA graphs of how lies spread. I'm going to talk about the people on that map. This is the Human Machinery of Lies. It's a simple, two-part recipe. The Seeders: The Architects of the Story. These are the modern "snake oil salespeople". They craft the initial narrative. But their motives are complex: Profit: the "lucrative business" of falsehoods. Clicks equal ad revenue. Conviction: the "true believer" who genuinely thinks they've found a "universal truth" that the mainstream is hiding. They are 'authentic in their inauthenticity'. The Amplifiers: The Unwitting (and Witting) Chorus. This is us. The people on Wasim's network maps. We don't amplify because we're malicious. We amplify for the most human reason of all: 'Social Capital'. To belong. To signal our identity. Humans are a storytelling animal. We occupy this planet by creating and sharing fictions. We call them gods, nations, and money. Disinformation works because it leverages our deepest evolutionary drive: the desire to understand and belong. THE AI ENGINE AND THE ARENDTIAN NIGHTMARE So, we have this ancient, human machinery. This brings us to the critical question: what happens when you connect this human machinery to a new, non-human algorithmic engine? 
You get the great accelerant. The algorithm builds the 'echo chambers' that trap us, feeding us more of what enrages us. It's the 'YouTube rabbit hole' on a societal scale. This technology isn't creating a new problem; it's perfecting an old one. It creates a public that changes behaviour based on 'emotional reaction, not reasoned analysis'—because that analysis has been manipulated or is invisible. Deeeep Fakes. Who built this engine? This algorithm—this AI—is the 'great accelerant' of our times. But it wasn't built in a vacuum. It's the product of a 'tech-bro' culture that lionises disruption and scale over nuance and safety. We don't have to guess its values. Long before generative AI, the academic Safiya U. Noble, in her foundational book Algorithms of Oppression, diagnosed the harms of this culture. She showed us how a simple Google search for 'Black girls' returned almost exclusively pornography. The AI engine is built on a coded logic of gendered and racial humiliation. It should come as no surprise that the very term 'deepfake' wasn't coined by a university lab. It was the username of a Reddit poster in 2017, promoting his 'killer app'. A tool specifically designed to 'paste the faces of female celebrities onto pornographic videos'. An industrial-scale production of non-consensual, gendered humiliation. My point is, this engine 'isn't neutral'. Its goal is not truth. Its goal is engagement. The algorithm doesn't care why you're angry. It just knows you stayed. OUR FIELD GUIDE FOR THE NIGHTMARE A tech dystopia is the nightmare that we have been warned about for over a century. Increasingly, I turn to popular fiction to frame these cultural narratives, treating these texts as the diagnosticians of our current state. The Diagnostician: Ray Bradbury My first diagnostician is Ray Bradbury. We all remember Fahrenheit 451 for the fire. 
We remember the woman who 'immolates herself and her home', a terrifying, final act to protect her 'thoughts, her very life', and her 'fundamental right to share knowledge'. She embodies the human drive to protect truth from an overt, raging fire. But Bradbury's deeper warning—the one for our time—was the 'subtler, perhaps even more terrifying, form of censorship'. What if the books are never burned? What if they are 'simply rewritten'? What if their facts are 'expertly distorted, until public understanding itself becomes malleable'? This is the world AI perfects. The censorship we face is that 'quiet, constant hum within our minds' of the algorithm, endlessly rewriting reality. The Method: George Orwell If Bradbury diagnosed the environment, George Orwell diagnosed the method. In 1984, the Ministry of Truth mandates that 'two plus two equals five'. How? Because the authority and the echo chamber reinforce it. The 'AI-driven echo chamber is this method, perfected'. It algorithmically reinforces the lie until it becomes the only fact you see. The Political Goal: Hannah Arendt However, it's Hannah Arendt who provides the most terrifying and accurate diagnosis of the political goal. This is the sharpest point I can make today: The real horror of the deepfake is not to make you believe a lie. It is to make you believe nothing. It is the 'systematic destruction of a fact-based reality'. The goal is to create a populace so exhausted, so cynical, so disoriented that it 'believes everything and nothing'. A populace that has lost its shared, fact-based world also loses the ability to govern itself. It can 'only react, not reason'. The endgame is to erode our 'shared epistemology' so that democratic argument itself becomes impossible. PROVOCATIONS FOR TODAY'S SPEAKERS So, this is the lens for today. 
As the opening keynote speaker, I want to offer a provocation for the incredible ideas I see on the programme: When you hear talks on 'digital harm' or 'copyrighting the self', I want you to ask: 'How can we "copyright" a self that is infinitely reproducible?' How do we define 'harm' when the goal is to destroy the concept of truth itself? When you hear talks on 'critical literacy', ask: 'How do we teach students to critique a text that is designed to bypass the brain and hit the gut?' When you hear talks on 'bioethics' and the 'colonisation of reality', this is the heart of it. A deepfake is the ultimate 'colonisation of the self'. What 'relational ethics' can we possibly have with a 'synthetic self'? I've shown you the 'why'—the humanist elements in crisis. My colleague, Dr. Wasim Ahmed, is next. He will show you the 'how' and the 'where.' He will show you the maps of this new reality. BE THE HUMAN FIREWALL The solution to this humanist crisis will not be an algorithm. It is us. Our academic process of 'verification, critique, and rigorous doubt' is the antidote. My final provocation is this: our job is no longer to study this; it is to act on it. Our job is to be the 'Human Firewall': in our teaching, in our research, and in our public life. Thank you. TopCat is an elderly, 18-year-old female moggy who is having the time of her life in Scotland. TopCat has been self-tracking for 18 months, with her owner curious about why she had gained weight (a lady of a certain age?), as well as where she went all day…
TopCat’s tufty area is short and fluffy, her floof jutting v-shaped under her more flexible spine. Her cat frown rises outward from twin creases above a snub nose (she’s a Himalayan Persian) and her pale strawberry blonde fur pushes down from her high flat temple to pick up the V-motif once more. She has the appearance of a blonde cloud. And she’s a regular killer of mice. I’ve been interested in self-tracking since my late father bought a pedometer when I was eight years old and we tracked the steps from our Guildford council house to the local bakery for fresh doughnuts (3,477 steps). Counting steps has always represented a personal resonance with my surroundings, an interest in health, and a celebration of technological innovation. This is why I began writing my book Household Self-Tracking During a Global Health Crisis in 2020. The goal was to consider how the commercialisation of health promotion through self-tracking technologies is symptomatic of a larger social and cultural health change marked by increased individual investment in, and image construction of, fit and healthy living. What I hadn’t anticipated was the same level of investment and interest in self-tracking with (not just for) pets. Viewing the GPS tracking data from the Fitbit attached to her collar revealed that TopCat had three additional ‘homes’ and four ‘owners’ who sought to cater to her every blonde furry whim. Perhaps tracking your dog’s daily steps or your cat’s sleep patterns is ridiculous, but I’ve discovered that understanding the more ridiculous forms of household tracking provides better insight into health practices as a way of living in a world that is both in crisis and promoting breakthrough after breakthrough in health technologies. 
During the course of writing the book, I became aware of how household health data practices extended care routines and opened intimacies in such a way that members (especially pets) could motivate and sustain healthy changes. And, while I passed up the opportunity to conduct direct interviews with pets (for the next book), my research discovered that tracking with pets provided care and affective forces that were important in household relationships. Such absurdity may allow us to investigate new health connections made not only between people, but also between people and their digital devices, pets, and, in TopCat’s case, multiple homes. So, there is an ever-attentiveness to health here that describes the caring intimacies and responsibilities deployed in health tracking in households with people and animals. Because tracking is viewed as an analytical category within the home rather than something exclusive to humans, health-related identities mean different things to different generations (human and pet), and focusing on interconnected health narratives allows us to unpack contextualised meanings. Pets, like technological confidence, class, generational, or gender relations, can be used in sociological health studies to understand household dynamics and the implications for other types of tracking and a sense of social responsibility. My observation of self-tracking extending to furry members of households can be summarised as follows: Tracking practices increased support and contributed to the flourishing of happiness — even for animals. Pet health data may be considered novel or less important than people’s health data, but it reveals a strong positive association with tracking, as well as an interest in and preservation of intimate data. 
The novelty of the tracking activities (such as TopCat’s) is a strong motivator for the household to begin pet health tracking; however, this belies the serious point that maintaining such tracking with pets contributes to clear health outcomes and preventative actions, reinforcing the benefits and continuation of such activities. In response to my general question ‘Do you like tracking?’ there was a strong emphasis on pet welfare. These aspects of my study revealed that households were just as interested in tracking with pets as they were in monitoring general health interests, symbolising attachment to informal digital health practices and the extension of responsive and caring approaches in the enactment of health monitoring, whether for people or animals. In reading this book, I hope that you have a sense of the different aspects of health data and the combination of tracking behaviour. There is so much to untangle in household tracking, from the commercial organisations seeking to profit from health data, to the policymakers closely reviewing and analysing social uses of health data, to the education required to fully understand self-tracking data legacy in our lives. Writing the book, in a state of global uncertainty around health when there were long periods for which we were confined to being at home, was terrifying, empowering, overwhelming, informative, and confusing all at the same time. Talking about household tracking with others immediately raised concerns about when not to track, especially when governments and global health policies are involved and trying to persuade us to adopt health tracking, if not impose it on us. Despite growing policy initiatives, health tracking is a personal choice. There are very active communities focused on user data and privacy rights, patient record access, and open data that can help raise awareness of the different ways people can understand health data. 
In writing this book, the tension was clear between health data being used as a commercial asset for profit by some organisations, the role of public health providers such as the NHS in the United Kingdom, investment by government agencies, and the level of control of users themselves. Households, or ‘bubbles’ as they were termed during the pandemic, provided an appealing and comforting narrative in the context of growing health uncertainties such as those associated with vaccination risk, the need to shield and protect extremely clinically vulnerable groups, and increased apprehension about policy-led decisions. The same bubble helped me feel a sense of protection: that I could provide for my family myself. What is striking, having reflected back over the book and the pets featured in the last chapter, is how each of the households believed and invested so passionately in personal health responsibility. Growing fears about the pandemic translated into increased household tracking practices across generations, people, and even animals. I find myself thinking about how positive associations with tracking may obscure the recognition of emerging health anxieties and intolerance toward people who behaved differently from their household and from whom other household members sought to differentiate themselves. The health tracking narratives reveal that households serve as a focal point of meaning for perceptions of responsibility and expected behaviour. This may seem obvious, but research into household health dynamics has led to the expansion of reciprocal care, with adjustments in how commitments to the needs of dependent members were met within the home. Another manifestation of what is viewed as ‘risky’ health behaviour is being modified within households, while also connecting outwards to new social movements and various forms of single-issue and identity politics (e.g. 
‘fitspo’, food sustainability, anti-racism and gender politics), with health tracking helping to create new identities and challenge normative health images. My sense of self is being remade because of my health tracking. I believe that a future of household tracking that allows access to and understanding of personal data is now an essential part of people’s social identities and the prevention of life-threatening diseases. For my part, being immersed in a home environment of household tracking has begun to untangle some of the complexity surrounding the treatment of those who are temporarily or permanently dependent on others for care. Care is a crucial domain that reveals the tensions between ill health and dominant societal values and roles starkly — especially for women. The reader will quickly realise I am not happy with the increasing tendency to encourage profit from commercial health products. And, readers will make their own judgements here. A version of this article was published on Medium. I've been asked to talk about how to "enhance a global reputation" for this professional skills workshop.
Immediately, imposter syndrome shouted in my ears: why are they asking you? So here is some advice for myself and for others building a global reputation around their research, the projects they are passionate about, or anything else in which they wish to gain prominence, while, at the same time, imposter syndrome shouts loudly (and often convincingly) at them. My work focuses on identities in tech communities. For example, I've written extensively about the mislabelling of "women in tech". The BBC has featured my research, including Laurie Taylor's BBC Radio 4 programme Thinking Allowed, and articles in The Guardian, The Independent, and many other international media publications. I try to embody the notion that self-promotion is just as much the promotion of scholarly work, including the communities I research, as an opportunity to enhance my own professional reputation. Unfortunately, this gem about self-promotion and other possible pearls of wisdom are lost to subsequent self-doubt. So in acknowledging what channels to use for optimum reputation enhancement, we need first to recognise our capacity to feel that we are worthy of sharing our ideas. In terms of self-promotion (especially social media), I have buckled under the nasty criticism of anonymous trolls who throw rebukes laced in misogyny and personal attacks. Self-promotion is being prepared to be vulnerable or open to public attack, in very different ways from defending academic knowledge as we are used to at conferences. Different perspectives and disagreements about research are exhilarating. Cyberstalking is terrifying. In the past, I have let systems and processes bury me into silence, temporarily at least. One example is asking for support from journalists and marketing teams who had published my research when a social media pile-on directed at me critiqued 'women in tech' as 'bitches' or 'catty'. There was very little support. 
I found myself, like the communities I research, once again silenced and singled out for attack. In the process of recovering my voice, I have had to face the reality that speaking out (or not) is just as much about me as it is about the communities I research and belong to. Being silenced as a scholar feels unjust. One way I have found to cope is to remind myself that silence is a strong theme in my research. In thinking about overcoming being silenced, turning to multiple channels to self-promote and engage with different groups has allowed me to connect with others and gain interest in my work. About Impostor Syndrome Self-doubt is not unique to scholars. Nevertheless, for working-class scholars, disabled scholars, women scholars, immigrant and international scholars, our bouts with impostor syndrome — feeling as though we do not belong or are not as good as our colleagues — remind me about the importance of finding networks of support. Some of the best networks have been internal to my institution. For example, I've found solace in the MAMs (Mothers and Mothers-to-be) University of Durham network and other groups that operate around the academy. I am also a member of different supercomputing and women-in-tech communities who help support and promote research and women in leadership positions. These communities are deliberately closely allied with my research. In terms of building content and targeting channels, be aware that this is a personal decision as much as a professional one. Social media content occupies your personal space. You create and respond to this content today in your home, alongside your loved ones. I encourage my fellow scholars to make this realisation a crucial part of their professional consciousness and think about how you can protect yourself from possible unwelcome intrusions or comments about your work, professional image, and even personal life. 
In building a public-facing professional brand, I have worked with journalists across the board and spent much personal time creating unique content on my website and social media. One comforting thought is that journalists do not care about imposter syndrome. Effective treatments for impostor syndrome, then, should entail raising one's consciousness and, ideally, engaging with and asking about institutional norms and policies. One method could be as simple as asking about the university social media policy and strategies to protect your public profile. As an advocate and researcher of women tech communities, of course, I follow Sheryl Sandberg, Facebook's COO. Sandberg speaks on the "lean in" philosophy. While I do not entirely agree with her conceit, I know for sure that my newfound consciousness, including linking the promotion of my professional work with the enhancement of the communities I belong to, has become a way to build a reputation. Self-Promotion And Community-Promotion Beyond recognising self-doubt, I often force myself to accept invitations (if my schedule allows) as a powerful means to overcome my initial self-doubt. For example, I have just been featured as part of the SC21 (supercomputing) conference in a pre-recorded interview. The sole reason I accepted the invitation was that I forced myself to do it, ignoring the internal voice that pointed out that there are more successful and visible experts. Why would I push myself in the face of intense self-doubt? I push myself because the impostor syndrome I suffer from is the same pathology that limits and casts doubt in the minds of other scholars. I push myself because every time I decline an invitation, there is a good chance that another person like me will not be invited or will decline the invitation in my place. This is especially true for some of the large commercial tech events I attend, which lack diverse speakers or fail to make events fully accessible. 
I push myself because this job will never be easy; academia is a demanding profession by design. Concluding Thoughts If you are already feeling self-doubt and the twinge of guilt for turning requests down, with the stress of being overburdened with new demands, the knowledge that your actions directly affect your communities adds more pressure. Notwithstanding, think of the positive flip side — promoting your scholarship and perspective helps promote your communities. Having this thought in the back of your mind will help alleviate self-doubt and suggests which channels to target for self-promotion. This is the remedy that is working for me. Top ten tips
What comes next? I'd like to see an ally skills workshop focused on advocating for one another and moving beyond the concept of 'virtuous rescue.' I don't require rescue. I require empowerment. We require empowerment. Despite continued efforts to pretend otherwise, the new reality for many is work-from-home.
During the period of January to December 2019, 5.1% of the UK population mainly worked from home, compared with 4.3% in 2015, as reported by the Office for National Statistics (ONS). The sector with the highest proportion of homeworkers was information and communication, with 14% mainly working from home in 2019 and more than half of its workers having ever worked from home. In terms of homeworking patterns reported by the ONS, those who occupied the most senior roles, such as managers, directors, and senior officials, were most likely to work from home (10%), followed by those in associate professional and technical occupations (8%) and administrative duties (6%). Before COVID-19, we might have speculated that women would form the majority of the home workforce; however, according to the ONS, it was men (11%) who were more than twice as likely to work from home compared with women (5%). With home working now the ‘norm’ for many professionals, does this mean a radical shake-up concerning industry initiatives to support a more accessible and inclusive workforce? Or are separate professional conditions continuing to prevail in the home? First reported by Forbes, Google, Facebook, Amazon, Apple, Slack, Microsoft and the newly familiar Zoom have implemented new work regimes to allow employees to continue to work from home for the remainder of 2020. For those in the tech sector, remote working methods are more familiar than in other industries less confident or invested in software to support digital interactions. Prior to COVID-19, remote working practices within the tech sector could have been seen as innovations for other organisations and industries to adopt. However, today, we risk conflating ‘remote working’ with the present conditions of being forced to work from home, which is entirely different concerning support, accessibility and skills. 
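The comparisons in the ONS paragraph above can be sanity-checked with a couple of lines of arithmetic (a minimal sketch in Python; the percentages are the ones quoted in the text, not a fresh pull from the ONS data):

```python
# ONS figures quoted above: 5.1% of the UK population mainly worked
# from home in 2019, compared with 4.3% in 2015.
rise = 5.1 - 4.3
print(f"Rise in mainly-home workers, 2015 to 2019: {rise:.1f} percentage points")

# Before COVID-19, 11% of men vs 5% of women mainly worked from home.
ratio = 11 / 5
print(f"Men vs women: {ratio:.1f}x")  # 2.2x, i.e. more than twice as likely
```

The second figure is what justifies the phrase "more than twice as likely": 11/5 comes to 2.2.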
My long-term research investigates the challenges of remote working within the tech sector and opportunities for companies to implement change for a more inclusive workforce. Part of what the tech sector, and others, are dealing with is the question of workplace support. During the most recent interviews with workers, there is anger about the form of support, which is mostly self-directed online learning to identify stress areas or take workers through basic meditation exercises. Workers in the civil service tell me that similar methods of support have been introduced through lockdown. There is a significant tension here between top-down ‘support’ and what is really needed on the ground. Though early days, there are some clearly identifiable themes coming out of the current work-from-home conditions: Proper investment in staff training that is accessible to all workers. One senior manager shared his experience of being the primary carer for his daughter. He was unable to attend online training days when he lacked childcare support at home. Obstacles to attending events are not new where caring roles are involved. One positive spin out of the pandemic is the opening up of previously locked-down events/meetings/conferences. Through digital tech, I’ve been able to attend a virtual parliamentary civic briefing; participate in a conference previously out of my reach due to cost/travel/caring responsibilities; benefit from cybersecurity online training; and vote in union elections. What is frustrating is that it has taken the restriction of workers to force this opening up, when the same level of accessibility could have been championed and supported a long time ago. What I hope is that, post-COVID-19, the same level of access will remain. Agile professionals need support. By necessity we are spending a substantial proportion of our day in front of screens. Mental health ‘check-ins’ and wellbeing tools are predominantly conducted through the screen. 
While technology enables immediate contact, it does not allow periods of rest or disconnection unless the user is able to put these in place. Speaking to an HR director, she shared her technology fatigue. Where the company had invested in a staff wellbeing app, this meant more time in front of a screen sharing personal details about sleep patterns and sense of self-worth. Such investment attends to some of the needs of the workforce – if they are interested in sleep tracking. However, it does very little for supporting new work patterns, roles and fatigue. Households are the new workforce. Where organisations have a contract with an individual concerning their duties and responsibilities, this does not translate easily into households. Inevitably, different burdens of care and ways of working become entrenched in the home. What is clear from recent media reports and speaking to individuals across sectors in the UK are the difficulties in finding routines, especially when there are caring responsibilities for loved ones within the home. A not uncommon experience is feeling overwhelmed by professional tasks and losing hold of ways to sustain relationships in the home. My own experience echoes that of many: caring for my four-year-old daughter, working full-time, contributing to and running a household without the time and space to perform properly in any of these areas. Let alone download an app and record my sleep-tracking. The acknowledgement here is that while individuals are employed by an organisation, it is the household that configures how we can conduct our professional roles at home. Different career enhancement pathways. One of the main challenges now is dealing with the ‘unknown’. One area of growing unease and concern is the new barriers to career progression. This is particularly the case where workers are being asked to prioritise new areas of work, such as the generation of online content, over and above all other tasks. 
And while online training can provide a great deal of information about ‘how things work’, it is very difficult for those tools to positively enhance different ways of working, especially if those duties are not formally recognised within career pathways and promotion criteria. The push to ‘get online’ takes time, new skills and requires the proper recognition of what the end product should look and feel like. [not] Taking time off. Simply put, stating that workers should use their annual leave won’t alone change the conditions of stress, fatigue and fear. In short, while the required ‘leave’ can be recorded on an Excel spreadsheet, this does not reflect a period of rest for the individual. For others, it will not be possible to use their leave allowance. This is not about giving people ‘special treatment’, but acknowledging that the current conditions are difficult and, in acknowledging this, understanding that these are hard times for all. During the crisis, I continue to research the impact of remote working. Yet, this is with a growing unease, as I recognise and share the same challenges as those I interview. However, in deepening this narrative, we can underscore the sharp divide between work-from-home and remote working. To ease the burden of remote work and enable new, innovative ways of working in the future requires a plethora of change, investment and support beyond the household. The remote working revolution, I've now discovered, owes a lot to technology innovations brought about by the sex industry.
So, what has the sex-sector ever done for us? Pornography, adult eBusiness and online erotic content may have an uneasy place in society, but its tech innovation has a long legacy: · Camcorder and VHS video machines were pioneered by the porn sector, with the main players keen to get blue content to the mass market as quickly and cheaply as possible. In the home, the take-up of DVD players was driven by porn consumers because they could skip to and repeat their favourite scenes. · Watching a lot of Netflix? Pay-per-view cable or satellite TV movies entered the market only after porn firms introduced 'premium' services in hotels and on digital networks. Interactive television, now gaining substantial sponsorship on digital sport channels, was developed to allow consumers to get closer to their favourite porn actors. · eBusiness has (always) been driven by sexual content. One of the most successful eBusiness sites, Pornhub, remains in the UK's Top 20 websites and generates annual profits of more than $1 billion. In 2019 there were over 42 billion visits to Pornhub, which means there was an average of 115 million visits per day. In a bid to prepare for academic teaching in the Autumn, I find myself trawling tech magazines for advice about what to invest in as a home studio set-up. My thinking is that if we continue to charge students full fees (apparently 'we' will), then my 'selfies' and audio with my 2014 iPhone are clearly not going to be up to standard. What the above tells us is that time spent researching professional recording equipment would be best spent consulting professionals in the adult sector about their home set-ups. So, curious, I asked influencers in the world of adult erotica (pro photographers and models posting on Instagram) what kinds of resources they used. And they shared some helpful links - none of which is adult content and all links are safe for work. First, perfect lighting set-up by TechSmith. 
Everything from set-up, to glare, to temperature. Next, a link about home camera set-ups for music videos. Now, I won't be singing in my videos, but I want to look professional and need a decent camera to produce online learning content. Speaking to my industry professionals, for everything camera-related, the following blog from B&H Photography is an excellent resource from novice to pro - answering my basic questions: what do I need to actually buy? And what is worth investing in? I've not bought a new camera, but I found my late father's Canon DSLR and, falling down the rabbit hole of tech stuff, I've discovered that decent audio recording now really matters to me. So what is the best lapel microphone? Digital Camera World (an unknown space to me before now, thanks new friends) is a great resource, with a post about the best mic for vlogging (and beyond). The next tip came not via professionals in the online adult industry, but from a sound and music engineer, who highly recommended a decent camera tripod and backdrop. I had not heard of Manfrotto as a brand - and I took a leap and invested in a tripod and Lastolite backdrop. YIKES. The link to the buying guide by Digital Camera World is here. There's a deliberate tease in this post leading from tech innovation into the above links, which [maybe] I would have stumbled across with a bit of research into "cameras", "lighting", "sound", "backdrops"... am I missing anything?... but what was useful in reaching out to professionals in the adult industry was hearing about their confidence in their own home set-ups. Where I've had conversations with academic colleagues about creating online learning content, we are (at best) stumbling through what to do, and very few have any established home set-up for recording. Writing this post, it feels inevitable that investment in a 'home set-up' will become a necessary part of being an academic.
Especially if the public rhetoric sticks that it is faculty who will be responsible for delivering professional online teaching equivalent to the campus experience - minus the time, the training and the technology to do it. In a bid to relieve some of this anxiety, it is a privilege to be able to make a few key purchases to bring my home up to some kind of 'pro' standard, and I am fortunate I can make this investment. I remain, however, uneasy. Yes, I can record from home, but there are few moments when there is the 'quiet' required to produce content. While I am camera-ready, there will be interruptions from my four-year-old daughter, and the general 'noise' at home will form a backdrop to this content. I am also not a professional online teacher. A technology enthusiast, certainly, but not an online pedagogical expert. My fear is that in attempting to alleviate some of the pressure to produce professional-looking content, in making these key purchases, I have lost sight of the pedagogical; misplaced the need for support of both students and staff; and mistaken 'online' for 'innovation'. And that is where the adult industry can't help us. Today, how we shop and socially distance from others is fundamentally different from yesterday, as we are called upon to 'do the right thing' and change everything about our lives and our daily interactions.
New forms of etiquette are rapidly emerging, from the stoic 'good morning' across a 6-ft distance with neighbours (many of whom I've not spoken to before), to explaining that you cannot attend that digital hangout as you are already committed to another digital hangy-out thing. Behaviour change has always been essential to central policy and politics. While not exactly an exemplar for 'nudge', in the UK the PM Boris Johnson is relying on classic psychological nudge theory to encourage citizens to 'do the right thing' - a nudge-COVID-19 (nudgeC19) tactic. Classic Foucauldian analyses, particularly the writings on technologies of the self, enable us to see how social forms of embodiment nudge users to change behaviour - such as calorie counting to lose weight. Typically, nudge theory is closely aligned with neoliberalism, reflecting shifting state-citizen relations and the responsibility of individuals: "Each and every one of us is now obliged to join together." I'm a long-time researcher of behaviour change and its relationship to individual health decision-making (especially around self-tracking and mHealth). It is not difficult to be struck by how Boris's appeals are designed to tap into variants of behavioural theory: the rational system (what we think, how we reflect on things) and the emotion-driven system (automatic and instinctual reaction). The success of nudgeC19 lies in tapping into the automatic system and reframing choices - essentially nudging citizens into the 'right' behaviour change that can then be rationalised. While policy and politics are always concerned with influencing citizen actions, there is an acute emphasis on our individual behaviour change and responsibility to get this right. Policy makers increasingly believe that – in the face of great social complexity and individualised citizenry – the only way to address ‘deadly' challenges such as the global pandemic, climate change or civil unrest is to encourage citizens themselves to change their behaviour.
The socio-political impact of nudgeC19 has already been dramatic, and we are on a radically new path. Unsurprisingly, nudgeC19 has already attracted substantial criticism and political comment. Much of this revolves around caution against nudgeC19 becoming a political vehicle for extending Government activity (something that has already happened in the UK and much of Europe, and will continue to happen in the US), along with the explicit paternalism of nudge. In defence of this approach, properly deployed nudge incentives will improve (and hopefully save) people's lives. Such measures have been set up to enable us to feel in control (as much as we can right now) of the choices we can make and the ways we can contribute to 'solving' a major f*cking global problem. And we can do so while emphasising our freedom to make this choice. I am not a fan of Boris or his politics, but I sincerely believe him when he states that the current measures in place are not actions he wants to take. Yet we should also be aware, despite reassurances otherwise, that nudgeC19 is heavy-weight, top-down politics. While there is a good case to be made that the central role of nudgeC19 is for the greater good and that citizens' 'best interests' are at the heart of such conditions, our restricted movements reveal the disabling of our agency and impulsivity as everyday citizens. In what follows over the next few weeks/months/year/s, we need wider social analysis of behaviour change: a sociological understanding of agency and of a new phase of Government-citizen relations, characterised by extreme uncertainty and dependent on deepening citizen reflexivity. Will the Government retain responsibility for nudging citizens? Will this be expanded to the military and police (likely yes, before the end of the week)? For how long can we sustain the nudged behaviour (fatigue, boredom and frustration are all at play here)?
Are we turning to a more reconstructive agenda, where in the long term Government interference on this scale will be welcomed (for the greater good) and will support the role of the state as a facilitator in daily life? Ultimately, together, we will continue to endure more explicitly political Government interference and restrictive behaviour change that will change our citizenship identity forever. Self-tracking Technologies and the Tourists: Embodiment and Engagement with Surveillance in the City
Welcome to the City of York, where the Council will gather anonymised data from anyone visiting the city centre. Surveillance tracking will be used to find out about visitors - where they come from, how much money they spend, where they go and what they think about York.
Self-tracking technologies are very popular. Consumer culture suggests they enable the transformation of consumer behaviour and of consumers' expectations about knowledge of their own identity. Consumers have, for some time, adorned their bodies with wearable tech and moulded their identities in various ways to have appeal on social media. Yet the growth and variety of data points designed to exploit the malleability of identity metrics have turned social sharing into a hugely profitable commercial industry. The commonality and popularity of surveillance technologies raise many questions about the impact that personal trackable data has on people's identities, rights and capacities for action. I imagine this like fishing in a free sea of people who are so hooked on their tech and data that they fail to notice the sharks nearby. Personal data is important, but it would, I think, be erroneous to restrict our observations to the most obvious or innovative ways in which interactions occur. Our data change, develop and replicate from one platform through to new devices, while still remaining on old or forgotten technologies. The very places that surround us, the institutions we establish relationships with, and the habits we develop all impact upon the appearances, capacities and meanings of our data. Data change sometimes occurs as a result of consciously formulated actions. These are undertaken in situations where we think we have considerable autonomy - such as the privacy settings on a social media account. Yet data change also happens frequently in circumstances in which individuals find themselves with no control. In these and other situations, how data change occurs is directly related to people's cultural knowledge of, dependence on and relationships to the broader social structures in which we live, visit and move around. In York, the broad and general relationship between data change and social action feels almost predatory.
Being a resident of York feels like living in an open Pandora's box. In coming to terms with these data dimensions, it feels as though we are being pulled beyond where personal boundaries were once firmly closed. We are being jolted towards new actions that require us to overlook considerable intrusions into our privacy. Rather than completely condemning surveillance developments, we might see them instead as the creative potential of how we might live in the future - within a broadly defined, flexible technology framework that channels data into repositories for personal use and external environments. The range and severity of personal data intrusion could indicate a new-new-age of technological culture, one which seeks to benefit individuals and different peoples and to contribute meaningfully to the planet. In this context, what data surveillance means to different peoples underscores contemporary attempts to utilise many belief systems as a means of explaining technology's place in society. Global concern about the impact and spread of COVID-19 (here's some live data designed by a 17-year-old) has left organisers with no choice but to pull international events. The new 'normal' is to expect further emergency measures. These will restrict the movement of people - asking us to work from home (where possible). Plans to attend any future international conferences will be cut short. Much of my research is about the kinds of interventions that enable under-represented groups to be better supported in their professional roles.
These include: remote working; making international conferences/events accessible to those with caring roles and disabilities (remote presentations and affiliations; sponsorship for families to travel together; and funding to pay for care support while individuals are away); and embracing novel interactions (everything from using tools like Slack and #hashtag indexing, to experimenting with audio recordings and different methods of file-sharing for individuals with unreliable internet connections). Before COVID-19, practices such as remote working and digital presenting were often regarded as secondary to in-person interactions. This meant requests from disability groups, or anyone with a caring role, to implement changes that allow individuals to 'beam in' were often challenged - see this lovely survey from Forbes about such workforce demands. Such changes are seen as 'too expensive' or 'too difficult' to coordinate and organise. Amid COVID-19, the same barriers throw up common challenges. However, some groups are doing better. The International Communication Association (ICA) conference aims to advance the scholarly study of human communication by encouraging and facilitating excellence in research worldwide. Aha! The same conference is still going ahead, with proper support for virtual presenting and attendance. But presenting via video-conference and Skype is cr*p, right? Yep. So, as you would expect from an international communication association, there are some innovations: presenters will have the option to pre-record talks, or to join live and develop critical conversations in much the same way we currently undertake social interactions using apps like WhatsApp, Messenger, iMessage etc. And this is good. We're forced into thinking outside the box, we maintain sponsorship and commercial levels of support, we get to interact with research communities on a global scale, and we (inadvertently) save the planet.
Importantly, these are all methods that go a long way to support accessibility. Other conferences, such as FutureMed, have allowed participants to attend as a robot! To maintain the momentum and sponsorship around other international events, we have an opportunity, now, to advocate for each other. This means being prepared to make very sudden changes to how we attend and experience professional activities, and to take forward how we work with each other. Outdated criteria for career promotion, such as the 'number of international conferences attended', can (should) be challenged and changed to embrace new methods of finding and connecting to each other. This will allow anyone with a disability or caring role to significantly improve their contribution to events and 'prove' their worth to organisations. Also meaningful is the willingness of people to swap climate-guzzling global travel for greener, more climate-friendly alternatives. Traditional accounts of work tend to concentrate either on overall levels of activity in the workplace - things like international professional impact (how much of a 'hit' globally are you?) - or on particular ways of working, like the long hours sat passively in an office or out in the field. Up to now, there have been very limited resources in support of remote working, or of the 'best' or good practices that workers can implement. Guilt, feeling isolated, disadvantaging one's career, or anxiety about missing out frequently appear as barriers to remote work. There remains very little to support the experience of attending international events remotely - this is difficult to do well for the audience experience or the presenter, or to make suitably commercial for sponsorship. On successful and fun methods of remote working, here are some things that I am doing:
By focusing on 'being there', we have developed a fascinating display in the presencing of our 'work' and in doing work in professional settings. Upon our actions hangs the future of international event attendance, work presencing, and the ways we can sustain inclusive professional practices in the future. I am happy to share a virtual lunch date with you. This blog post was so popular it has also been featured by OpenAccessGovernment and other media outlets. My work with various Government think tanks, tech start-ups, organisations and the Government's own Digital Services is where I advocate for the step-change needed to enable inclusivity in tech. This includes activities where I provide training, workshops, presentations and other things to get people talking and to win the attention and buy-in of senior management.
While there's an acknowledgement of 'the problem', and my work goes some way to reframe the labelling of 'women in tech' at the heart of 'the problem', professionals, industry and policy continue to restrict how change can be implemented. So if you are recruiting into a tech role and you want to be 'inclusive', what can you do? Well, there is a lot, and this post will take you through some of the changes needed.
Textio is an online tool (US-based) that analyses job descriptions and suggests improvements to make the language more appealing to all applicants. Similarly, Gender Decoder for Job Ads highlights gendered wording. It identifies whether a post is masculine- or feminine-coded. Again, we are dealing with broad brush strokes here, but it is useful for 'having the conversation' that gender bias exists in language and role descriptions.
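Under the hood, tools like Gender Decoder work by matching the ad's text against wordlists of masculine- and feminine-coded terms. Here's a minimal sketch of that wordlist approach - the lists below are a tiny illustrative subset I've chosen for the example, not the actual lists either tool uses:

```python
# Minimal sketch of wordlist-based gender-coding analysis, in the spirit of
# Gender Decoder for Job Ads. The wordlists are illustrative only.
import re

MASCULINE_CODED = {"competitive", "dominant", "leader", "ambitious", "assertive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal", "committed"}

def gender_coding(ad_text):
    """Count coded words in a job ad and return a rough overall verdict."""
    words = re.findall(r"[a-z]+", ad_text.lower())
    masc = [w for w in words if w in MASCULINE_CODED]
    fem = [w for w in words if w in FEMININE_CODED]
    if len(masc) > len(fem):
        verdict = "masculine-coded"
    elif len(fem) > len(masc):
        verdict = "feminine-coded"
    else:
        verdict = "neutral"
    return {"masculine": masc, "feminine": fem, "verdict": verdict}

ad = "We seek an ambitious, competitive leader to join our supportive team."
print(gender_coding(ad)["verdict"])  # masculine-coded
```

Crude as it is, this shows why such tools are 'broad brush strokes': simple word counting has no sense of context, but it is enough to flag a draft for a human conversation about its language.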
Depending on how your organisation handles recruitment (internal and external), here is a summary of strategies that have been effective:
Alongside the tech industry, I am going through a similar process of recruitment for three new positions on a three-year tech project. In this process, my hands are tied (a lot) by formal HR methods. For example, I can tweak the job description template, but this needs senior management approval. I cannot change the layout of the template. And what I want to do is change some of the language used: switch 'ideal candidate' to 'ideally suited to'. I'd then like to go straight into how the roles will help develop the skills of the individual before listing the role responsibilities - in effect, reversing the layout of the current job template. I am continuing to think about new ways to support an inclusive recruitment process; some of these are easy-to-change things, others require buy-in from management and a change in how we think about recruitment. All do-able. These take time and the right people to 'say yes'. There's a lot of material out there - which is good! The current wave of activism amplifies clearly and profoundly across social media. My late father taught me the power of protest. He was strongly political: a single parent who taught himself to overcome the challenges of disability and a national child-care system swayed in favour of the mother's rights. Marches, strikes and protests - he preferred the latter because they allowed people to get together - were constant markers of my childhood and adulthood. These served as a means of making links at local and national levels that were being overlooked elsewhere. In retrospect, these activities (earnest as they were in their aims - equal rights for fathers, equal pay, end the poll tax) were also social activities and ways of connecting to neighbours and making new friends.
The social and economic relevance of social protest is currently a hot topic, receiving much attention in the news as it is organised and publicised across social media. The storm around Greta Thunberg's recent climate change protest in Bristol has drawn attention to the broader implications of social protest and its relationship to social media. While Thunberg is an inspirational activist for many, social media also makes her a target for internet memes, trolling and hate. We need to recognise the existence of a super-connected society: one that can at once enhance things for the better and respond to the call to arms to change the world. Dark and sometimes perverse social forces are also present - some 'citizen journalism', community hate groups, online forums, and social media trolling. The pioneer, in this case Thunberg, is undoubtedly savvy. However, awareness of her vulnerabilities reflects alarming social media targeting tactics designed to negatively affect Thunberg and enrage her supporters. Activism fed by social media reveals the inherently politicised state of different platforms (I am pointing directly at you, Facebook). At the same time, such content also shows how communities actively condemn the current state of 'things'. Activism enhanced by social media offers the opportunity for action against meaningless fake news and dangerous political figures. The renowned sociologist Manuel Castells [don't worry, it's a link to a Wikipedia page] shows that when the structures of capitalism are under strain (as they are today), alternative and countercultural values and ways of living gain more attention. With social media and figures such as Thunberg, alternative values are allowed to move into the mainstream. At such a time, social activism and its associated competencies, communities, networks and social skills provide a rich source for reassessing not only what to protest and how, but also what real change might mean.
Through protest and activism, I hope we continue to connect with our neighbours and make new friends. And I am certain we will continue to use social media (for good things). |