Trojan Horse image by Maz Hardey

A few days ago, my brilliant friend and education practitioner sent me a link to a Google blog post on AI and learning. On the surface, it's the usual optimistic fare: AI as a tool for personalised learning, for bridging gaps, for efficiency. And for a moment, a fleeting, optimistic moment, I saw the shimmering potential. Then, the cold, hard slap of reality. Not the reality of AI's limitations, but the reality of its deployment, its framing, and the deeper, insidious currents it often serves.

I am a professor. I am autistic. I am dyslexic. And like many others, my mind is not a neat collection of separate cognitive functions that conveniently slot into diagnostic categories. It is a messy, vibrant, sometimes terrifying convergence. To speak of "my dyslexia" or "my autism" as distinct entities is like trying to describe the flavour of a tom yum soup by isolating the salt. The essence is in the blend, the unpredictable, sometimes overwhelming symphony of sensations. And often, that symphony culminates in a profound, exhausting mush.

This is the ground upon which the grand narratives of inclusive technology are so often built. These are narratives that, I increasingly suspect, function less as bridges and more as Trojan Horses.

The Siren Song of the AI Education Silver Bullet

The rhetoric around AI in education is seductive. It promises to "level the playing field," to "personalise learning," to "empower neurodivergent students." For a moment, it sounds like salvation. For the dyslexic, AI will summarise dense texts; for the autistic, it will organise schedules or draft emails. And yes, in isolated moments, it can do precisely that. I can attest to the small victories. The AI summariser that can cut through a thicket of academic prose, saving days of concentrated cognitive effort.
Or maybe academics should simply write with clarity and avoid dense, inaccessible flourishes in their work… The executive function assistant that helps me wrangle a chaotic inbox. These are not trivial gains. They are moments of respite in a landscape that often feels like an uphill battle.

But here's the rub: these isolated victories are often presented as evidence of a systemic solution. And this is where the Trojan Horse comes in. The promise of inclusion via technology is hoisted over the walls of traditional pedagogy, not as a radical reimagining of the city itself, but as a new, more efficient weapon in an old war.

The Hidden Costs: Cognitive Exhaustion and the Illusion of Choice

Mark Rowlands often writes about the animal mind, embodied cognition, the way our being in the world shapes our understanding. Our neurodivergent minds are profoundly embodied. Our energy is not an infinite resource; it's a carefully managed, precious commodity. And often, it's already depleted.

The Google blog, like so many others, extols the virtues of these new tools. But who speaks of the cognitive overhead? Who calculates the hidden tax levied on a neurodivergent brain simply to learn a new tool, to integrate it into a workflow, to debug its inevitable failures?

Here's an insight into how my mind works. I cannot simply isolate the task itself and ask an AI to 'run it'. I need scaffolding around the task. For neurotypical individuals, adopting a new app might be fun, and might enhance their efficiency or productivity (regardless of how toxic this mindset is…). For a mind that already expends disproportionate energy on executive function, sensory filtering, and processing complex information, another solution can feel less like an aid and more like another brick dropping on your head.

We are told: "Just learn to prompt better!" "Explore its features!" "Maximise its potential!" "Use it 'critically'" (whatever that means).
These exhortations are not helpful; they simply add another layer of homework. There is a constant, low-level hum of anxiety: Am I using it correctly? Is it actually helping, or just adding another step? Is this "aid" actually a subtle form of digital gatekeeping, where only those with the energy to master it truly benefit?

Sometimes, the promise of support through technology simply shifts the burden. Instead of changing the inaccessible structure, we are handed a more complex hammer and told to adapt the world ourselves. And I want to be clear: it is apparent that AI was never designed with neurodiversity in mind. This is a significant challenge for anyone who encounters AI, especially if you are told to simply 'play' with the technology. That's a very scary place to be.

The Real Battle: Not Tools, But Systems

The truly critical edge here is that the focus on technological fixes often sidesteps the more fundamental, uncomfortable truths about our educational systems. Why do we need AI to summarise dense papers? Because academic writing is often needlessly convoluted, exclusive, and antithetical to effective knowledge transfer. Why do we need AI for executive function? Because curricula are often rigid, assessments inflexible, and institutional structures demand a standardised mode of engagement that disregards the vast spectrum of human cognition.

Instead of demanding that professors teach differently, that universities reform their assessment methods, or that academic culture embrace diverse forms of expression, we are offered a technological bypass. The argument morphs: "Oh, it's not the system that's flawed, it's just that some brains need extra tools to fit into it." Neurodiversity, in this context, becomes a convenient vehicle – a Trojan Horse – for the uncritical adoption of technology.
It grants moral legitimacy to the tech giants, allowing them to frame their products as benevolent instruments of inclusion rather than as profitable enterprises that may, in fact, exacerbate existing inequalities. The "neurodivergent user" is championed, not because the system fundamentally changes to accommodate them, but because their challenges provide a compelling justification for deeper technological integration.

And in this process, the very concept of "neurodiversity" is subtly reshaped. It moves from being an argument for systemic change and varied human experience to a consumer category for technological solutions. "You're neurodivergent? Here's your app! Here's your AI co-pilot!" The inherent value of diverse ways of thinking is lost in the scramble to digitally "fix" difference. (Screams!)

Reclaiming the Narrative

The future of education, for minds like mine, isn't about more tools to navigate a hostile educational and professional landscape. It's about cultivating a landscape that is less hostile to begin with. It's about assessments that celebrate varied forms of intelligence, not just rapid-fire recall or perfectly formatted essays. It's about curriculum design that anticipates a spectrum of processing styles. It's about institutional empathy that understands the finite nature of cognitive energy.

Let the AI summarise. Let it organise. But let us never mistake these tactical aids for strategic victories. Let us be vigilant against the insidious notion that our complex, beautiful, sometimes chaotic brains are simply problems awaiting a tech solution. And we need to agree on which AI to use, and why.

The true conversation about AI in education shouldn't be about "is this cheating?" or even just "who is this including?" It needs to be: Who is this demanding more from? Who is it truly serving? And are we using the genuine need for neuro-inclusion as a convenient smokescreen for a deeper, more problematic technological agenda?
Because sometimes, true inclusion isn't about adding more, but about stripping away the unnecessary, the rigid, and the burdensome, allowing all minds the space to simply be and to thrive. Our minds are not a market for your solutions; they are a reason to change your systems.