The arrival of a neurodivergence diagnosis, especially when it arrives in adulthood alongside the same diagnosis of one’s own child, is less a lightning bolt and more a gradual, dawning touch of light on a landscape left in the dark for decades.
Living in Yorkshire, I find that the stark beauty of the moors mirrors the isolation felt by many families navigating the SENCO/SEND support system. I have come to view the diagnosis process as a bureaucratic checkpoint that marks the boundary between hope and the great, silent void that follows. We are told that the diagnosis is the key, the golden ticket that unlocks understanding and accommodation (like an EHCP), yet for so many of us, including the families whose voices echoed with such painful clarity in recent reports on the crisis in SEND provision, that key opens a door to an empty room. The obscene waiting times, years of suspended animation in which children drift unmoored through an education system that was never built for them, are a national scandal.

But it is what happens after the diagnosis that I find myself compelled to critique with urgent fury. Many of us, myself and my daughter included, are living through a systemic abandonment that is being quietly plastered over with the thin, digital veneer of "innovation." In the absence of human support, in the vacuum left by the dismantling of accessible education and the chronic underfunding of SEND (Special Educational Needs and Disabilities) services, we are witnessing a dangerous shift.

The Hollow Recommendation: "Just Use AI"

To my horror, my daughter and I are now offered a new, hollow recommendation in our support plans: "Use AI." It appears as a throwaway comment, a suggestion that generative artificial intelligence can act as an executive function prosthesis: a scheduler, a drafter of difficult emails, a summariser of the dense texts we struggle to process. On the surface, to the neurotypical observer, this might seem like a modern, efficient solution. But to those of us living inside the neurodivergent experience, this recommendation is not just unhelpful; it is an insidious form of harm that misunderstands the very nature of our exhaustion.

My daughter is 9.5 years young.
She is legally too young to hold the very account credentials that are being prescribed as her salvation. This recommendation acts as if the internet is a safe, neutral library, rather than a surveillance engine designed to harvest attention. When a support plan says "Use AI" without specifying which tool, whose safety guardrails, and what data privacy protections are in place, it is not a strategy; it is negligence.

There are no dosage instructions on this prescription. Which AI is she supposed to use? The one that hallucinates facts? The one that reinforces gender biases? (Hell, no.) The one that scrapes her input to train its next iteration? And how does this function in a classroom that is likely banning smartphones? Is she to be the exception, navigating the social stigma of being the "cyborg" student while her peers use pencils? We need to ask: for what purpose? Are we teaching her to think, or are we teaching her to prompt? By handing her a text box instead of a hand, we are not offering her a scaffold for her executive function; we are feeding her developing mind into a black box that offers no duty of care, no empathy, and absolutely no guarantee of safety.

To suggest that a neurodivergent person, already drowning in the sensory and cognitive overwhelm of a world designed for linear brains, should simply "adopt AI" is to ignore the immense cognitive tax required to operate these systems. To make this recommendation for a child is... well, I am at a loss for words that do not rhyme with 'cluck' or 'spit'. We are being asked to learn a new language, to master the art of prompt engineering, and to navigate an interface that is fundamentally designed for data extraction rather than human care. When a support plan offloads the work of scaffolding onto a chatbot, it ignores the reality that using these tools requires a high degree of executive function, the very resource we are so often depleted of.
We must formulate the request, sift through the generated noise, fact-check the hallucinations, and integrate the output into a reality that rarely matches the machine's statistical average. AI IS NOT SUPPORT. This is additional labour disguised as a life hack.

The Data Extraction Trap

There is a darker current running beneath this technological solutionism, one that connects the crumbling walls of our classrooms to the gleaming campuses of Silicon Valley. The AI-bro oligarchy, those architects of Large Language Models (LLMs) who preach the gospel of efficiency, have no vested interest in the messy, non-linear, divergent goals of our community. Their technology is built on a foundation of normative data, training models that flatten the spikes of human variance into a smooth, predictable curve. By relying on these tools, we risk forcing our own minds, and our children's minds, into a feedback loop that prioritises neurotypical mimicry over authentic neurodivergent existence. Such systems effectively turn our need for support into unpaid labour for the very tech giants that exclude us. We are not users to be supported; we are resources to be mined.

The Political Abdication

This digital deflection serves a political purpose as well: it allows the state to abdicate its responsibility. If the answer to a child's inability to access the curriculum is "use ChatGPT to summarise the lesson," then the school no longer needs to invest in smaller class sizes, sensory-friendly environments, or specialist teaching assistants. The burden is shifted back onto the individual, back onto the parent who is likely already burnt out from fighting for the diagnosis in the first place. The "cliff edge" of support that the National Autistic Society has campaigned against for years, highlighting how thousands of adults and children are left stranded after diagnosis, is now being populated by chatbots instead of social workers. This is a devastation of the social contract.
Research and campaigns from the National Autistic Society repeatedly show that without the right support at school and at home, autistic people are at risk of developing serious mental health problems, yet the response is to offer a subscription to software rather than a relationship with a human being. This systemic abandonment is actively weaponised by political opportunists who have found a convenient scapegoat in the very families they are meant to serve. We need look no further than the incendiary rhetoric of figures like Reform UK's Richard Tice, who has grotesquely dismissed the rising tide of neurodivergent diagnoses as a 'dodge,' branding it the modern-day equivalent of a 'bad back' used to evade economic productivity. This reflects the brutal calculus of a system that views human variance as an inefficiency to be purged. It reveals a political class with their noses firmly planted up the arse of the AI-bro oligarchy, eagerly adopting a Silicon Valley worldview in which citizens are reduced to data points and anyone who cannot be seamlessly integrated into the algorithm is discarded.

The Harari Hazard: A Note on Futurism

In a chilling echo of Yuval Noah Harari's warning about the rise of a 'useless class,' these leaders are collaborating to build a future where the state abdicates its duty of care to software. However, while Harari serves as a useful starting point for futurist exploration, we must be deeply sceptical of his so-called populist science, which often sacrifices rigorous accuracy for the sake of a compelling, terrifying narrative. As the neuroscientist Darshana Narayanan has sharply critiqued, Harari's work is riddled with scientific errors and a reductive biological determinism that should sound alarm bells for the neurodivergent community. (Hello!)
When Harari speculates about "fixing" autism by rewriting genetic code, treating complex human variance as a mere software bug, he is not merely simplifying the science; he is reinforcing the dangerous eugenicist undertones that often lurk beneath the shiny surface of Silicon Valley ideology. His storytelling serves the interests of surveillance capitalists by presenting their dominance as an evolutionary inevitability rather than a political choice. By accepting the premise that humans are hackable animals whose worth is determined by data-processing efficiency, he inadvertently validates the very dehumanisation we are fighting against. No. We are not obsolete algorithms waiting to be upgraded or discarded; we are complex, non-linear human beings whose value exists entirely outside these metrics of utility. Instead of eyeing up the next generation's blood for some vampiric wellness hack, why not stick to the classics? Get a portrait and hide it in the attic.

A Call for Human Infrastructure

The education system, particularly here in the North (read: not London), where waiting lists for assessments can stretch years beyond those in the South, is in a state of collapse. We see this in the stark disparity of waiting times, a postcode lottery that leaves families in Yorkshire waiting over a thousand days for an answer, as highlighted by the Child of the North reports. When the answer finally comes, it arrives in a world where schools are under-resourced and teachers are overwhelmed. To introduce AI into this breach without proper scaffolding, without a human guide to help interpret and filter the technology, is to set neurodivergent people up for a new kind of failure, with even less social support. We do not need a tool that generates more text, more options, and more information to process. We need flexibility. We need reduction. We need calm.
We need human empathy that understands why a task is difficult, not a machine that simply completes the task in a way that mimics a neurotypical standard we can never sustain. True support for neurodiversity requires protecting vulnerable people from AI-tech-bro capitalist efficiency. It requires us to reject the idea that a person's value is tied to their productivity or their ability to interface with a complex system. We must recognise that the tech fix is often a trap, a way to privatise support while stripping it of its humanity.

I reject the premise that the only bridge across our exclusion is an algorithm. As a mother, I will not teach my daughter that she must merge with the machine to be valid. We need to rebuild the human infrastructure of care, to demand education systems that are accessible by design, not patched up with plugins. The AI revolution bubble is leaving us behind, not because we cannot use the tools, but because the tools were never built to hold the weight of our beautiful, complex, divergent lives. The silence at the end of the diagnosis process cannot be filled with code. It must be filled with community, with understanding, and with the radical refusal to be flattened.

The Unacceptable Contract

So here is my refusal. I am returning this recommendation to the sender, marked 'Incompatible with Human Life.' Do not offer my daughter a chatbot when what she needs is a chance. Do not offer me a productivity hack when what I need is a society that does not view my neurology as a glitch to be patched, or as something to exploit wherever my hyper-focus can be driven to the point of burnout. We are not interested in becoming more efficient data points for your Large Language Models. We are not interested in hacking our way out of a systemic failure that you have engineered. If the only bridge you can build across the chasm of our exclusion is made of code, then burn it. We will not cross it.
We will stay on this side, in the messy, inefficient, beautiful reality of our divergent ways of being and feeling, and we will build our own infrastructure. It will be built of patience, not prompts. It will be powered by empathy, not electricity. And it will not require us to flatten ourselves to fit through the slot of your machine. To the politicians calling our existence a "dodge," to the AI-tech bros mining our exhaustion for data, and to the futurists predicting our obsolescence: we are not your "useless class." We are the only ones who are awake.

Sincerely,
A Mother, A Professor, and A Human Being who refuses to be automated.

My notes from these sources:
[1] County Councils Network Report, Nov 2025
[2] 'Ticking Timebomb', The Guardian, Mar 2025
[3] National Autistic Society, 'Autism assessment waiting times', Nov 2025
[4] 'Reform UK's Richard Tice says children wearing ear defenders in school is "insane"', The Independent, Nov 2025
[5] 'Yuval Harari's blistering warning to Davos in full', World Economic Forum, Jan 2020
[6] 'The Dangerous Populist Science of Yuval Noah Harari', Darshana Narayanan, Current Affairs, Jul 2022
[7] N8 Research Partnership, Child of the North Report, 2024