The conscientious objector. I find myself looking at her with a mix of profound admiration and a distinct, sharp pang of the wiggly gut-guilts.
Recently, I’ve seen allies in academia, scholars I deeply respect, drawing a line in the sand. They are showing us what it means to resist the AI-creep. They are calling on journals to allow authors to explicitly state: “No generative AI was used to prepare or write any part of this article.” I worry about the AI already embedded in journal gateways to check paper references and editorial style... but let's park that part of the exchange. For now.

Actively resisting AI in the research publication process is a beautiful, defiant stance. It is a reclaiming of human labour, a protection of the cognitive sweat that makes research and writing an act of thinking rather than an act of prompting. To be an AI Conscientious Objector is to choose the protection of human values in a much-muddied ecosystem. It is a moral clarity and stance that I crave.

But, as I sit here, staring at the blinking cursor of my Outlook inbox, where Microsoft’s Copilot is already, without my asking, suggesting how I might reply to a student, I realise that this clarity is out of my reach. During the research publication process, I want to object. God, I want to object. But I am tired and in a bind. And more importantly, I am entangled.

The Myth of the Binary Choice

The current discourse around AI in higher education is trapping us in a binary that is as harmful as it is false. We are told we either "adopt" or we "resist." We are either the tech-utopian evangelists or the Luddite holdouts. I'm going to drift away now from journal publication gateways to broader Higher Education policy on AI. That is a hot mess.

The current discourse frames AI adoption as a simple yes/no proposition, implying that I can opt out by sheer force of will. It suggests I can hang a 'Do Not Disturb' sign on my professional life and the algorithms will politely walk on by. But the door doesn't lock. The algorithms are built into the hinges.

Ok, perhaps I do not have sufficient willpower; is that the problem? Let’s look at the architecture of my working day. My university, like yours, has integrated AI into the very bedrock of our infrastructure. It is in the Blackboard and Moodle sites where I must upload my teaching and research materials; these platforms now use AI for accessibility scanning and content prediction. It also creates quizzes. It is in the email client I cannot turn off. It is in the "suggested actions" in my calendar. I receive automatic summaries of meetings I do and do not attend, complete with action points. (Handy or hell?)

To be a true conscientious objector in 2025 goes far beyond refusing to use ChatGPT to write a paper. It would require me to dismantle the entire digital scaffolding of my employment. It is a state of resistance I cannot survive.

The Policing vs. Vague Innovation Thirst Trap

I have spent the last few days trawling through the digital archives of higher education, reading the public-domain AI policies of universities across the UK, the US, and Australia. It has been a descent into a very specific kind of bureaucratic bleakness. The landscape I found is arid, and it is hostile. The tone of these documents, ranging from the draconian to the delusionally optimistic, reveals exactly why so many of us feel trapped. In parsing the legalese and the strategic ambiguity, I have realised that our institutions generally fall into two distinct camps, neither of which seems to care about the consent of the humans doing the actual work.
On one side, we have the Policing Camp, which views every student as a potential criminal and every AI tool as a weapon to be confiscated. On the other, we have the Vague Innovation Camp, a corporate thirst trap that uses buzzwords like 'literacy', 'employability' (ha), and 'opportunity' to mask a massive, unfunded mandate for staff up-skilling.

So, here is a map of the cages as I see them. I have analysed the public policies of major institutions globally, and what becomes immediately clear is that these texts are rarely static. They are euphemistically called "living documents." In theory, this suggests agility and responsiveness. In practice, for the staff whose labour they govern, a "living document" is a nightmare. It means the rules of engagement are quietly updated in the dead of night, often without announcement or consultation. The ground beneath our feet is being shifted by administrative edits, turning our daily workflow into a game of compliance roulette.

I am using this mapping to anchor my argument for the Reluctant Cyborg. This analysis shows that current AI education policy is often just a shifting set of demands: it requires you to constantly up-skill and adapt while offering zero protection for your intellectual property, your data privacy, or your right to say "no." The reality is stark: no one has sufficient policy for what they are actually doing. We are building the plane while flying it, but the university has decided that the cost of the fuel, our cognitive load (God help you if you are already burnt out or neurodivergent), our creative data, and our autonomy, is a price they are willing to let us pay.

TLDR: No one has sufficient policy for what they are doing.
Table 1: Policing & Surveillance Camp
Table 2: The "Vague Innovation" Camp
Table 3: The "Explicit Policy" Camp (Rare)
What is Missing? (The Labour Hole)
In analysing these documents, what is absent is even more telling than what is present.

* No Right to Disconnect: None of these policies mention a staff member's right not to use AI in their workflow (e.g., turning off Copilot in Outlook).
* No Intellectual Property Protection for Staff: They talk about protecting university data, but rarely about the fact that your lectures, notes, and feedback are being used to train the models you are forced to use.
* No Workload Allocation: "Becoming AI Literate" (Russell Group) takes hours of weekly study. None of these policies allocate hours in the workload model for this "mandatory" learning.

Key Pull Quotes

* From Stanford: "Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person." (Translation: If you don't write a specific policy for every assignment, you are failing.)
* From Yale: "Faculty members are expected to provide clear instructions on the permitted use of generative AI tools for academic work and requirements for attribution. Likewise, students are expected to follow their instructors’ guidelines about permitted use of AI for coursework." (Translation: Use AI tools to be efficient, but if the tool lies, it's your fault.)
* From Russell Group: "Universities will support students and staff to become AI-literate." (Translation: Resistance is illiteracy.)

We are researching and teaching in a cage built of vague principles and guidance that shift the liability and labour onto the individual, while the door (consent) has been removed entirely.

Camp 1: These policies view AI exclusively as a weapon in the hands of cheating students.

* Carnegie Mellon University offers syllabus language that explicitly "forbids the use of ChatGPT or any other generative AI tools at all stages of the work process, including brainstorming."
* Monash University frames unauthorised AI use as a straight "breach of academic integrity," placing the burden entirely on the individual to prove their innocence.

This approach turns us into cops. It demands we spend our precious marking time acting as forensic digital investigators, scanning for the smell of synthetic text. It destroys the trust between learner and teacher.

Camp 2 is perhaps more insidious. These are the policies that use words like "opportunity," "literacy," and "enhancement" to mask the increase in our workload.

* The Russell Group Principles (UK) state that "universities will support students and staff to become AI-literate." Sounds nice, right? But "support" here is often code for "mandatory up-skilling on top of your existing workload."
* University College London (UCL) tells students that AI can "reduce the need for critical engagement," yet simultaneously encourages its use for "ideas generation or planning."

What is missing from all of these documents? Consent. Nowhere does it say: "Staff may choose not to use tools that scrape their intellectual property." Nowhere does it say: "We will not integrate AI into your email client without your permission." The policy is: you're on your own. Use it, but don't get caught using it wrong. Be efficient, but don't be lazy. Be transparent, but don't slow down.

Neurodiverse Cyborg

And then there is the body. Or, more specifically, the neurodiverse brain in a system designed for neurotypical endurance. My research area is technology to support assistive learning and neurodiversity. I have spent years advocating for tools that level the playing field.
For years, I have relied on software like Dragon Dictate to bridge the gap between the speed of my thoughts and the limits of my executive function or physical capacity. Here lies the rub: the tools I use to survive are now AI tools, whether I opted for those elements or not. Dragon Dictate, Grammarly, the screen readers, the speech-to-text synthesisers, they have all been retrofitted with generative AI. To conscientiously object to AI is, for me, to conscientiously object to the ramp that lets me enter the building.

The volume of work required of a modern academic is crushing. For someone who is neurodiverse, the cognitive load of administrative violence, the forms, the emails, the compliance metrics, is a mountain that grows daily. AI offers a scaffold. It offers a way to handle the sludge work so I can save my remaining spoons for deep thinking. I need access to work tools. I must persist in this space (my daughter and I depend on me supporting us). But how do I reconcile this dependency with my deep ethical discomfort? How is this healthy?

I feel a lot of guilt and shame. I spent my entire education knowing I wasn’t good enough in that system. A career as an academic means you experience being continually flattened, or at least having your extreme edges rounded out to ‘fit’ in disciplines, theory, pedagogies and other buckets. Now there’s another ‘unfit’ moment. I feel/know that the architecture of AI is toxic and dangerous. I know the energy required by data centres is killing the planet. I also know that current investment in technology is in AI, that it is integrated into many of the points of contact we have daily, and that I cannot remove myself fully from them.

The Tech Thirst Trap of the Business School

I teach Business and Computing in a Business School. My students are not entering a world where they can choose to be purists. They are entering industries that demand fluency in these tools. If I refuse to engage with AI, if I treat it solely as a plague to be avoided, I am failing to pass on key skills and an understanding of the professional tools they will be expected to adopt automatically. I have to teach them the skills industry wants, even as I loathe the extractionist logic of that industry. I have to show them how to use the tool, while simultaneously teaching them to critique the hand that holds it. It is a dizzying, hypocritical dance. It does make me feel unwell.

From Objector to Critical Survivor

So, where does that leave me? I cannot be the Conscientious Objector, standing pure on the outside of the machine. My survival depends on the machine. But I refuse to be the uncritical cheerleader. Perhaps we need a new category. Not the "Objector," but the Critical Survivor. Or perhaps the Reluctant Cyborg.

We need to acknowledge that adoption is happening with and without our consent. The "conscientious objection" image is powerful because it highlights agency. But for many of us, disabled scholars, overworked staff, precarious workers, and especially our students, that agency is an illusion.

I am deeply unhappy with the current state of affairs. I resent that my emails are being scraped to train a model I didn't ask for. I resent that my assistive tools now come with a side order of environmental destruction and copyright theft. But I am here. I am inside the cage. And if I am to remain a curious learner, I cannot simply close my eyes and pretend the beast isn't in here with me. I have to look it in the eye. I have to figure out how to use it to break the bars, rather than letting it consume me.
We must stop shaming the individuals using AI to survive a collapsing infrastructure, and instead direct our rage at the institutions that broke the system so thoroughly that automation became the only scaffold left.