The curious learner. I find myself thinking about her a lot.
I even drew her, in a simple sketch, to try to make sense of the unease I was feeling. I call her 'The Learner'.

She isn't a difficult student. She's not the one in the back of the lecture hall, disengaged, scrolling through her phone. She's the one who is diligent. She's the one who is curious. She's the one who, after a session, stays behind to ask a question that lights up her whole face. A question that, in a healthier world, would be the entire point of education. A simple, wonderful question: "Could I…?"

"Could I," she might ask, "try to use this… this new AI thing… to help me brainstorm?"

"Could I," she'd continue, a little quieter, "see if it can help me structure my argument? Not write it! Just… help me play with the ideas?"

She would like to experiment. She would like to play. She is standing at the edge of the most significant technological shift since the internet, a tool that will fundamentally reshape her world and her career. And her first, pure, academic instinct is to poke it, to test it, to see how it works, and to understand how she can think with it.

And what do we do when she asks this question? We shame her.

We don't do it intentionally. We don't do it because we are cruel. We do it because we are, as an establishment, terrified. And so, when this curious learner holds up her spark of an idea, we douse it with the cold water of our own institutional panic.

From all sides, the voices come. The ones I drew in the speech bubbles, floating over her head, pressing down.

"Using AI shows you are lazy," whispers one voice. This is the voice of moral panic. It equates a new tool of augmentation with an old tool of shirking. We are shaming her for her curiosity, labelling it a moral failure, a lack of character.

"You must show evidence of critical thinking," insists another. This is the voice of deep irony. We say this while simultaneously discouraging her from critically engaging with the most important new tool of our time.
We are, in effect, telling her that the only way to show critical thinking is to pretend this technology doesn't exist.

"The uni has an AI policy," says a third, definitive voice. This is the wall of bureaucracy. A policy almost certainly drafted from a place of fear, not of exploration. A document designed to prevent rather than to guide. It is a shield for the institution, not a map for the learner.

"You already have teaching support." This one is perhaps the most heartbreaking. This is the voice of dismissal. It fundamentally misunderstands what she is asking. She is not asking for help because she is struggling; she is asking for permission to be curious. We are telling her that the established, "correct" pathways are the only ones she is allowed to walk.

So, what happens to The Learner? She gets shamed. Over and over again. And finally, she gets stuck.

Her curiosity, once a spark, is now a liability. She learns the real lesson we're teaching her: "Don't ask. Don't experiment. Don't play." She learns that the goal of education is not to explore the frontier, but to produce a piece of work that can be "evidenced" in a way that makes the institution feel safe. She learns to perform her "critical thinking" in a neat little box, far away from the messy, complex, and fascinating tools that she knows will define her future.

She becomes stuck. And we, the educators, are the ones who stuck her there.

This, I believe, is a profound failure. We are in the middle of a revolution, and we are spending all our energy trying to build higher walls instead of teaching our students how to be architects.

What if we changed our response? What if, when she asked "Could I…?", we leaned in and said, "I don't know. Let's find out together"? What if we built sandpits, not cages? What if we designed modules specifically around "playing" with these tools?
What if we asked, "Show me what you made with AI, and then write me a reflection on what it got wrong, what it got right, and what it taught you about your own thinking process"? What if we stopped writing policies based on a panicked desire to "catch" cheaters, and instead started developing pedagogies based on a genuine desire to cultivate co-thinkers?

Because The Learner is still there. She's still curious. But she's stopped asking.

And that should frighten us far more than any AI ever could.

You might also like my co-authored book on Generative AI and Education.