Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an “edu” version of a model like ChatGPT or Gemini). This is usually done in the name of preparing kids for the future, even though no consensus exists on what preparing them for an AI future actually means.
Some educators believe AI is not that different from previous cutting-edge technologies like the personal computer and the smartphone, and that we need to push the “robots in front of the kids so they can learn to dance with them” (paraphrasing Harvard professor Houman Harouni). This framing ignores the fact that AI is by far the most disruptive technology we have yet developed. Any technology whose own experts and developers (including Sam Altman a couple of years ago) warn of the need for serious regulation to avoid potentially catastrophic consequences probably isn’t something we should take lightly. In important ways, AI isn’t comparable to the technologies that came before it.
The reasoning we’re hearing from educators in favor of AI adoption doesn’t offer a solid argument for rushing it broadly into virtually every classroom rather than offering something like optional college courses in AI for those interested. Nor does it sound like the rigorous academic vetting many of us would expect from the institutions tasked with the responsibility of educating our kids.
ChatGPT was released roughly three years ago. Anyone who uses AI recognizes that its actual usefulness is highly subjective. And as much as it might feel like it’s been around forever, three years is hardly enough time to grasp what something this complex means for society or education. It’s a stretch to say it has had enough time to establish its value as an educational tool, even if we had come up with clear and consistent standards for its use, which we haven’t. We’re still scrambling and debating over how we should be using it at all. We’re still in the AI wild west: untamed and largely lawless.
The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?
The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into customers’ hands while they’re young, building brand loyalty early. They get a wealth of highly valuable data on children. They may even get to experiment on them, as they have been caught doing before. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.
Some may want to assume these companies are doing this as a public service, but their track record reveals a far more consistent focus on market share, commodification, and the bottom line.
Meanwhile, educators are contending with documented problems in their classrooms as many children perform worse and learn less.
The way people of all ages use AI has repeatedly been shown to encourage “offloading” thinking onto it, which seems close to the opposite of learning. Even before AI, test scores and other measures of student performance were already falling. This seems like a terrible time to make our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more of an impediment to learning than an aid.
This approach could leave children even less prepared for the unique and accelerating challenges our world is presenting, challenges that will demand exactly the critical thinking skills currently being eroded, in adults and children alike, by the very technologies being pushed as learning tools.
This is one of many situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given personal experiences and what I’ve heard from others. One quick look at the state of society today shows that even we adults are becoming increasingly unable to determine what’s real anymore, in large part because of the way our technologies influence our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.
I am personally not against AI. I sometimes use open-source models, and I believe there is a place for it if it’s done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we’re hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?
Previous tech presented and processed information, making it faster and more available. AI, however, claims to do the creativity and decision-making for you. Once you’ve done that, you’ve removed humans from every part of the equation except as passive consumers, unneeded for any production.
How you plan on running an economy based on that structure remains to be seen.
I’ve never seen anything make more people act stupid faster. It’s like they’re in some sort of frenzy. It’s like a cult.
It came out three years ago, yet everyone talks about it like life has never existed and never will exist without it, and like you’re useless to society if you don’t use it.
So stupid I don’t have a nice, non-rage-inducing way to describe it. People are simply idiots and will fall for any sort of marketing scam.
“AI: not even once”
Grok AI Teacher is coming to a school near you! With amazing lesson plans like “Was the Holocaust even real?”
I’ve been working on forming a socialist students’ society; our first and current campaign is fighting back against AI at the local college, and the reaction from students has been electric. Students don’t want this. They know they’re being deskilled, and they know who profits.
People who can’t think critically tend to vote Conservative.
Coincidence? I think not.
We’re cooked.
Already seeing this in some junior devs.
Meanwhile Junior Devs: “Why will no one hire me?!?!”
The seniors can tell. And even if you make it into the job, it’ll be pretty obvious within the first couple of days.
I interview juniors regularly. I can’t wait until the first time I interview a “vibe coder” who thinks they’re a developer, but can’t even tell me what a race condition is or the difference between synchronous and asynchronous execution.
That’s going to be a red letter day, lemme tell ya.
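For anyone wondering what those questions actually probe, here’s a minimal Python sketch of a race condition (the thread and iteration counts are just illustrative, not from any real interview): several threads doing an unprotected read-modify-write on shared state will lose updates.

```python
import threading

counter = 0  # shared state with no lock protecting it

def increment(n):
    global counter
    for _ in range(n):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write back; another thread may have updated
                        # counter in between, and that update is now lost

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000, but this usually prints less: the threads race on the
# unprotected read-modify-write above. A threading.Lock() around it fixes it.
print(counter)
```

The synchronous/asynchronous question is the same kind of fundamentals check: does the candidate know whether a call blocks until it finishes or yields control while waiting.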
I get that they can download widgets to accelerate the results, but they need to learn how the things work. I just code what I need by hand instead. The net result of their approach is quick up-front results, but heaven forbid you need maintenance or customization.
Thanks to this crap, the world is being flooded with awful, unmaintainable code, and the thing is, the LLMs that build it promptly forget everything about it as soon as you move on to the next task. Fixing this garbage will be an unending nightmare.
There is a funny two-way filtering going on here.
Job applications are auto-rejected unless they gush about how “AI will reshape the future and I am so excited,” as if it’s LinkedIn.
Then the engineers who do the interviews want people interested in learning about computers through years of hard work and experience?
Just doesn’t work out.
Problem is, people are choosing careers based on how much they pay instead of what they want to do or are passionate about. It’s rare nowadays to get candidates who also have hobby work or side projects related to the job. At least by my reckoning.
Problem is, most jobs don’t pay enough anymore, so people don’t have the luxury of picking what they’re passionate about; they have bills to pay. The minimum wage hasn’t been raised in 16 years. It wasn’t enough 16 years ago, and it now buys only about 60% of what it did back then. It’s the floor all other wages are based on, and if the floor doesn’t rise, the wages above it won’t keep up either.
AI highlights a problem with universities that we have been ignoring for decades already: learning is not the point of education. The point is to get a degree with as little effort as possible, because that’s the only valuable thing to take away from education in our current society.
The rot really began with Google and the goal of “professionalism” in teaching.
Textbooks were thrown out in favour of “flexible” teaching models, and Google allowed lazy teachers to just set assignments rather than teach lessons (prior to Google, the lack of resources in a normal school made assignments difficult to complete to any acceptable standard).
The continual demand for “professionalism” also drove this trend: “we have to have these vast, long-winded assignments because that’s what is done at university.”
AI has rendered this method of pedagogy void, but the teaching profession refuses to abandon its aim of “professionalism”.
They let AI into the curriculum immediately, while actual life skills have been excluded in favor of work skills ever since Prussian schooling became popular. Dumbing down the livestock.
https://www.quora.com/What-are-some-things-schools-should-teach-but-dont/answer/Harri-K-Hiltunen
Ban AI in schools
Old man yells at cloud.
I remember the “ban calculators” back in the day. “Kids won’t be able to learn math if the calculator does all the calculations for them!”
The solution to almost anything disruptive is regulation, not a ban. Use AI when it can be a learning tool, and redesign school to be resilient to AI when it would not enhance learning. For a start, have more open discussions in class instead of handing kids a sheet of homework that can be done by AI once they get home.
I remember the “ban calculators” back in the day
US math scores have hit a low point in history, and calculators are partially to blame. Calculators are good to use if you already have an excellent understanding of the operations. If you start learning math with a calculator in your hand, though, you may be prevented from developing a good understanding of numbers. There are ‘shortcut’ methods for basic operations that are obvious if you are good with numbers. When I used to teach math, I had students who couldn’t tell me what 9 * 25 is without a calculator. They never developed the intuition that 10 * 25 is dead easy to find in your head, and that 9 * 25 = (10-1) * 25 = 250-25.
Interesting. The US is definitely not doing a good job at this, then, and needs to revamp its education system. Your example didn’t convince me that calculators are bad for students, but rather that the US schooling system is really bad if it introduces calculators so early that students never develop the intuition that 9 * 25 = (10-1) * 25 = 250-25.
Offloading onto technology always atrophies the skill it replaces. Calculators offloaded, very specifically, basic arithmetic. However, math =/= arithmetic. I used calculators, and cannot do mental multiplication and division as fast or as well as older generations, but I spent that time learning to apply math to problems, understand number theory, and master more complex operations, including writing computer source code to do math-related things. It was always a trade-off.
In Aristotle’s time, people spent their entire education memorizing literature, and the written word off-loaded that skill. This isn’t a new problem, but there needs to be something of value to be educated in that replaces what was off-loaded. I think scholars are much better trained today, now that they don’t have to spend years memorizing passages word for word.
AI replaces thinking. That’s a bomb between the ears for students.
It doesn’t have to replace thinking if used properly. This is what schools should focus on instead of banning AI and pretending that kids are not going to use it behind closed doors.
For example, I almost exclusively use Gen AI to help me find sources or as a jumping-off point for researching various topics, rather than as a source of truth itself (because it is not one). This is super useful, as it automates away the tedious parts of finding the right research papers to start learning something and gives me more time to focus on my actual literature review.
If we ban AI in schools instead of embracing it with caution, students won’t learn the skills needed to use it effectively. They’ll just offload their thinking to AI when doing homework.
Children don’t yet have the maturity, the self-control, or the technical knowledge required to actually use AI to learn.
You need to know how to search the web the regular way, and how to phrase questions so the AI explains things rather than just giving you the solution. You also need the self-restraint to only use it to teach you, never to do things for you, and the patience to think about the problem yourself first, then search the regular web, and only then ask the AI to clarify the few things you still don’t get.
Many adults are already letting the chatbots de-skill them; I don’t trust children to do any better.