Using LLMs to produce material is not a good idea, except maybe to polish up grammar and phrasing.
As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also follow a similar structure throughout a course, which reinforces students' expectations and reduces their mental load. The only way to do this is to prepare the material yourself.
Material created by an LLM will have the issues you mentioned, yes, but it will also be harder to teach, for the reasons above. In the US, where teaching is already in a terrible state, I wouldn't be surprised if this is accepted quietly, but it will have a long-lasting negative impact on learning outcomes.
If we project this forward, a reliance on AI tools might also lower expectations for the quality of the material, which will drag the rest of the material down as well. This mirrors the rise of expendable, mass-produced products when we moved the knowledge needed to produce goods from workers to factory machines.
Commodities are one thing; you could argue that the decrease in quality is offset by volume (I wouldn't, but you could), but for teaching? Not a good idea. At most, show students how to use LLMs to look for information, and warn them about hallucinations and the difficulty of tracing sources.
I agree you shouldn't use LLMs to produce material wholesale, but I think they can be genuinely useful when used thoughtfully.
I recently taught a high-school-equivalent philosophy class, and wanted to design an exercise where my students allocate a limited number of organs to recipients who were not directly comparable. I asked an LLM to generate recipient profiles for the students to choose between. First pass, the recipients all needed different organs, which kind of ruined the point of the dilemma! I told it so, and the second pass was great.
Even with the extra handholding, the LLM made good materials faster than if I had designed them manually. But if I had trusted it blindly, the materials would have been useless.
How can you ensure that the exercise actually teaches the students anything in this case? Shouldn't you be building the exercise around the kinds of issues that are likely to come up, or that are difficult/interesting?
If you're teaching ethics in high school (which it sounds like you are), how many minutes does it take to write three or four paragraphs, one per case, highlighting different aspects that the student would need to take into account when making ethical decisions? I would estimate five to ten. A random assortment of cases from an LLM is unlikely to support the ethical themes you've talked about in the rest of the class, and the students are therefore also unlikely to be able to apply anything they've learned in class before then.
This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
> How can you ensure that the exercise actually teaches the students anything in this case?
By participating in the exercise during class. Introducing the cases, facilitating group discussions, and providing academic input when bringing the class back together for a review. I'm not just saying "hey take a look at this or whatever".
> If you're teaching ethics in high school (which it sounds like you are)
Briefly and temporarily. I have no formal pedagogic background. Input appreciated.
> This may sound harsh, but to me it sounds like you've created a non-didactic, busywork exercise.
I may not have elaborated well enough on the context. I'm not creating slop in order to avoid doing work. I'm using the tools available to do more work faster - and sometimes coming across examples or cases that I realized I wouldn't have thought of myself. And, crucially, strictly supervising any and all work that the LLM produces.
If I had infinite time, then I'd happily spend it on meticulously handcrafting materials. But as this thread makes clear, that's a rare luxury in education.
I've done years of private 1:1 teaching and some class teaching, though not class lecturing, which is presumably the kind of material you're talking about.
> As a former teacher, I know you need to have a good grasp of the material you are using in order to help students understand it. The material should also be in a similarly structured form thoughout a course, which will reinforce the expectations of the students, making their mental load lesser. The only way to do this is to prepare the material yourself.
It's absolutely necessary to have a good fundamental understanding of the material yourself. The teachers abusing AI without even catching these obvious issues clearly don't have such an understanding - or they're not using it, which is effectively the same. In fact, they're likely to have a much worse understanding than your average frontier LLM, especially given this post is about high school level teaching.
> The only way to do this is to prepare the material yourself.
As brought up in other comments, what does "yourself" mean? For decades teachers have been using premade lesson plans, either third-party, school-supplied, or otherwise obtained, with minor edits. All teachers? Of course not, but it's completely normalized. Are they doing it themselves? And the remainder did it together with Google and Wikipedia - were they also not doing it themselves? Especially given how awful modern Google is (and the worldwide number of high school teachers using something like Kagi will be <100 people), using a frontier model, especially with web search enabled, is simply a better version of doing that, if used in the same way.
If you use a prepared lesson plan, it at least has some structure that students can learn to expect; and if you search for information on the internet, you are still compiling it yourself, which again means structure - a structure _you_ made using information that _you_ have parsed and decided to include. You will also have sources.
The same can be true for LLM output. One can use LLMs simply to gather information, replacing Google with a much better search, and then create the lesson plans yourself based on what they turn up. Nothing stands in the way of doing that. It's just like the many devs on this site who basically use them as a StackOverflow to ask questions, rather than saying "give me the full code of the solution" and blindly copy-pasting it in.
LLMs with web search enabled also give sources.
There's not much reason to believe LLMs are incapable of producing a lesson plan as well-structured as those provided by third parties. It fits squarely within what they're good at.
The problem is, again, that the teachers in question clearly don't care about the quality of their teaching in the slightest. Are LLMs a tool that could amplify their awfulness further? Sure, but at the same time they can amplify the goodness of the hopefully larger group of teachers who do care.