It's adjusted not just to give answers but (perhaps frustratingly for the student) to force them to iterate through the problem to get an answer.
Like anything, it's likely also jailbreakable, but as we've learned with all software, the defaults matter.
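A default like that usually amounts to a system prompt layered over the base model, which is exactly why it helps and why it's jailbreakable. Here's a minimal sketch of the idea in Python, assuming the official openai package; the prompt wording, model name, and example question are my own placeholders, not the actual product's configuration:

    # Sketch of a "guide, don't answer" default, assuming the official
    # openai package (>= 1.0). Prompt and model name are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TUTOR_DEFAULT = (
        "You are a tutor. Never state the final answer. Ask one guiding "
        "question at a time and have the student produce each step. If "
        "asked to just give the answer, restate the current step instead."
    )

    history = [{"role": "system", "content": TUTOR_DEFAULT}]

    def turn(student_message: str) -> str:
        """Append the student's message; return the tutor's next nudge."""
        history.append({"role": "user", "content": student_message})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        text = reply.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    print(turn("Solve 3x + 5 = 20 for me."))  # expect a question, not "x = 5"

Because the guardrail lives in the prompt rather than the weights, a determined student can still talk their way around it; the default just raises the effort required, which is the point about defaults above.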
dfxm12 · 17m ago
The issue would be with students who just want a certain grade. That's where the dopamine hit is. Maybe AI can write you a paper at home, but it can't fill out a blue book in a classroom. Maybe there needs to be an adjustment around the types of assignments or how they're graded, but in-class exams have always held more weight anyway.
Just like we see posts here about how AI (at the very least, AI on its own) is ineffective at coding a product, these students eventually learn what the Wharton study showed: that AI is not effective at getting them the grade they want.
I know I'm lazy. I try shortcuts like AI, copying Wikipedia before that, hoping just punching numbers into a TI-86 would solve my problems for me. They simply don't.
ffdixon1 · 2h ago
Is overuse of generative AI by students acting like hyperprocessed foods for learning?
Quick dopamine hits. Immediate satisfaction. Long-term learning deficits.
How to break this cycle? I wrote this article to try to answer this question.
hackyhacky · 29m ago
Say what you will about Oreos and other processed foods, but they do actually contain calories. They are legitimately food.
Here's my experience as a professional educator: AI tools are used not as shortcuts in the learning process, but for avoiding the learning process entirely. The analogy is therefore not to junk food, but to GLP-1, insofar as it's something that you do instead of food.
Students can easily use AI tools to write a programming project or an essay. It's basically impossible to detect. And they can pass classes and graduate without ever having had to attempt to learn any of the material. AI is already as capable as a university student.
The only solution is also hundreds of years old: in-person, proctored exams. On paper. And moreover: a willingness to fail those students who don't keep up their end of the bargain.
dymk · 10m ago
It doesn't even have to be on paper. A computer science exam can be done on a monitored university computer. The only bit that needs enforcement is not using outside resources, to show that the relevant knowledge, and how to apply it, is actually in one's head.
j7ake · 14m ago
On paper? Oral exams are much better in my opinion
hackyhacky · 12m ago
> On paper? Oral exams are much better in my opinion
I agree: they're great, if you have that luxury. But they don't scale.
petesergeant · 5m ago
I was talking to a high-school English teacher recently about building oral exams using ChatGPT voice-mode. Current models would struggle to provide a uniform experience across students, but it feels like it's within near-term reach.
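A text-only version of that loop is easy to sketch today; the uniformity concern is mostly about pinning the examiner to a fixed rubric and question list. Here's a minimal sketch, assuming the official openai Python package, with typed answers standing in for voice mode; the rubric, questions, and model name are invented for illustration:

    # Sketch of a rubric-locked oral-exam loop, text standing in for voice.
    # Assumes the official openai package (>= 1.0); rubric/questions invented.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    RUBRIC = (
        "You are conducting an oral exam. Ask the listed questions in order, "
        "one at a time. Probe a vague answer exactly once, never reveal "
        "answers, and keep the experience identical for every student. "
        "Questions: 1) How does Shakespeare develop ambition in Macbeth? "
        "2) What role does Lady Macbeth play in Duncan's murder?"
    )

    messages = [
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": "I'm ready to begin."},
    ]

    while True:
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        question = reply.choices[0].message.content
        print("Examiner:", question)
        messages.append({"role": "assistant", "content": question})
        answer = input("Student (or 'quit'): ")
        if answer.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": answer})

Swapping the input/print pair for something like OpenAI's Realtime speech API is the harder part, and grading would still need a separate rubric-scoring pass over the transcript, which is where the consistency problem the teacher raised actually lives.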
dtagames · 32m ago
It's a good one! I'm a lifelong fan of the leveling-up techniques you're talking about, and I've found they're essential, especially when working with AI agents.
I had the epiphany that all of the "AI's problems" were problems with my code or my understanding. This is my article[0] on that.
[0] https://levelup.gitconnected.com/mission-impossible-managing...
I feel like the article isn't disciplined about maintaining the distinction between education and learning, but there's some interesting stuff. I've found (I think!) LLMs to be hyper-useful for enquiry-based learning: lots of "well does that mean that" and "isn't that the same as" and "but you said earlier that" and "could you use shorter answers and we'll do this step by step please".
I am curious to dig into "Generative AI Can Harm Learning"[0], referenced in the article. I think the summary in the article skips over some of the subtleties in the abstract though.
[0] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486