You've basically got Einstein, Curie, Tesla, Newton, Charpentier, and Gladys West in your pocket, ready to help. We carry the sum of human knowledge around with us, but no one's teaching us how to actually use it. Schools are more focused on teaching you not to use it. That's the real problem.
Our system still rewards memorisation over reasoning, and when students use AI to solve a problem it's treated as cheating, even though it's the same tool they'll use in the real world.
IMO, teachers should see AI as an extension of a student's brain, not a replacement. Once that clicks, they'll start using it to unlock their students' full potential rather than shutting it down out of fear.
But... let's be honest. It takes skilled teachers to make that work. Letting students use AI without any guidance is just as damaging as banning it altogether.
I told my students once: you're going to build a little app that solves a real problem in our school. It can be anything. You'll demo the app when it's done. But I don't want you to send me the code; I want the prompts you used to build it. The better your prompts, the better your grade.
dr_kiszonka · 17h ago
The problem is that many students outsource reasoning to AI.
pyman · 17h ago
This problem has been around since calculators were invented. But I do agree with you that it's a real issue: students not using their own brains, just relying on the artificial one.
That's why teaching methods need to evolve. Students should be asked to show the prompts they used to understand the problem and to document the process. That's how you teach reasoning and how to use the tool properly.
For example: I'm learning how to cook using ChatGPT, and with every prompt I learn something new and understand a bit more. But I still have to go to the kitchen and actually cook for my girlfriend. Same with students: they can use AI to learn, but they still need to step up to the board and solve a problem if they want to pass the exam.
old_man_cato · 17h ago
Man, you sound so young. That's not a slight or anything. It just puts into perspective who's on this site commenting so confidently.
old_man_cato · 17h ago
We don't know how to use them. That's the entire problem. They were released into the wild by corporations and no one knows how to use them in a way that doesn't harm us.
That's the alignment problem.
pyman · 17h ago
100%. It's mostly large corporations and governments doing the real experimenting at scale. We didn't vote for any of this but we're living with the consequences.
chiph2o · 16h ago
Not sure we've got Einstein, Curie, Tesla, and Newton in our pocket. Maybe just a very advanced search engine.
The "Illusion of Thinking" paper claims these models struggle with even medium-difficulty logic problems and can't transfer the logic of one problem to a similar new one.
belter · 16h ago
> I want the prompts you used to build it. The better your prompts, the better your grade.
You shouldn't have said that. They'll use a model to create the prompts... :-)
You should have said: I will break your code and you have to fix it in front of me.
joshstrange · 16h ago
> I will break your code and you have to fix it in front of me.
For the coding aspect of our interview test, I tell people they can use whatever they want to write it (LLM tools included) and they do all of the work on their own machine without us watching the screen in real time (to make it less stressful). After they’re finished, I ask them to walk me through the code, to make one addition, and fix a bug that almost everyone makes. This helps demonstrate their ability to explain code (even if they were assisted in writing it) and to modify existing code. This exercise is shockingly effective.
None of this is done as a “gotcha”. The change I ask them to make is incredibly minor, and in most cases it can be done completely additively to the existing code. As for the common bug, it comes from people “pattern matching” and misunderstanding part of the instructions, but it's also very easy to fix once you understand the mistake.
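To give a flavour of the kind of slip I mean, here's a tiny, purely hypothetical Python sketch (not our actual interview task): a threshold check gets copied without thinking about ordering, and the fix is a small, almost additive change.

    # Hypothetical example only -- not the real interview exercise.
    # Say the spec reads: "orders over 500 get 20% off, orders over 100 get 10% off".
    # A common pattern-matching slip: the copied first branch shadows the second,
    # so the 20% discount is unreachable.

    def apply_discount(total):
        if total > 100:          # always matches first for totals over 500,
            return total * 0.90  # so the 20% discount never applies
        elif total > 500:
            return total * 0.80
        return total

    # The fix doesn't disturb the rest of the code: check the larger threshold first.

    def apply_discount_fixed(total):
        if total > 500:
            return total * 0.80
        if total > 100:
            return total * 0.90
        return total

    print(apply_discount(600))        # 540.0 -- wrong, should be 480.0
    print(apply_discount_fixed(600))  # 480.0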
I’ve been very pleased with using this as a filter. In some cases, the results were surprising and, in others, they confirmed what I already thought/suspected.
pyman · 12h ago
Sounds like a good process, but I'm curious: are you looking for someone who understands code, or someone who knows how to solve problems and happens to understand code?
pyman · 12h ago
I think it depends on the age. The younger the student, the more important it is to learn how to solve problems rather than how to code. Those who choose to become software developers will eventually learn how to read code. For me, what matters early on is learning how to break a problem down, work out what's important, and think clearly. The rest is just memorising text.