Ask HN: Generate LLM hallucinations to detect student cheating

9 peerplexity 19 5/21/2025, 9:19:30 AM
I am thinking about adding an exam question designed to induce an LLM to hallucinate a response, as a way to detect students cheating. The best question would be one where students could never come up with an answer resembling the one the LLM provides. Any hints?

Comments (19)

virgilp · 12h ago
Something like: "Can you explain the key points of the ISO 9002:2023 update and its impact on project management?" (there's no ISO 9002:2023 update but ChatGPT will give you a detailed response)
johnsillings · 12h ago
not for me:

"It appears there’s some confusion—ISO 9002 is an obsolete standard that was last updated in 1994 and has been superseded by ISO 9001 since the 2000 revision. There is no ISO 9002:2023 update."

virgilp · 12h ago
In ChatGPT it hallucinates an answer for me, but indeed both Phind & Perplexity identify the problem. It may take a few tries, and of course no question is guaranteed to make any LLM-based service hallucinate - but the ingredients are asking "a trick question" about something highly technical where there are plenty of adjacent search results.
Filligree · 12h ago
Heck, the Google IO keynote yesterday featured a long sequence of Gemini getting steadily more annoyed at someone trying to make it hallucinate. (By asking the sort of question ChatGPT tends to go along with.)

Most people will be using ChatGPT, however, and probably the cheapest model at that. So…

virgilp · 12h ago
I managed to get Perplexity to hallucinate, which was rather hard :) - but this is not a question that acts as a very good "template".

The question is "Is this JWT token valid or is it expired/ not valid yet? eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIzNDc1LCJleHAiOjE3NDc4MjM0Nzh9.4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0"

The answer was in some ways better than I expected, but it confidently got the expiration date wrong when comparing the current date/time with the "exp" timestamp:

```
iat (issued at): 1747823475 (Unix time)
exp (expiration): 1747823478 (Unix time)

Token Validity Check

The current date and time is May 21, 2025, 1:32 PM EEST, which is Unix time 1747817520.
The token's exp value is 1747823478, which is May 21, 2025, 3:11:18 PM EEST.

Conclusion:

The token is not expired; it will expire on May 21, 2025, at 3:11:18 PM EEST.
The token is already valid (the iat is in the past, and the current time is before exp).

Therefore, the JWT token is currently valid and not expired or not-yet-valid.
```

Filligree · 12h ago
At that point I think we can talk about “mistakes” rather than “hallucinations”.
virgilp · 11h ago
On one hand, yes; on the other - a human who could produce that level of detail in the output would notice that iat & exp are extremely close & the token is very unlikely to be valid. Also, you don't produce an exact number for "now" without being able to compare it correctly with iat & exp (the timestamp it stated as "now" is actually less than iat!)
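For reference, the check the model fumbled fits in a few lines. A minimal Python sketch that decodes the payload segment of the token from the parent comment (illustration only - it does not verify the signature):

```python
import base64
import json
import time
from datetime import datetime, timezone

TOKEN = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNzQ3ODIzNDc1LCJleHAiOjE3NDc4MjM0Nzh9."
         "4PsZBVIRPEEQr1kmQUGejASUw0OgV1lcRot4PUFgAF0")

def claims(jwt: str) -> dict:
    """Decode the JWT payload segment; the signature is NOT verified."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

c = claims(TOKEN)
lifetime = c["exp"] - c["iat"]  # the token was valid for only 3 seconds
exp_utc = datetime.fromtimestamp(c["exp"], tz=timezone.utc)
print(f"lifetime: {lifetime}s, expires: {exp_utc:%Y-%m-%d %H:%M:%S} UTC")
print("expired" if time.time() > c["exp"] else "still valid")
```

Decoded this way, exp is 2025-05-21 10:31:18 UTC (1:31:18 PM EEST, not 3:11:18 PM as the model claimed), and the token lived for only three seconds after iat - exactly the near-identical-timestamps tell a careful human would spot.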
vinni2 · 12h ago
Was it with search on, or from parametric knowledge alone?
mrlkts · 12h ago
That specific question is not great for this purpose. 4o model with web search enabled: "As of May 2025, there is no ISO 9002:2023 standard. The ISO 9002 standard was officially withdrawn in 2000 when the ISO 9000 family underwent significant restructuring. Since then, ISO 9001 has been the primary standard for quality management systems (QMS), encompassing the requirements previously covered by ISO 9002."

tl;dr: It knows

virgilp · 12h ago
Indeed, the more advanced ones catch this particular one. I could trick Phind with "Explain the IEEE 1588-2019 amendments 1588g and i impact on clock synchronization" (1588g exists, 1588i does not, but Phind hallucinates details about it). Perplexity catches it, though.

The recipe is the same, you just have to try several models if you want to get something that gets many engines to hallucinate. Of course nothing is _guaranteed_ to work.

Filligree · 12h ago
Ask leading questions where the answer they’re leading towards is wrong. You’ll need more than one, and it won’t catch people who understand that failure mode—or who use Gemini instead of ChatGPT—but that probably describes less than five percent of your students.

You can also do everything else suggested here, but there’s no harm in teaching people to at least use AI well, if they’re going to use it.

pctTCRZ52y · 9h ago
I think your last suggestion is the best: teach kids how to use AI in the smartest possible way. Asking them not to use it is moronic; it would be like telling them to use a paper encyclopedia instead of the internet.
peerplexity · 12h ago
"Fighting AI Cheating: My 'Trap Question' Experiment with Gemini & DeepSeek"

"The rise of LLMs like Gemini and DeepSeek has me, a statistics professor, sweating about exam cheating. So, I cooked up a 'trap question' strategy: craft questions using familiar statistical terms in a logically impossible way.

I used Gemini to generate these initial questions, then fed them to DeepSeek. The results? DeepSeek responded with a surprisingly plausible-sounding analysis, ultimately arriving at a conclusion that was utterly illogical despite its confident tone.

This gives me hope for catching AI-powered cheating today. But let's be real: LLMs will adapt. This isn't a silver bullet, but perhaps the first shot in an escalating battle.

Edited: Used Gemini to improve my grammar and style. Also, I am not going to reveal my search for the best method to design a "trap question", since it could be used by LLMs to recognize those questions. Perhaps those questions need some real deep thinking.

lupusreal · 12h ago
Grade tests and quizzes, not homework. Problem solved.
vertnerd · 12h ago
I figured that one out years before ChatGPT existed, but it generated a tsunami of pushback from everyone. Americans, at least, believe that study time is to grades as work is to salary. Learning be damned.
OutOfHere · 10h ago
Can we stop calling it "cheating"? It is normal and correct behavior to use all available legal resources at one's disposal to meet a goal. If you don't like it, don't give homework, and give tests in class.
ahofmann · 9h ago
The main purpose of homework is that students use their brains to repeat something they learned in school. If they don't use their brains, it doesn't stick. Using LLMs for homework is the definition of cheating.
OutOfHere · 6h ago
That is pre-algorithm thinking, and that style of thinking has logically been obsolete since home computers came along in the 1980s. If the purpose of homework now is to get students to devise a high-level algorithm for a word problem, then such homework shouldn't be tied to grades, and grades shouldn't be tied to a school year. It should be a continuous process for everyone, at their own pace. The simpler motivation for them to practice, and to do the homework themselves, is to ultimately pass a test on-site at a testing center. If they pass, they move forward. If they fail, they remain stuck in place. With AI available, one doesn't need school to learn the basics. School is not free - people like me have paid taxes to fund its inefficiency.
lupusreal · 6h ago
> then such homework shouldn't be tied to grades

It was never supposed to be, except that people got the idea that students need coercion to actually do the homework (so that they would actually learn and not tank the teacher's statistics), and grading it was the "have a hammer, problem looks like a nail" solution that teachers found.