Indeed. I've been thinking about this more and more lately because of all the cases LLMs are now being applied to.
That was big news, and the impact was sufficiently clear. Yet now people are applying LLMs to OCR, speech recognition, and translation without much thought about the impact.
The Xerox case was significant because everything looked right; it "just" randomly swapped numbers. The same applies to LLMs: everything looks right, but the potential for silent changes is that much larger.
Yes, LLMs are great, but even if they make catastrophic unchecked mistakes only 0.001% of the time, with how much they are used that can really present a problem.
I was mainly prompted to post this again because of this article: https://www.osnews.com/story/142469/that-time-ai-translation...