A few months ago I started writing these micro-essays on Instagram called Pensées (French for "thoughts"), alongside other content. I assume some of my readers here don't use the platform, so I have decided to start sharing them here as well. Again, these are micro-essays of just about 100-300 words. I hope you enjoy reading them.
Anyone who has used ChatGPT or similar generative AI apps has probably experienced AI hallucination (a.k.a. confidently delivered false information).
Although improvements can be made with fine-tuning, iterative querying, the so-called retrieval-augmented generation (RAG), and what-not, hallucination remains a problem that some experts have argued cannot be completely eliminated.
Wittgenstein's ruler thought experiment on circular validation connects neatly to this problem of AI hallucinations.
The Wittgenstein ruler: “Unless you have confidence in the ruler’s reliability, if you use a ruler to measure a table you may also be using the table to measure the ruler.”
The AI-generated output cannot by itself prove the model's competence, just as a ruler of unknown accuracy cannot be verified simply by measuring things with it.
In order to verify the competence of an LLM, we need a separate, independent (and preferably accountable) source of information, say a flesh-and-blood human expert.
As such, given today's state of the art, the notion of fully autonomous AI performing jobs, especially jobs with zero margin for error, is fairly ludicrous.
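
For readers who like to see the ruler analogy in code, here is a minimal sketch. The `ask_model()` function and the reference questions are hypothetical stand-ins, not any real API; the point is only that the model is scored against an independently verified answer key (the "reliable ruler") rather than against its own output.

```python
# A minimal sketch of the "independent ruler" idea.
# ask_model() is a hypothetical stand-in for a real LLM API call.

def ask_model(question: str) -> str:
    # Canned answers for illustration; one is deliberately hallucinated.
    canned = {
        "What is the chemical symbol for gold?": "The chemical symbol for gold is Au.",
        "What year did Apollo 11 land on the Moon?": "Apollo 11 landed in 1968.",
    }
    return canned.get(question, "I am not sure.")

# The "reliable ruler": answers verified by an independent source
# (say, a flesh-and-blood human expert), not by the model itself.
reference_answers = {
    "What is the chemical symbol for gold?": "Au",
    "What year did Apollo 11 land on the Moon?": "1969",
}

def measure_model(reference: dict[str, str]) -> float:
    """Score the model against independently verified answers."""
    correct = sum(
        expected.lower() in ask_model(question).lower()
        for question, expected in reference.items()
    )
    return correct / len(reference)

if __name__ == "__main__":
    # Measuring the model with the independent ruler, not with itself.
    print(f"Accuracy against independent reference: {measure_model(reference_answers):.0%}")
```

Asking the model to grade its own answers instead would be using the table to measure the ruler.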