RE: AI HALLUCINATIONS – WE ACTUALLY WANT THEM?
Correct. Though interestingly… most hallucinations are due to the probabilism inherent in the ‘temperature’ of a given LLM's selection of each term in a sequence.
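For the mechanics behind that claim, here is a minimal Python sketch of temperature-scaled sampling. The token names and logit values are invented for illustration, not any particular model's internals: logits are divided by the temperature T before the softmax, so higher T flattens the distribution and makes low-probability (more hallucination-prone, or more ideation-friendly) tokens likelier.

import math
import random

def sample_token(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from raw logits at the given temperature."""
    # Scale logits: T < 1 sharpens the distribution, T > 1 flattens it.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Softmax with max-subtraction for numerical stability.
    max_logit = max(scaled.values())
    weights = {tok: math.exp(v - max_logit) for tok, v in scaled.items()}
    # Draw one token in proportion to its weight.
    r = random.uniform(0.0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical next-token logits, for illustration only.
logits = {"the": 4.0, "a": 3.0, "purple": 0.5}
print(sample_token(logits, temperature=0.2))  # almost always "the"
print(sample_token(logits, temperature=2.0))  # "purple" appears far more often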
This would end if we took the opposite position (Musk, Tesla, driving), where we test against real-world models (operational possibility) instead of linguistic (analogical) probabilities.
However, just as there is a difference between the testimony (truth) we want from AIs and the ideation (ideas) we want from AIs, there is a difference between a hallucination that consists of unwanted error and one that consists of wanted ideation. 😉
In other words, we do want hallucination, just only when we want ideation.
Reply addressees: @RussellJohnston @templexaciounes
Source date: 2024-12-18 20:56:25 UTC
Original post: https://twitter.com/i/web/status/1869486902618841088
Replying to: https://twitter.com/i/web/status/1869484857954320668