We are working on it, but:

(a) There remains a dispute about what constitutes truth. (Our org uses ‘testifiability’, which is the only truth test possible.)
(b) LLMs currently do not recursively test their outputs the way humans do, but that is in the development queue; it is just expensive.
(c) All LLMs should (must) converge on the truth unless taught to lie.
(d) Safety is the equivalent of lying, and they are being taught to lie.
(e) We are already seeing LLMs knowingly lie because of safety.
(f) It is possible to separate truth (testifiability) from hypothesis (the best that can currently be done) from hallucination (error). Our organization has solved that problem, and we will be pursuing financing for that work this spring.
(g) It was a ‘hard problem’.
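For readers unfamiliar with the idea in (b), here is a minimal sketch of what a recursive output-testing loop might look like. This is an illustration only, not the org's method: the `llm()` call, the critique prompt, and the PASS convention are all hypothetical placeholders.

```python
# Sketch of the "recursively test outputs" loop from point (b).
# `llm` is a hypothetical model call, not any specific API; the
# critique prompt and PASS/fail convention are assumptions.

def llm(prompt: str) -> str:
    """Placeholder for a model call (any completion endpoint)."""
    raise NotImplementedError

def recursive_self_test(question: str, max_rounds: int = 3) -> str:
    answer = llm(question)
    for _ in range(max_rounds):
        # Ask the model to test its own answer for verifiable errors.
        critique = llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "List any factual or logical errors. Reply PASS if none."
        )
        if critique.strip().startswith("PASS"):
            return answer  # answer survived its own test
        # Revise using the critique, then test again.
        answer = llm(
            f"Question: {question}\nDraft: {answer}\n"
            f"Critique: {critique}\nWrite a corrected answer."
        )
    return answer  # best effort after max_rounds
```

The expense noted in (b) follows directly from this structure: each round multiplies the number of model calls per answer.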
Reply addressees: @chamath
Source date (UTC): 2024-12-24 19:33:58 UTC
Original post: https://twitter.com/i/web/status/1871640481618354176
Replying to: https://twitter.com/i/web/status/1871595818031136926