COUNTER PROPOSITION
I work with LLMs on the ‘hard questions’ every day. And they are, quite honestly, dumb as a rock: they become easily confused, lose the plot, and wander off in unpredictable directions with regularity.
The newest releases perform chain-of-thought passably well, but that is only an approximation of reasoning: the human brain reasons by using recursion for wayfinding between an auto-associated goal and a presumed current state.
Our software compensates for this weakness by performing the logic itself while using the AIs as glorified search, analysis, and consolidation engines.
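A minimal sketch of the division of labor described above, in Python. All names here (`llm_search`, `run_review`, the sample corpus) are hypothetical illustrations, not the author's actual software: deterministic code owns the control flow and validation, and the model is invoked only as a retrieval/consolidation tool.

```python
def llm_search(query: str) -> list[str]:
    """Stub standing in for an LLM call used purely as a
    search/consolidation engine (hypothetical example data)."""
    corpus = {
        "contract clauses": ["indemnity clause", "termination clause"],
        "risk factors": ["regulatory risk", "supply-chain risk"],
    }
    return corpus.get(query, [])


def run_review(topics: list[str]) -> dict[str, list[str]]:
    """Deterministic orchestrator: our code decides what to ask,
    in what order, and validates every answer before accepting it."""
    results: dict[str, list[str]] = {}
    for topic in topics:
        hits = llm_search(topic)
        # The logic layer rejects empty output instead of letting
        # the model "wander off" with an unvetted answer.
        if hits:
            results[topic] = sorted(hits)
    return results


print(run_review(["contract clauses", "risk factors", "unknown topic"]))
```

The key design choice is that the model never drives the loop: topics with no validated hits are simply dropped rather than passed downstream.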
I can’t see us handing over much control to these things once they are used in real-world scenarios with material risk; we had enough trouble fielding the previous generations of expert systems and machine intelligence.
Reply addressees: @JeffLadish
Source date (UTC): 2025-02-08 22:02:28 UTC
Original post: https://twitter.com/i/web/status/1888347692897865728
Replying to: https://twitter.com/i/web/status/1887935220915097630