AI’s LIMITED ABILITY TO “REASON”.
1) Symbols are abstractions of language. Each symbol requires a substantial investment in self-training by repetition.
2) Language itself does in fact follow a data structure: each word consists of a set of dimensions relating it to all other words by some distance or other.
3) As such, as in many things, mathematical reducibility is a smaller set carrying more inference than computational or linguistic reducibility, and is therefore more prone to errors of inference (probability).
4) We are following the Pareto distribution of all knowledge production: the progress made so far covers roughly 80% of the problem, but the majority of the work needed for our desired utility (reduction of error bars) requires many multiples of the incremental investment made to date.
5) This is why we must identify 'holes' in reasoning and produce training in specific fields that fills those holes, thereby producing the inference and the reduction of error that is desired.
6) At present these AIs are exceptional at both the breadth of knowledge available and at summarizing and generalizing from that knowledge. However, just as the problem of induction has been well understood for hundreds of years, the path from generalization to deduction to induction is a long one that few humans are able to follow even given years of practice.
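The word-as-dimensions picture in point 2 can be sketched with toy vectors and a distance function. This is a minimal illustration, not any particular model: the three-dimensional "embeddings" and their values are invented for the example.

```python
import math

def cosine_distance(u, v):
    """Distance between two word vectors: 1 minus cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Hypothetical 3-dimensional word vectors (illustrative values only).
king = [0.9, 0.8, 0.1]
queen = [0.9, 0.7, 0.2]
banana = [0.1, 0.2, 0.9]

# Related words sit closer together than unrelated ones.
print(cosine_distance(king, queen) < cosine_distance(king, banana))  # True
```

Real systems use hundreds or thousands of dimensions learned from data, but the structure is the same: every word is positioned relative to every other word by such distances.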
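The diminishing-returns claim in point 4 can be made concrete with a toy calculation. The 4x cost factor per halving of error is an assumed number chosen for illustration, not a measured one:

```python
# Assumption (hypothetical): each additional halving of the error bar
# costs 4x the previous increment of investment.
cost, error, total = 1.0, 1.0, 0.0
for step in range(5):
    total += cost   # pay for this increment of work
    error /= 2      # error bar halves
    cost *= 4       # the next halving costs 4x more

# Five halvings shrink the error to 1/32, but cost 341x the first increment.
print(error, total)  # 0.03125 341.0
```

Whatever the true cost factor is, the shape is the same: most of the visible progress comes from the early cheap increments, while the remaining error reduction absorbs multiples of everything spent so far.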
Cheers
CD
Reply addressees: @RokoMijic
Source date (UTC): 2024-08-17 22:42:58 UTC
Original post: https://twitter.com/i/web/status/1824940007439908864
Replying to: https://twitter.com/i/web/status/1824855639723524265