Category: AI, Computation, and Technology

  • I have. I do. Long way to go yet. But as I said, as far as I know it should be p

    I have. I do. Long way to go yet. But as I said, as far as I know it should be possible. In the medium and long term, the question is the one I posited. I just don’t know.


    Source date (UTC): 2025-02-11 06:48:52 UTC

    Original post: https://twitter.com/i/web/status/1889204943791517921

    Reply addressees: @lumpenspace @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889204326205448571

  • MORE ON ASSOCIATIVE CAPACITY IN LLMs I should add that while we are self impress

    MORE ON ASSOCIATIVE CAPACITY IN LLMs
    I should add that while we are self-impressed by the successes in math (other than counting), in programming, and in the search-synthesis composition of writing, we forget that mathematics and programming are testable (closed) systems, and relatively simple ones, while writing is an untestable (unclosed) system. Intuiting and reasoning, by contrast, have no closure except demonstration in the mind or in the world itself, and both require embodiment, spatio-temporal models, and operational models.
    As such, I’d assume the operational world model of cars and androids would need to be combined with the linguistic model in order to produce the same, which is why world modeling (simulation) is so effective at training AIs, especially under time compression.
    The question then is the bridge between language and action. Can the LLM model evolve (via emergence) using language as a system of representation and measurement of both embodiment and space-time? I can’t see how without information density as high as a simulation provides.
    That doesn’t mean I’m right, however. 😉 All words may be measurements, but can LLMs evolve (via emergence) a pseudo-language of their own that reflects the information density of simulations? You’d think so.
    Cheers
    Curt Doolittle
    NLI
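    [Editor's note] The distinction above between closed (testable) and unclosed (untestable) systems can be made concrete with a minimal sketch. This is a hypothetical illustration, not the author's software: for code, a mechanical oracle (unit tests) closes the verification loop, so candidate outputs can be accepted or rejected automatically; prose has no such oracle, so the loop below has no analogue for writing. All names here (`is_correct`, `search_closed_system`) are invented for illustration.

```python
# Sketch: why programming is a "closed" (testable) system.
# A candidate solution is checked mechanically against an oracle;
# writing has no equivalent oracle, so this loop cannot be built for it.

def is_correct(candidate_fn) -> bool:
    """Oracle: unit tests close the loop for code."""
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    return all(candidate_fn(*args) == want for args, want in cases)

def search_closed_system(candidates):
    """Accept the first candidate the oracle verifies; reject the rest."""
    for fn in candidates:
        if is_correct(fn):
            return fn
    return None

# Two hypothetical model outputs: one wrong, one right.
bad = lambda a, b: a - b
good = lambda a, b: a + b
best = search_closed_system([bad, good])  # the oracle selects `good`
```

    The point of the sketch: training signal in a closed system is cheap and dense (pass/fail per attempt), which is one reading of why simulations, which close the loop for embodied action, are effective for training.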

    Reply addressees: @SCTempo @dwarkesh_sp


    Source date (UTC): 2025-02-11 06:40:45 UTC

    Original post: https://twitter.com/i/web/status/1889202897583493120

    Replying to: https://twitter.com/i/web/status/1889199816489783305




  • THE TRANSFORMER LIMITATION IN NEUROSCIENTIFIC PROSE: Correct. Or stated in neuro

    THE TRANSFORMER LIMITATION IN NEUROSCIENTIFIC PROSE:
    Correct. Or stated in neuroscience (apologies if this is too dense to easily interpret): the prompt (language) invokes a set of relations (the text equivalent of episodic memories), but its network (auto-associative memory) of referents is of lower resolution than that of humans (facets, objects, spaces, places, locations, actors, generalizations, sequences, abstractions, causal relations, valences), and is limited to those in the language of the prompt (word-world model) rather than the human intuitionistic model (sense-perception-embodiment world model), in which abstractions (first principles, logical associations) drawn from the entire corpus of extant and as-yet unstated or unknown abstractions (causal relations, valences) and first principles (logical relations in the sense-perception world model) are associated at every level, from neural microcolumns to regions to networks to a continuous stream of network adaptations.
    As such, the world model of language (word-world model) is one of low precision: it lacks the embodiment, spatio-temporal, and operational world models (precision) necessary for pattern identification (logical association) of that which is yet UNSTATED in language, in sufficient density to cause association with the model (word-world model) produced in the LLM by the prompt.
    I work on this issue, and this is why the prompt must include the logical relation you’re asking the LLM to consider: it cannot make that connection alone.
    Now, I see this as a scaling problem on one hand, meaning one of the necessity of embodiment, spatio-temporal, and operational abstractions in the model, and of attention that is recursive (wayfinding) in order to cover the field of associations that the hierarchical temporal memory of the brain so easily covers.
    On the other hand, whether this problem is solvable within the LLM model by further increases in the emergence we’ve seen of late is hard for me to predict. In the meantime we are left with prompts and traditional pseudocode or software (chain of thought) to control what it cannot control on its own, as it’s otherwise limited to the equivalent of a synthesizing search engine.
    Cheers
    Curt Doolittle
    NLI
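    [Editor's note] The claim that "the prompt must include the logical relation you're asking the LLM to consider" can be sketched as a prompt-construction step. This is a hypothetical illustration, not the author's system: `build_prompt` and the example relations are invented, and no real model API is called. The idea shown is simply that the bridging relations are supplied explicitly rather than left for the model to associate unaided.

```python
# Sketch: explicitly supplying the bridging relations in the prompt,
# rather than expecting the model to make the association on its own.

def build_prompt(question: str, bridging_relations: list[str]) -> str:
    """Prepend the logical relations the model cannot infer unaided."""
    stated = "\n".join(f"- {r}" for r in bridging_relations)
    return (
        "Use ONLY the following relations when reasoning:\n"
        f"{stated}\n\n"
        f"Question: {question}\n"
        "Answer step by step, citing a listed relation at each step."
    )

# Hypothetical operational and spatial relations the word-world model lacks.
prompt = build_prompt(
    "Can the drone reach the depot before its battery drains?",
    [
        "range = battery_capacity / power_draw * speed",
        "reachable iff distance_to_depot <= range",
    ],
)
```

    The resulting string would then be sent to whatever model is in use; the design point is that the logical scaffolding lives in the prompt, not in the model's associations.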

    Reply addressees: @SCTempo @dwarkesh_sp


    Source date (UTC): 2025-02-11 06:28:30 UTC

    Original post: https://twitter.com/i/web/status/1889199816301064192

    Replying to: https://twitter.com/i/web/status/1888621235548094886

  • Civ AI Risk

    Civ AI Risk https://t.co/amQmKh6nro


    Source date (UTC): 2025-02-09 18:49:17 UTC

    Original post: https://twitter.com/i/web/status/1888661464438960417

  • COUNTER PROPOSITION I work with the LLM’s on the ‘hard questions’ every day. And

    COUNTER PROPOSITION
    I work with LLMs on the ‘hard questions’ every day. And they are, quite honestly, dumb as a rock: they become easily confused, lose the plot, and wander off in unpredictable directions with regularity.
    The newest releases succeed reasonably well at chain of thought, a fair approximation of reasoning: the human brain reasons by using recursion for wayfinding between an auto-associated goal and the presumed state.
    Our software relieves this weakness by performing the logic itself while using the AIs as glorified search, analysis, and consolidation engines.
    I can’t see us handing over much control to these things once they are used in real-world scenarios with material risk – we have enough trouble training the previous generations of expert systems and machine intelligence.
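    [Editor's note] The division of labor described above ("performing the logic while using the AIs as glorified search, analysis, and consolidation engines") can be sketched as an orchestration pattern. This is a hypothetical sketch, not the author's software: deterministic code owns retrieval, the loop, and composition, and the model (stubbed here as `stub_llm`) is called only for narrow subtasks.

```python
# Sketch: deterministic code owns control flow and logic; the model is
# a subroutine for search/analysis/consolidation, never the controller.
from typing import Callable

def stub_llm(task: str, text: str) -> str:
    """Stand-in for a model call; a real system would call an API here."""
    return f"[{task}] {text[:40]}"

def answer(question: str, sources: list[str],
           llm: Callable[[str, str], str] = stub_llm) -> str:
    # 1. The logic layer decides what to retrieve (not the model).
    words = question.lower().split()
    relevant = [s for s in sources if any(w in s for w in words)]
    # 2. The model consolidates each hit; the loop structure is ours.
    notes = [llm("summarize", s) for s in relevant]
    # 3. The logic layer composes the final answer deterministically.
    return "\n".join(notes) if notes else "no sources matched"

out = answer("battery range", ["battery capacity is 5Ah", "weather is sunny"])
```

    In this arrangement the unpredictable component can wander, but it cannot lose the plot: the plot is held by ordinary software.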

    Reply addressees: @JeffLadish


    Source date (UTC): 2025-02-08 22:02:28 UTC

    Original post: https://twitter.com/i/web/status/1888347692897865728

    Replying to: https://twitter.com/i/web/status/1887935220915097630

  • RT @pmarca: The reality is the secrets are out. Everyone knows how to code a tra

    RT @pmarca: The reality is the secrets are out. Everyone knows how to code a transformer, how to RLHF, how to use reinforcement learning fo…


    Source date (UTC): 2025-02-06 19:54:35 UTC

    Original post: https://twitter.com/i/web/status/1887590735530369407

  • RT @Feni__Sam: @ScottAdamsSays Grok helps me understand the layers going on here

    RT @Feni__Sam: @ScottAdamsSays Grok helps me understand the layers going on here.

    Here’s an analysis of the layers of meaning and insinua…


    Source date (UTC): 2025-02-06 03:03:40 UTC

    Original post: https://twitter.com/i/web/status/1887336328666481061

  • No. It makes martin furious. The flip side is that the AI (at least chatgpt) app

    No. It makes Martin furious. The flip side is that the AI (at least ChatGPT) appears to convey my ideas more accessibly.


    Source date (UTC): 2025-02-04 03:37:19 UTC

    Original post: https://twitter.com/i/web/status/1886620019733488123

    Reply addressees: @Hail__To_You

    Replying to: https://twitter.com/i/web/status/1886611821903351972

  • RT @curtdoolittle: @CloudByter Thinking… IT is not a competitive advantage but a

    RT @curtdoolittle: @CloudByter Thinking… IT is not a competitive advantage but a defence against competitive advantage – for the simple rea…


    Source date (UTC): 2025-02-01 23:48:53 UTC

    Original post: https://twitter.com/i/web/status/1885837757685318108

  • Thinking… IT is not a competitive advantage but a defence against competitive ad

    Thinking… IT is not a competitive advantage but a defence against competitive advantage – for the simple reason that it is easily replicated.


    Source date (UTC): 2025-02-01 23:48:34 UTC

    Original post: https://twitter.com/i/web/status/1885837677846770064

    Reply addressees: @CloudByter

    Replying to: https://twitter.com/i/web/status/1885827431585493312