Author: Curt Doolittle

  • btw. just tried a great example. but “nope”. Does this mean … I see. We need a

    btw. just tried a great example. but “nope”.
    Does this mean …
    I see. We need a measure of intelligence by subtlety of association. I'm asking it for at least my level of subtlety, yet I'm on the margin.
    Thank you. I’ll think about developing a test or scoring system. Appreciate…
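
    A minimal sketch of the kind of scoring harness this might become, assuming keyword-graded probes; the probe set, the 0-to-2 scale, the hint format, and the ask_model() stub are all hypothetical placeholders, not anything specified in the thread:

      # Hypothetical subtlety-of-association probe: pair two distantly
      # related concepts and check whether the model surfaces the link
      # unprompted (2), only with a hint (1), or not at all (0).

      def ask_model(prompt: str) -> str:
          """Stub for an LLM call; replace with a real client."""
          return ""

      PROBES = [
          {"a": "cathedral building", "b": "open-source software",
           "hint": "how contributors coordinate",
           "keyword": "decentralized"},
      ]

      def score_probe(probe: dict) -> int:
          # Keyword matching is a crude stand-in for human grading.
          ask = f"What connects {probe['a']} and {probe['b']}?"
          if probe["keyword"] in ask_model(ask).lower():
              return 2
          hinted = ask_model(f"{ask} Hint: think about {probe['hint']}.")
          return 1 if probe["keyword"] in hinted.lower() else 0

      print(sum(score_probe(p) for p in PROBES), "/", 2 * len(PROBES))

    The interesting design question would be calibrating the probes against a human baseline, per the "at least my level" remark above.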


    Source date (UTC): 2025-02-11 07:23:51 UTC

    Original post: https://twitter.com/i/web/status/1889213743911080219

    Reply addressees: @lumpenspace @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889205313531969675

  • You might have an argument if the scale of spending wasn’t negatively correlated

    You might have an argument if the scale of spending wasn't negatively correlated with the education of students.
    You can’t make education easier except through repetition and gamification in competition.
    You failed dramatically.
    You did this. You made your corrupt bed. Now live with…


    Source date (UTC): 2025-02-11 07:17:29 UTC

    Original post: https://twitter.com/i/web/status/1889212141691130244

    Reply addressees: @RolandForTexas

    Replying to: https://twitter.com/i/web/status/1889018912748007621

  • Will give it another shot in the morning. It's possible what I'm testing is a br

    Will give it another shot in the morning. It's possible what I'm testing is a bridge too far as yet. Thanks.


    Source date (UTC): 2025-02-11 06:51:26 UTC

    Original post: https://twitter.com/i/web/status/1889205586505687541

    Reply addressees: @lumpenspace @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889205313531969675

  • I have. I do. Long way to go yet. But as I said, as far as I know it should be p

    I have. I do. Long way to go yet. But as I said, as far as I know, it should be possible. In the medium and long term, the question is the one I posited. I just don't know.


    Source date (UTC): 2025-02-11 06:48:52 UTC

    Original post: https://twitter.com/i/web/status/1889204943791517921

    Reply addressees: @lumpenspace @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889204326205448571

  • She’s a nitwit. She’s certainly not capable of owning a practice. So, whatever m

    She’s a nitwit. She’s certainly not capable of owning a practice. So, whatever medical-mill she works for will very likely ‘correct’ her.


    Source date (UTC): 2025-02-11 06:46:14 UTC

    Original post: https://twitter.com/i/web/status/1889204278226497792

    Reply addressees: @azealiaslacewig

    Replying to: https://twitter.com/i/web/status/1889181804138987962

  • A baker has to make a cake, but a doctor doesn’t have to care for a patient? (I’

    A baker has to make a cake, but a doctor doesn’t have to care for a patient?

    (I’m sure someone will jump on this before we do, but I’d love the press coverage for filing that suit. 😉 ) https://twitter.com/RealJamesWoods/status/1889165106187284617

  • thought you’d get a chuckle. 😉

    – thought you’d get a chuckle. 😉


    Source date (UTC): 2025-02-11 06:43:43 UTC

    Original post: https://twitter.com/i/web/status/1889203644932976767

    Reply addressees: @RealJamesWoods @WerrellBradley

    Replying to: https://twitter.com/i/web/status/1889165106187284617

  • Man is the measure of all things perceivable by man. The question is, can that w

    Man is the measure of all things perceivable by man. The question is, can that which is perceivable be reduced to language, when even humans struggle with it? Yet what is available to auto-association in humans is far beyond what the LLMs have demonstrated.


    Source date (UTC): 2025-02-11 06:43:03 UTC

    Original post: https://twitter.com/i/web/status/1889203479404794061

    Reply addressees: @SCTempo @dwarkesh_sp

    Replying to: https://twitter.com/i/web/status/1889199816489783305

  • MORE ON ASSOCIATIVE CAPACITY IN LLMs I should add that while we are self impress

    MORE ON ASSOCIATIVE CAPACITY IN LLMs
    I should add that, while we are self-impressed by the successes in math (other than counting), in programming, and in search-synthesis composition of writing, we forget that we are discussing testable (closed) systems in mathematics and programming (relatively simple) and an untestable (unclosed) system in writing, while intuiting and reasoning have no closure except demonstration in the mind or in the world itself; both require embodiment, spatio-temporal models, and operational models.
    As such, I'd assume the operational world model of cars and androids would need to be combined with the linguistic model in order to produce the same, which is why world models (simulations) are so effective at training AIs, especially under time compression.
    The question then is the bridge between language and action. Can the LLM model evolve (emergence) using language as a system of representation and measurement of both embodiment and space-time? I can't see how, without information density as high as a simulation provides.
    That doesn't mean I'm right, however. 😉 All words may be measurements, but can LLMs evolve (emergence) a pseudo-language of their own to reflect the information density of simulations? You'd think so.
    Cheers
    Curt Doolittle
    NLI
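
    To make the closure point concrete, here is a minimal sketch in Python, with a hypothetical sort_desc() example of my own choosing: a claim about a program has a mechanical oracle, while a claim about prose does not.

      # Closure (testability): the claim "sort_desc returns its input in
      # descending order" can be verified without a human judge.

      def sort_desc(xs: list[int]) -> list[int]:
          return sorted(xs, reverse=True)

      def test_sort_desc() -> None:
          out = sort_desc([3, 1, 2])
          assert out == [3, 2, 1]                  # the oracle is mechanical
          assert sorted(out, reverse=True) == out  # order property holds

      test_sort_desc()
      # By contrast, "this essay is persuasive" has no such oracle; the only
      # test is demonstration in a mind or in the world, as argued above.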

    Reply addressees: @SCTempo @dwarkesh_sp


    Source date (UTC): 2025-02-11 06:40:45 UTC

    Original post: https://twitter.com/i/web/status/1889202897583493120

    Replying to: https://twitter.com/i/web/status/1889199816489783305

  • THE TRANSFORMER LIMITATION IN NEUROSCIENTIFIC PROSE: Correct. Or stated in neuro

    THE TRANSFORMER LIMITATION IN NEUROSCIENTIFIC PROSE:
    Correct. Or, stated in neuroscience terms (apologies if this is too dense to interpret easily): the prompt (language) invokes a set of relations (the text equivalent of episodic memories), but its network (auto-associative memory) of referents is of lower resolution than that of humans (facets, objects, spaces, places, locations, actors, generalizations, sequences, abstractions, causal relations, valences). It is limited to the referents in the language of the prompt (the word-world model), not the human intuitionistic model (the sense-perception-embodiment world model), in which abstractions (first principles, logical associations) from the entire corpus of extant and yet unstated or unknown abstractions (causal relations, valences) and first principles (logical relations in the sense-perception world model) are associated at every level, from neural microcolumns to regions to networks to a continuous stream of network adaptations.
    As such, the world model of language (the word-world model) is one of low precision: it lacks the embodiment, spatio-temporal, and operational world models (precision) necessary for pattern identification (logical association) of that which is yet UNSTATED in language in sufficient density to cause association with the model the prompt produces in the LLM.
    I work on this issue, and this is why the prompt must include the logical relation you're asking the LLM to consider: it cannot make that connection on its own.
    On one hand, I see this as a scaling problem, meaning one of the necessity of embodiment, spatio-temporal, and operational abstractions in the model, and of attention that is recursive (wayfinding), in order to cover the field of associations that the brain's hierarchical temporal memory performs so easily.
    On the other hand, whether this problem is solvable within the LLM model by increases in the emergence we've seen of late is hard for me to predict. In the meantime we are left with prompts and traditional pseudocode or software (chain of thought) to control what it cannot control on its own, as it is otherwise still limited to the equivalent of a synthesizing search engine.
    Cheers
    Curt Doolittle
    NLI
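
    A toy illustration of the two claims above: that the logical relation must be supplied in the prompt, and that the chain of thought is sequenced by outside software. The ask_model() stub, the example question, and the step wording are illustrative assumptions, not a real pipeline.

      # Outer loop ("traditional pseudocode or software") that wayfinds for
      # the model: each step states the relation it cannot find alone.

      def ask_model(prompt: str) -> str:
          """Stub for an LLM call; replace with a real client."""
          return ""

      # Bare prompt: leaves the unstated relation for the model to find.
      bare = ask_model("Does policy X reduce outcome Y?")

      # Scaffolded prompts: the causal relation is stated explicitly, and
      # the sequence of steps is controlled from outside the model.
      steps = [
          "Assume X affects Y only through mechanism M. State M precisely.",
          "List the evidence for and against M operating in this case.",
          "Given only that evidence, does X reduce Y? Answer and justify.",
      ]
      context = ""
      for step in steps:
          context += "\n" + ask_model(context + "\n" + step)
      # The contrast between `bare` and `context` is the post's point: the
      # loop performs the recursive attention (wayfinding) the model lacks.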

    Reply addressees: @SCTempo @dwarkesh_sp


    Source date (UTC): 2025-02-11 06:28:30 UTC

    Original post: https://twitter.com/i/web/status/1889199816301064192

    Replying to: https://twitter.com/i/web/status/1888621235548094886