Category: Uncategorized

  • Neurons are profoundly more complex in their abilities and profoundly numerous,

    Neurons are profoundly more complex in their abilities, profoundly numerous, and profoundly fast in what they can ‘calculate’ in massive parallelism in real time.
    LLMs don’t start with your senses and work up to words; they start with words and work from there.
    And they aren’t conscious in the sense that humans experience and value the experience of this continuous stream of auto-associative dreams-to-reason that we do.
    But, otherwise, the artificial neural networks in LLMs are fairly close analogies to the neural networks in living animals.

    Reply addressees: @s_everson @xriskology


    Source date (UTC): 2024-08-17 23:18:45 UTC

    Original post: https://twitter.com/i/web/status/1824949012870418432

    Replying to: https://twitter.com/i/web/status/1824947552091025505

  • There is zero chance you or anyone else could debate me on this topic. That is w

    There is zero chance you or anyone else could debate me on this topic. That is why you resort to name-calling.
    Sorry. I’m correct. It is what it is.
    Cheers


    Source date (UTC): 2024-08-17 23:12:53 UTC

    Original post: https://twitter.com/i/web/status/1824947537746272335

    Reply addressees: @theyakadude

    Replying to: https://twitter.com/i/web/status/1824916529940504813

  • Q: “It’s it really reasoning or with each prompt…” It doesn’t remember just th

    Q: “Is it really reasoning or with each prompt…”
    It doesn’t remember just the prompt but the combination of prompts and outputs during the chat (its context window).
    So you’re building a network of contexts that gradually narrows the statistical distribution of outputs (increasing relevance) until it runs out of context memory, or gets sidetracked. That’s why you must produce prompts that not only increase the contextual precision but also correct its past inferences.
    It’s no different from speaking to someone with whom you’re unfamiliar and trying to get them to understand what might be unfamiliar to them. You’re trying to narrow the parameters (context) sufficiently to produce unambiguity so that agreement (understanding), deduction, induction, and even abduction (guessing) can occur.
    Same thing. 😉
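    The context-window mechanics described above can be sketched in a few lines. A minimal sketch, assuming a hypothetical `fake_llm` stand-in for any chat-completion API (real chat APIs follow this same messages-list shape):

```python
# Minimal sketch of how a chat "context window" accumulates: each turn
# appends both the prompt AND the output, so later generations are
# conditioned on the whole running conversation.

MAX_CONTEXT_CHARS = 200  # stand-in for the model's real token limit

def fake_llm(messages):
    # Placeholder: a real model conditions its reply on ALL prior messages.
    return f"reply-to-{len(messages)}-messages"

def chat_turn(history, prompt):
    """Append the prompt, generate conditioned on the whole history,
    then append the output too - both become context for later turns."""
    history.append({"role": "user", "content": prompt})
    reply = fake_llm(history)
    history.append({"role": "assistant", "content": reply})
    # When accumulated context exceeds the window, the oldest turns drop
    # off ("runs out of context memory").
    while sum(len(m["content"]) for m in history) > MAX_CONTEXT_CHARS:
        history.pop(0)
    return reply

history = []
chat_turn(history, "Define wayfinding.")
chat_turn(history, "Now correct your earlier inference: I meant navigation.")
# Both prompts and outputs are retained as context for the next turn.
```

    This is why a corrective prompt works: the correction itself becomes part of the conditioning context for every subsequent output.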

    Reply addressees: @Nunnie3001 @xriskology


    Source date (UTC): 2024-08-17 23:11:46 UTC

    Original post: https://twitter.com/i/web/status/1824947257005023232

    Replying to: https://twitter.com/i/web/status/1824940236540944877

  • “Reasoning is wayfinding.” All human cognition evolved from coordination of two

    “Reasoning is wayfinding.”
    All human cognition evolved from the coordination of two hemispheres for the purpose of movement in pursuit of resources (or escape from harm). Once one has dependent offspring, a nest, and a social group, one requires wayfinding.
    Wayfinding requires a context or episode (index), a need or want (motive), auto-association (goal), and relations (paths). That’s all that’s required to organize a neural network to continuously learn.
    Everything in the brain and mind is an evolution of the minimum capabilities to navigate.
    Evolution in ability is just ‘more neurons’ in a hierarchical, recursive organization of increasing abstraction over increasingly complex information – but performing the same very simple (even trivial) operations.
    Stimulation > nervous system > memory > categorization (perception) > episodic memory > auto-association > wayfinding > [repeat]
    It’s not even complicated. There is just a lot of it going on in a continuously updating stream of massively parallel competition for survival of a ‘route’ that achieves an end.
    Consciousness is just the memory of the past few seconds of the surviving memories that result from that adversarial process, and how we predict it will affect our homeostasis (state).
    CD
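    The wayfinding ingredients listed above (index, motive, goal, paths) map directly onto plain graph search. A toy sketch using breadth-first search, with made-up episode names for illustration:

```python
from collections import deque

# Relations (paths) between episodes, as a tiny directed graph.
paths = {
    "hungry-at-nest": ["forest-edge", "riverbank"],
    "forest-edge": ["berry-patch"],
    "riverbank": ["fish-shoal"],
    "berry-patch": [],
    "fish-shoal": [],
}

def wayfind(index, goal):
    """BFS from the current episode (index) to a desired episode (goal);
    the surviving 'route' is the one that achieves the end."""
    queue = deque([[index]])
    seen = {index}
    while queue:
        route = queue.popleft()
        if route[-1] == goal:
            return route
        for nxt in paths.get(route[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None  # no route achieves the end

route = wayfind("hungry-at-nest", "berry-patch")
# route == ['hungry-at-nest', 'forest-edge', 'berry-patch']
```

    Each queued partial route is one competitor; routes that reach the goal survive, the rest are discarded — a serial caricature of what the text describes as massively parallel.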

    Reply addressees: @s_everson @xriskology


    Source date (UTC): 2024-08-17 23:06:05 UTC

    Original post: https://twitter.com/i/web/status/1824945826822864896

    Replying to: https://twitter.com/i/web/status/1824941509147340909

  • Imprecise presumption. At present there is an inversion between the scope of inf

    Imprecise presumption. At present there is an inversion between the scope of information available and the number of recursive adversarial processes that self-eliminate error.
    This is possible to overcome by a division of knowledge, more layers of attention, more parallel attempts, and more adversarial competition between them for identity consistency, correspondence, operational possibility, and even rational choice and reciprocity.
    In other words, we’re climbing a Pareto power curve where each incremental step in decreasing error bars is qualitatively more difficult.
    The ‘simple version’ is that LLMs solve the input and output problem, but they have quite far to go in imitating the brain’s massively parallel competition at all levels: from facets to objects to spaces to borders, to places to locations, to episodes, to predictions from sets of episodes, to wayfinding between the present episode and a desired episode, to adversarial competition between those routes.
    What’s been amazing is just how great the tools are at generalization. That’s the easy part. The hard part is analysis by adversarial competition. Which means we probably have to convert to neuromorphic hardware (many tiny cores) updating continuously from large collections of traditional cores by costly updates we call ‘training’.
    Cheers
    CD
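    The “more parallel attempts, and more adversarial competition between them” step above resembles best-of-N sampling with a critic. A minimal sketch, where `generate` and `score` are hypothetical stand-ins for a sampler and a learned verifier:

```python
import random

def generate(prompt, seed):
    """Hypothetical sampler: one of N parallel attempts at an answer."""
    rng = random.Random(seed)
    return {"answer": f"candidate-{seed}", "confidence": rng.random()}

def score(prompt, candidate):
    """Hypothetical critic: ranks candidates for consistency and
    correspondence. Here it just reads the stored confidence; a real
    verifier would be another model or a battery of logical checks."""
    return candidate["confidence"]

def best_of_n(prompt, n=8):
    # Parallel attempts compete; only the highest-scoring 'route' survives.
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

winner = best_of_n("navigate from A to B", n=8)
```

    The expensive part is exactly what the text predicts: the generator generalizes cheaply, while the adversarial scoring is where most of the remaining work lives.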

    Reply addressees: @Danil_KV


    Source date (UTC): 2024-08-17 22:54:39 UTC

    Original post: https://twitter.com/i/web/status/1824942950452719616

    Replying to: https://twitter.com/i/web/status/1824905597210550623

  • They are using coding as a test because (a) the market of early adopters favors

    They are using coding as a test because (a) the market of early adopters favors it, and it produces the highest early-adopter returns. (b) The vocabulary and grammar of coding are operational to begin with, so unlike ordinary language, where we must deduce or infer the operations (actions) necessary for each change in state, in programming the declarations are openly stated. (c) As such, ordinary language requires logic, which in turn requires an understanding of embodiment (our physical and mental possibilities).
    In other words, failure to compile is easier to detect than failure to understand, simply because of the limited scope of premises and references, data types, and set of possible operations. Ergo, it’s pre-calculated reason, so to speak.
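    The point that failure to compile is easier to detect than failure to understand can be made mechanical. A sketch using Python’s own parser as the checker:

```python
import ast

def compiles(source: str) -> bool:
    """A program's validity is mechanically checkable: the grammar and the
    set of legal operations are closed, so failure is detectable up front."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# A syntactically valid program is accepted...
ok = compiles("x = 1 + 2")
# ...while a malformed one fails immediately, no judgment call required.
bad = compiles("x = 1 +")

# There is no analogous checker for ordinary language: whether a sentence
# was 'understood' by the listener has no mechanical test.
```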

    Reply addressees: @ProperlyAds @JimLeadGen


    Source date (UTC): 2024-08-17 22:46:57 UTC

    Original post: https://twitter.com/i/web/status/1824941011115884544

    Replying to: https://twitter.com/i/web/status/1824921178496176153

  • AI’s LIMITED ABILITY TO “REASON”. 1) Symbols are abstractions of language. Each

    AI’s LIMITED ABILITY TO “REASON”.
    1) Symbols are abstractions of language. Each symbol requires a substantial investment in self-training by repetition.
    2) Language itself does in fact follow a data structure. Each word consists of a set of dimensions related to all other words by some distance or other.
    3) As such, as in many things, mathematical reducibility is a smaller set with more inference than computational or linguistic reducibility, and as such is more prone to errors of inference (probability).
    4) We are following the Pareto distribution of all knowledge production, meaning that the progress we have made covers 80% of the problem. But the majority of the work necessary for our desired utility (reduction of error bars) requires many more multiples of incremental investment than those made so far.
    5) This is why we must identify ‘holes’ in reasoning and produce training in specific fields that ‘fills those holes’ and so produces the inference and reduction of error that is desired.
    6) At present these AIs are exceptional at the breadth of knowledge available AND at summarizing and generalizing from that knowledge. However, just as the problem of induction has been well understood for hundreds of years, the challenge of evolving from generalization to deduction to induction is a long path that few humans are able to follow even given years of practice.
    Cheers
    CD
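    Point 2 above — each word as a set of dimensions related to all other words by some distance — is literally how word embeddings work. A toy sketch with made-up 3-dimensional vectors (real models learn hundreds of dimensions):

```python
import math

# Made-up 3-d vectors for illustration; real embeddings are learned.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Similarity between two words as the cosine of the angle
    between their vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Every word relates to every other word by some distance or other:
royal = cosine(vectors["king"], vectors["queen"])
fruit = cosine(vectors["king"], vectors["apple"])
print(royal > fruit)  # True: 'king' sits nearer 'queen' than 'apple'
```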

    Reply addressees: @RokoMijic


    Source date (UTC): 2024-08-17 22:42:58 UTC

    Original post: https://twitter.com/i/web/status/1824940007439908864

    Replying to: https://twitter.com/i/web/status/1824855639723524265

  • Yes they can reason but only in incremental steps. So assisting them in that ste

    Yes, they can reason, but only in incremental steps. So assisting them in that stepwise process requires understanding how to produce a prompt with sufficient context for the model to grasp what you are asking and ONLY what you are asking. The ONLY part is what fools most people.

    I work on very complex problems and I find that OpenAI is far better at incremental reasoning, while Anthropic is a bit better at narration.

    Reply addressees: @xriskology


    Source date (UTC): 2024-08-17 22:34:59 UTC

    Original post: https://twitter.com/i/web/status/1824937997810253824

    Replying to: https://twitter.com/i/web/status/1824919013883089338

  • RT @JayMan471: As a non-white man and second generation immigrant living in Amer

    RT @JayMan471: As a non-white man and second generation immigrant living in America, of course certain policies of the Democrats direct ben…


    Source date (UTC): 2024-08-17 22:32:36 UTC

    Original post: https://twitter.com/i/web/status/1824937400386158943

  • RT @ThruTheHayes: WAR Y’all talk about it an awful lot; yet, most of what presen

    RT @ThruTheHayes: WAR

    Y’all talk about it an awful lot; yet, most of what present day people demonstrate is being comfortably conquered. T…


    Source date (UTC): 2024-08-17 22:32:18 UTC

    Original post: https://twitter.com/i/web/status/1824937325916205106