Category: AI, Computation, and Technology

  • Q: For Followers. Is this slide clearer now?

    Q: For Followers. Is this slide clearer now? https://t.co/VKsT7ONBrf


    Source date (UTC): 2021-04-23 15:16:23 UTC

    Original post: https://twitter.com/i/web/status/1385613528816619521

  • @SicSemperTyrannis_1776 That’s pretty good. Yes. “What can you write with this c

    @SicSemperTyrannis_1776 That’s pretty good. Yes. “What can you write with this combination of types?” The continuous combination of operations, like the recombinatory potential of language, the potential of n dimensions in math, or the combinations of applied chemistry, biochemistry, or genetics, is infinite. But it has to be constructable.


    Source date (UTC): 2021-04-22 01:49:23 UTC

    Original post: https://gab.com/curtd/posts/106106464735657239
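    The point above, that combinations of types grow without bound but only a constructable subset is usable, can be sketched in a few lines of Python. The category names and the constructability rule below are invented purely for illustration; they are not from the original post.

    ```python
    from itertools import product

    # Tiny invented vocabularies; each added dimension multiplies the space.
    subjects   = ["cell", "protein", "market", "sentence"]
    operations = ["combine", "measure", "transform"]
    contexts   = ["in vitro", "in silico", "in prose"]

    # 4 * 3 * 3 = 36 combinations from even these tiny sets.
    all_combinations = list(product(subjects, operations, contexts))

    # "But it has to be constructable": only some combinations can
    # actually be built. This predicate is an arbitrary stand-in.
    def constructable(combo):
        subject, operation, context = combo
        return not (subject == "sentence" and context == "in vitro")

    buildable = [c for c in all_combinations if constructable(c)]
    ```

    The multiplicative growth is the "infinite potential"; the filter is the constructability constraint.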

  • COMMENT ON THE PRESENT CONDITION OF AI (and the possibility of the next AI winte

    COMMENT ON THE PRESENT CONDITION OF AI (and the possibility of the next AI winter, or not)

    https://propertarianinstitute.com/2021/04/19/comment-on-the-present-condition-of-ai/


    Source date (UTC): 2021-04-19 20:05:57 UTC

    Original post: https://twitter.com/i/web/status/1384236850647162884

  • Comment on the present condition of AI

    Good. Accurate. I would say that at least some of us are aware of the shortcomings that confirm your opinion: hardware, world model, self-training, and sufficient recursion of prediction.
     
    (a) neural nets today can categorize and predict within a trained domain. in other words, they aren’t AIs; they’re robots (machines)
    (b) adversarial neural nets can only improve that process
    (c) the hardware is inverted from the brain, which has many millions of tiny processors (columns) working in parallel, vs serial or batched-serial processing.
    (d) the brain works on sequences in time that test for coherence of prediction between ‘nodes’ (groups of neurons, columns, macro columns)
    (e) the coherent predictions across these subsystems survive competition with one another for integration,
    (f) integration of relatively simultaneous predictions produces our experience of a moment.
    (g) the brain creates an index of coherence producing an episodic memory out of location, place, borders, landmarks, objects, head direction, eye direction, the direction of movement, rate of turn, and rate of movement.
    (h) it is these episodes that survive the test of coherence over time in a continuous stream of input that we auto-associate with one another, producing predictions.
    (i) we ‘wayfind’ by recursion.
    (j) we develop a hierarchy of recursion, and eventually what we call consciousness, if enough recursion is possible, across enough neurons, with enough biological economy to maintain that neural activity.
     
    So that’s a more precise way of explaining the author’s correct assessment that all we have done is produce hardware cheap enough to accomplish what all of us working on AI in the ’80s knew already. And thankfully, tools that make development cheap enough. But really, Bayesian systems are just another form of database for the categorization of stimuli.
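    Steps (d) through (f) above, subsystems emitting predictions that compete for integration into a "moment," can be illustrated with a toy Python sketch. The subsystem names, coherence scores, and threshold rule here are invented for illustration, not the author's or any established neuroscience model:

    ```python
    # Toy sketch of (d)-(f): independent "subsystems" each emit a
    # prediction with a coherence score; predictions below threshold
    # lose the competition, and the survivors are integrated into
    # one "moment". All numbers and names are invented.

    def integrate_moment(predictions, threshold=0.5):
        """Keep predictions whose coherence beats the threshold (the
        competition), then return the survivors as the moment."""
        return {name: (value, score)
                for name, (value, score) in predictions.items()
                if score >= threshold}

    # Three subsystems predicting attributes of the same scene.
    predictions = {
        "place":   ("kitchen", 0.9),  # strong place estimate
        "heading": ("north",   0.7),  # head-direction estimate
        "object":  ("cup",     0.3),  # weak, incoherent guess loses
    }

    moment = integrate_moment(predictions)
    ```

    The weak "object" prediction fails the coherence test and is excluded; the surviving, mutually coherent predictions make up the integrated moment.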
  • it’s fking impossible. I’ve added a js modification to twitter, so that it stops

    it’s fking impossible. I’ve added a js modification to twitter, so that it stops the pink coloring of long texts. I can take a snapshot and post the image of the text. But even so, still too short. So is gab. It really does take 4K to make a full argument.


    Source date (UTC): 2021-04-18 20:02:10 UTC

    Original post: https://twitter.com/i/web/status/1383873509030383618

    Reply addressees: @LukeWeinhagen @ThruTheHayes @juniorwolf @G_Eats_Midwits

    Replying to: https://twitter.com/i/web/status/1383871123352297479

  • @Hail Gab doesn’t allow chaining of posts like twitter, nor does it have a suffi

    @Hail Gab doesn’t allow chaining of posts like twitter, nor does it have a sufficient character limit like facebook. So we are stuck with starting in a post and adding to it with comments.


    Source date (UTC): 2021-04-17 20:50:02 UTC

    Original post: https://gab.com/curtd/posts/106082638419285760

  • So the only means of defeating competitors (or enemies) with technology is to de

    So the only means of defeating competitors (or enemies) with technology is to deprive them of the economic returns of any given technology.

    This is what I hope to achieve with Runcible. https://twitter.com/curtdoolittle/status/1381650823797739523

  • Had to add the text to the title of the tweet for searching. The content of the

    Had to add the text to the title of the tweet for searching.
    The content of the slide doesn’t have the self reference any longer. 😉


    Source date (UTC): 2021-04-11 17:30:22 UTC

    Original post: https://twitter.com/i/web/status/1381298592967712769

    Reply addressees: @WorMartiN

    Replying to: https://twitter.com/i/web/status/1381291164557586442

  • Yeah, I think that the future (at least my product includes it) is more like Scr

    Yeah, I think that the future (at least my product includes it) is more like Scrivener than Word or any alternative. Lots of reasons.


    Source date (UTC): 2021-04-07 18:31:21 UTC

    Original post: https://twitter.com/i/web/status/1379864389138931715

    Reply addressees: @skyfire1201

    Replying to: https://twitter.com/i/web/status/1379861053589233670