Category: AI, Computation, and Technology

  • Artificial Intelligence: “You Can’t Get There from Here”

    Apr 1, 2020, 2:56 PM

    —“We can’t approach anything like intelligence with artificial neural networks … not in their current form.”— Hawkins

    —“All the tricky things we have done over the past seventy years haven’t mattered – we’ve just taken advantage of Moore’s law … it’s all short-term gains.”— Rich Sutton (“Bitter Lesson”)

    —“If we scale up the current technology it won’t make any difference.”—

    —“You can’t mathematically model anything as complex as the brain, only mathematically explain why the biology does what it does, but it can’t be analyzed completely… it’s out of the realm of possibility. (we can build )”–

    —“Sparse Representations”– The neural networks of excitatory and inhibitory neurons, and preparatory dendrites, compete over time to ‘announce’ a winner that’s passed forward for integration into the current hierarchy of models. Very hard to fool; very robust networks.

    —“Machine learning… the next step has to be orthogonal to what we’re doing, because we’re at its limit.”—

    —“And an ANN needs a lot of data. The human brain doesn’t. It’s extremely efficient.”—
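    The “sparse representations” idea above – populations of neurons competing until a winner is announced – can be sketched as a k-winners-take-all rule. This is a toy illustration under my own naming, not Hawkins’s actual model:

```python
def k_winners_take_all(activations, k):
    """Keep only the k strongest activations and silence the rest.

    A crude stand-in for excitatory/inhibitory competition: lateral
    inhibition suppresses all but the strongest responders, leaving
    a sparse code that small perturbations rarely change.
    """
    # indices of the k largest activations
    winners = set(sorted(range(len(activations)),
                         key=lambda i: activations[i])[-k:])
    return [a if i in winners else 0.0 for i, a in enumerate(activations)]

def overlap(code_a, code_b):
    """Similarity of two sparse codes: how many active units they share."""
    return sum(1 for a, b in zip(code_a, code_b) if a > 0 and b > 0)
```

    With thousands of units and only a few dozen winners, two codes almost never overlap by accident, which is one way to see the “very hard to fool, very robust” claim.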

    WHY I MOVED ON FROM AI

    This is why I stopped working on AI in the ’80s. Intelligence requires completely different computer architectures. It’s interesting that I got so close with the “before(state) during(change) after(state)” data structure: sequences; and with a hierarchy of geometric representations. But at <5 MHz and 64 KB, even working in assembly language, I could already tell that it couldn’t be done with existing computers. We would need to invert the entire architecture to millions of tiny cores with local durable memory, at low power. If I had written and published a paper at the time rather than ‘moving on’, I would have bragging rights today. lol. If we had followed Turing’s advice and made logical computers rather than numerical ones, we would be closer. But our emphasis on mathematics (the math trap again!!!) pushed the engineering of computers in the wrong direction.
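    The before(state)/during(change)/after(state) structure mentioned above can be sketched as a toy sequence memory. The class and method names here are illustrative, not from any published design:

```python
from collections import namedtuple

# One learned "sequence": the state before a change, the change itself,
# and the state that resulted.
Transition = namedtuple("Transition", ["before", "during", "after"])

class SequenceMemory:
    """Toy store of before/during/after records.

    Given a current state and a contemplated change, it predicts the
    outcome by recalling a previously stored transition.
    """
    def __init__(self):
        self._transitions = []

    def record(self, before, during, after):
        self._transitions.append(Transition(before, during, after))

    def predict(self, before, during):
        for t in self._transitions:
            if t.before == before and t.during == during:
                return t.after
        return None  # no matching experience yet
```

    For example, after `record("door closed", "push", "door open")`, asking `predict("door closed", "push")` recalls `"door open"` – prediction as recall of stored sequences rather than numerical computation.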

  • Yes We Archive Everything in The Movement

    Apr 3, 2020, 12:15 PM

    —“Is this all being catalogued ?”—Matt Fahnestock

    I don’t understand? PROCESS: post on Facebook > weekly backup > post to website > continuous backup > integrate into ‘book’ on website > continuous backup > publish book. I am irregular at posting from FB to the site because I don’t have a way of automating it; I usually do a few months at a time. There are well over ten thousand posts there. The website also auto-posts to a hidden FB account as backup. We download FB and site backups regularly: I do, Brandon does, Martin does. In the event anything happens to me, ownership passes to individuals in the movement. In addition, Brandon Hayes collects key posts from everyone and organizes them by dozens of topics. He is also working on a book. Brandon’s inventory is organized differently from the site and my book, but it collects information from more members of the movement.

  • On Secure Communications and Organizing

    On Secure Communications and Organizing. https://propertarianism.com/2020/05/10/on-secure-communications-and-organizing/


    Source date (UTC): 2020-05-10 20:46:02 UTC

    Original post: https://twitter.com/i/web/status/1259585509585825803

  • On Secure Communications and Organizing.

    NOPE.

    —“I have a question, is there such a thing as secure communication?”— Via PM

    1. No.
    2. But Telegram is good, Signal is better, and ProtonMail is best.

    3. The only reason for secure comm is planning – and while abstract discussion is legal, planning is not.

    4. There is no utility – only harm – in planning or taking any action other than protest, and there is no need for privacy in protest. Just the opposite.

    5. Swarming acts of anti-2A oppression is legit, and that’s the model to organize on, but limit your activity to organizing on 2A activism. Everything else is off limits.

    6. Trying to organize otherwise only associates you with the people most likely to say stupid things – and get you into trouble with their LARPing. (young male testosterone vs young female estrogen – both types of ‘excess’.)

    7. Otherwise organization isn’t necessary. Prep isn’t necessary. Just show up.

    8. It will be obvious when you need to show up. So just show up.

    9. There are millions of guys like you. Stop looking for confirmation. Get fit. Learn. Read. Watch videos. Just show up. We will win: an offer that all but the state can’t refuse, and versus “Pirates on a red ocean of blue islands.”

    (And don’t involve me in anything. I stay within the lines. I know what I’m doing. I write constitutional law, I work on strategy. That’s my job. I don’t have time for anything else.)  

  • The Hardware Problem Limiting AI

    The Hardware Problem Limiting AI https://propertarianism.com/2020/05/09/the-hardware-problem-limiting-ai/


    Source date (UTC): 2020-05-09 16:44:41 UTC

    Original post: https://twitter.com/i/web/status/1259162383769120769

  • The Hardware Problem Limiting AI

    Apr 28, 2020, 9:44 AM

    Yep. The problem is that the entire industry is built for central computation limited by frequency (heat) rather than distributed association and prediction limited only by numbers (cool) – and what we need is billions of trivial circuits whose primary difference from neurons (really dendrites and synapses) is in creating many local logical (address) connections rather than physical (dendritic-synaptic) ones, storing trivial (sparse) bits of memory in sequences. In addition, the context (episode) creates an index in time AND space, and each fragment of information is locally co-associated with those positions. Every neuron, micro-column, macro-column, and region of the brain is trivially simple, but in concert they produce in parallel what cannot be done by increasing frequency (and heat).

    Graphics processors are architected for parallel processing, and so we ‘hijacked’ them in the 2000s for AI use. And since the human brain uses triangles and hexagons for producing its world model, the graphics processor does solve HALF of the underlying problem: the neocortex is a doubling (folding over), with six layers, of the entorhinal cortex (three layers), dividing the responsibility of identity (top) and relative position (bottom), with the outputs passed forward in the cognitive hierarchy in a vast market competition for coherence.

    So it’s not that we don’t (at least now, because this is all recent knowledge) know how to create general intelligence (I certainly do). It’s that (as Turing said) we built the machines for math (top down) instead of thinking (bottom up), with math as merely one of the grammars (logics) resulting from it. So the current issue, as I understand it, is that we cannot achieve in software what we need hardware for.

    So we need a Manhattan Project to produce thinking machines, only because the industry is constructed for the opposite aim, and current (primitive) neural networks can categorize, but only do so with vast amounts of information and manual tuning. In this illustration from the attached web page, we see the limit of what current AI is able to do: categorize, and only after lots of training and tuning. This means application-specific hardware, because the hardware is still constructed ‘incorrectly’ for the task of general intelligence. Conversely, there are many functions where we do not want a general intelligence – which exchanges an increase in the possibility of error for a decrease in the cost of adaptation. Robots are dangerous because they’re not intelligent, but there are many cases where unintelligent danger and intelligent danger are a trade-off. So we have market demand for (a) simple software problems, (b) application-specific AI problems, and (c) general AI problems.
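    The time-and-space indexing described above – each fragment of information co-associated with when and where it occurred – can be sketched as a two-way index. A minimal illustration; the names are mine, not any real system’s API:

```python
class EpisodicIndex:
    """Toy two-way index: fragments of information are stored with the
    time and place (the episode) in which they occurred, so recall can
    run from context to content, or from content back to its contexts.
    """
    def __init__(self):
        self._by_context = {}   # (time, place) -> list of fragments
        self._by_fragment = {}  # fragment -> list of (time, place)

    def store(self, time, place, fragment):
        key = (time, place)
        self._by_context.setdefault(key, []).append(fragment)
        self._by_fragment.setdefault(fragment, []).append(key)

    def fragments_at(self, time, place):
        """What was experienced at this point in time and space?"""
        return self._by_context.get((time, place), [])

    def contexts_of(self, fragment):
        """When and where was this fragment experienced?"""
        return self._by_fragment.get(fragment, [])
```

    Each `store` is cheap and local – a trivial circuit’s worth of work – while useful recall emerges only from the accumulated associations, which is the contrast with frequency-limited central computation being drawn above.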
