Apr 1, 2020, 2:56 PM
—“We can’t approach anything like intelligence with artificial neural networks … not in their current form.”— Hawkins
—“All the tricky things we have done over the past seventy years haven’t mattered – we’ve just taken advantage of Moore’s Law … it’s all short-term gains.”— Rich Sutton (“The Bitter Lesson”)
—“If we scale up the current technology, it won’t make any difference.”—
—“You can’t mathematically model anything as complex as the brain; you can only mathematically explain why the biology does what it does, but it can’t be analyzed completely… it’s out of the realm of possibility. (we can build )”—
—“Sparse Representations”—
The neural networks of excitatory and inhibitory neurons, with their predictive dendrites, compete over time to ‘announce’ a winner that is passed forward for integration into the current hierarchy of models. Very hard to fool; very robust networks.
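That competitive dynamic is easy to sketch. Below is a minimal k-winners-take-all toy in Python: every cell gets some feedforward drive, only the strongest k ‘announce’ themselves, and the resulting sparse code survives noise largely intact. The function names and the 40-of-2048 sparsity level are my own illustrative choices, not HTM’s actual algorithm.

```python
# A minimal sketch of a k-winners-take-all competition, assuming a flat
# layer of N cells where only the k most strongly driven cells "win".
# All names here (k_winners_take_all, overlap) are illustrative.
import numpy as np

def k_winners_take_all(activations: np.ndarray, k: int) -> np.ndarray:
    """Return a binary sparse code: 1 for the k strongest cells, 0 elsewhere."""
    winners = np.argsort(activations)[-k:]   # indices of the k largest drives
    code = np.zeros_like(activations)
    code[winners] = 1.0
    return code

def overlap(a: np.ndarray, b: np.ndarray) -> int:
    """Number of winning cells two sparse codes share."""
    return int(np.sum((a > 0) & (b > 0)))

rng = np.random.default_rng(0)
drive = rng.normal(size=2048)                # feedforward input to 2048 cells
clean = k_winners_take_all(drive, k=40)      # ~2% of cells active
noisy = k_winners_take_all(drive + rng.normal(scale=0.1, size=2048), k=40)

# Most winners survive moderate noise, which is one reason sparse codes are
# hard to fool: matching even a subset of active cells identifies the pattern.
print(overlap(clean, noisy), "of 40 winners unchanged under noise")
```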
—“Machine learning… the next step has to be orthogonal to what we’re doing, because we’re at its limit.”—
—“An ANN needs a lot of data. The human brain doesn’t. It’s extremely efficient.”—
WHY I MOVED ON FROM AI
This is why I stopped working on AI in the ’80s. Intelligence requires completely different computer architectures. It’s interesting that I got so close with the “before(state) during(change) after(state)” data structure: sequences; and with a hierarchy of geometric representations. But at <5 MHz and 64 KB, even working in assembly language, I could already tell that it couldn’t be done with existing computers. We would need to invert the entire architecture into millions of tiny cores with local durable memory, at low power. If I had written and published a paper at the time instead of ‘moving on’, I would have bragging rights today. lol.
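For what it’s worth, that “before(state) during(change) after(state)” structure maps cleanly onto modern idioms. Here is a hedged sketch in Python of what such a sequence memory might look like; all the names (Transition, SequenceMemory) are my own guesses at the design, not the original 1980s code.

```python
# A minimal reconstruction of "before(state) during(change) after(state)"
# as a modern data structure: store observed transitions, then predict
# the "after" state from a (before, change) pair.
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    before: str   # state before the change
    change: str   # the event/action observed
    after: str    # state after the change

class SequenceMemory:
    """Stores observed transitions and predicts 'after' from (before, change)."""
    def __init__(self) -> None:
        self._memory: dict[tuple[str, str], str] = {}

    def learn(self, t: Transition) -> None:
        self._memory[(t.before, t.change)] = t.after

    def predict(self, before: str, change: str) -> str | None:
        return self._memory.get((before, change))

mem = SequenceMemory()
mem.learn(Transition("door closed", "push", "door open"))
mem.learn(Transition("door open", "push", "door open"))
print(mem.predict("door closed", "push"))   # -> "door open"
```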
If we had followed Turing’s advice and built logical computers rather than numerical ones, we would be closer. But our emphasis on mathematics (the math trap again!!!) pushed the engineering of computers in the wrong direction.