—“You’d probably benefit from learning some basic coding, because it is a very different way of thinking. You just can’t handwave stuff with a computer…”—Moritz Bierling
Exactly. Which is what operational prose prevents: our endemic hand-waving.
Now, just as with databases and OOP, we have standardized design patterns and made minor improvements, but these fail to overcome the limitations of the model. We use AI to categorize (sensory) inputs, but there is no 'intelligence' to it at all.
@LLaddon That’s simply not true. There has been little theoretical improvement since the 1950s, and we have done little other than take advantage of Moore’s law. When I got out of AI in the ’80s, it was obvious that we simply couldn’t do any better at 4 MHz in 64 KB of RAM.
THE HARDWARE PROBLEM LIMITING AI
Yep. The problem is that the entire industry is built for central computation, limited by frequency (heat), rather than for distributed association and prediction, limited only by numbers (cool). What we need is billions of trivial circuits whose primary difference from neurons (really, dendrites and synapses) is in creating many local logical (address) connections rather than physical (dendritic-synaptic) ones, storing trivial (sparse) bits of memory in sequences. In addition, the context (episode) creates an index in time AND space, and each fragment of information is locally co-associated with those positions.
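The scheme above can be sketched loosely in code. This is only a toy illustration under my own assumptions (the class name, the string addresses, and the `(time, place)` episode tuple are all invented for the example), not an implementation of any existing architecture:

```python
from collections import defaultdict

class SparseAssociativeStore:
    """Toy sketch: fragments are linked by logical addresses
    (dictionary keys) rather than physical wiring, and every
    fragment is also co-associated with its episodic
    (time, place) index."""

    def __init__(self):
        self.by_address = defaultdict(list)  # logical address -> fragments
        self.by_episode = defaultdict(list)  # (time, place) -> fragments

    def store(self, addresses, episode, fragment):
        for addr in addresses:
            self.by_address[addr].append(fragment)
        self.by_episode[episode].append(fragment)

    def recall_by_cue(self, addresses):
        # association: any fragment sharing a logical address with the cue
        seen = []
        for addr in addresses:
            for frag in self.by_address[addr]:
                if frag not in seen:
                    seen.append(frag)
        return seen

    def recall_episode(self, episode):
        # episodic recall: everything co-indexed with that time and place
        return list(self.by_episode[episode])

store = SparseAssociativeStore()
store.store(["red", "round"], (1, "kitchen"), "apple")
store.store(["red", "fast"], (2, "street"), "firetruck")

print(store.recall_by_cue(["red"]))          # ['apple', 'firetruck']
print(store.recall_episode((1, "kitchen")))  # ['apple']
```

The point of the sketch is only that connection here is an address lookup (cheap, local, and numerous), not a physical wire, and that the episodic index comes for free alongside the associative one.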
Every neuron, micro-column, macro-column, and region of the brain is trivially simple, but in concert they produce in parallel what cannot be done by increasing frequency (and heat).
Graphics processors are architected for parallel processing, and so we 'hijacked' them in the 2000s for AI use. And since the human brain uses triangles and hexagons for producing its world model, the graphics processor does solve HALF of the underlying problem: the neocortex is a doubling (folding over), with six layers, of the entorhinal cortex (three layers), dividing the responsibility of identity (top) and relative position (bottom), with the outputs passed forward in the cognitive hierarchy in a vast market competition for coherence.
So it’s not that we don’t know how to create general intelligence (I certainly do) – at least not now, because this is all recent knowledge. It’s that, as Turing said, we built the machines for math (top down) instead of thinking (bottom up), with math as merely one of the grammars (logics) resulting from it.
So the current issue, as I understand it, is that we cannot achieve in software what we need hardware for. We need a Manhattan Project to produce thinking machines only because the industry is constructed for the opposite aim, and current (primitive) neural networks can categorize, but only with vast amounts of information and manual tuning.
In this illustration from the attached web page, we see the limit of what current AI is able to do: categorize, and only after lots of training and tuning. This means application-specific hardware, because the hardware is still constructed 'incorrectly' for the task of general intelligence.
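The "categorize, and only after lots of training and tuning" point shows up even in the smallest possible neural network, a single perceptron. Everything below (the AND-gate toy data, the learning rate, the epoch count) is a hand-tuned illustration of that point, not a claim about any particular system:

```python
def predict(w, b, x):
    # a single threshold unit: all it can ever do is categorize
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_perceptron(samples, epochs=50, lr=0.1):
    # 'epochs' and 'lr' are hand-tuned knobs: the unit only learns
    # its category boundary after many repeated passes over
    # labeled examples
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in samples:
            err = label - predict(w, b, x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# toy labeled data: the AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Even this four-example toy takes dozens of supervised passes and a chosen learning rate to settle; scale the category up and the data and tuning requirements scale with it.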
Conversely, there are many functions where we do not want a general intelligence – which trades an increase in the possibility of error for a decrease in the cost of adaptation. Robots are dangerous because they’re not intelligent, but in many cases unintelligent danger and intelligent danger are a trade-off.
So we have market demand for (a) simple software problems, (b) application-specific AI problems, and (c) general AI problems.
ETYMOLOGY: “SHUTTER DOG”
In engineering, a dog is a tool or part of a tool that prevents movement or imparts movement by offering physical obstruction or engagement of some kind.
It may hold another object in place by blocking it, clamping it, or otherwise obstructing its movement. Or it may couple various parts together so that they move in unison – the primary example of this being a flexible drive to mate two shafts in order to transmit torque.
This word usage is a metaphor derived from the idea of a dog (animal) biting and holding on, the “dog” name coming from the basic idea of how a dog’s jaw locks on – by the movement of the jaw, or by the presence of many teeth.
The first shutter dog, named the “rat tail shutter dog,” was hand-forged in Colonial Williamsburg. Made with a hammer and anvil, steel was formed into an elongated hook that spiraled at the bottom. The earliest method of mounting the rat tail shutter dog involved a wrought nail hammered into a wooden structure. The wrought nail later evolved into a threaded bolt.