In order to understand why silicon engines have had such astounding energy-efficiency gains compared to combustion engines, and why Jevons Paradox has a long way to go yet, one has to look to the difference in the nature of the tasks the two kinds of engines perform. One consumes power to produce information; the other produces power.
All engines convert energy, effectively “refining” it, from a chaotic form into a more highly ordered form. Combustion engines (and mechanical ones) are designed to effect a physical action. Efficiency is constrained by the immutable laws of thermodynamics, and by things like friction, inertia and gravity. Logic engines, on the other hand, don’t produce physical action but are designed to manipulate the idea of the numbers zero and one.
A logic engine’s purpose is rooted in simply knowing and storing the fact of the binary state of a switch; i.e., whether it is on or off. There is no useful information in the “size” of the on or off state. It was obvious from early days that more and faster logic would require shrinking the size of the switch representing the binary state, chasing the physics that allows faster flips using less energy per flip. To be simplistic, it’s like choosing to use a grain of sand instead of a bowling ball to represent the number one.
And, unlike other engines, logic engines can be accelerated through the clever application of mathematics, i.e., “software.” It’s no coincidence that this year also marks the 60th anniversary of the creation of that new term, coined by an American statistician.
With software one can, for example, employ a trick equivalent to ignoring how much space lies between the grains of sand. This is what a compression algorithm does to digitally represent a picture using far less data, and thus energy. No such options exist in the world of normal engines.
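To make the idea concrete, here is a minimal run-length-encoding sketch in Python. It is only an illustrative stand-in for what real image codecs do far more cleverly: a long run of identical pixel values is stored as a single value plus a count, so a mostly blank picture collapses into a handful of numbers.

```python
# Minimal run-length encoding (RLE): an illustrative stand-in for the kind of
# redundancy removal that real image compression performs far more cleverly.

def rle_encode(pixels):
    """Collapse runs of identical values into (value, run_length) pairs."""
    if not pixels:
        return []
    runs = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original pixel list."""
    return [value for value, length in runs for _ in range(length)]

# A mostly white scanline: 1,000 pixel values shrink to a few (value, count) pairs.
scanline = [255] * 600 + [0] * 10 + [255] * 390
encoded = rle_encode(scanline)
assert rle_decode(encoded) == scanline
print(f"{len(scanline)} pixel values stored as {len(encoded)} runs")
```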
Tracing progress from 1971, when Intel introduced its vaunted 4004, the first widely used microprocessor, with 2,300 transistors, we’ve seen the size of the transistor drop so much that a Central Processing Unit (CPU) today has billions of transistors, each able to operate 10,000-fold faster. That combination has yielded the billion-fold gains in computing power we’ve witnessed per chip.
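A back-of-the-envelope check of that arithmetic, using round illustrative numbers rather than any specific chip’s specification, multiplies the growth in transistor count by the per-transistor speed gain:

```python
# Rough arithmetic behind the per-chip gains described above; the modern figures
# (10 billion transistors, 10,000x faster switching) are illustrative round
# numbers, not a particular product's specification.

transistors_1971 = 2_300            # Intel 4004
transistors_today = 10_000_000_000  # order of magnitude for a large modern chip
speed_gain_per_transistor = 10_000  # faster switching, per the text

count_gain = transistors_today / transistors_1971
combined_gain = count_gain * speed_gain_per_transistor

print(f"Transistor count gain: ~{count_gain:,.0f}x")
print(f"Combined per-chip gain: ~{combined_gain:,.0f}x")  # well past a billion-fold
```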
If combustion engines had achieved that kind of scaling efficiency, a car engine today would generate a thousand-fold more horsepower and have shrunk to the size of an ant: with such an engine, a car could actually fly, very fast. But only in comic books does the physics of propulsion scale that way. In our universe, power scales the other way. An ant-sized engine (which can be built) produces roughly a million times less power than a Prius.
One consequence of the trajectory of making logic engines both cheaper and more powerful is that overall spending on such engines has soared (a related variant of Jevons Paradox). Each year the world’s manufacturers now purchase some $300 billion worth of semiconductor engines in order to build and sell computing machines. That total is some 20% to 50% more than global spending on the piston engines used to build all the world’s wheeled transportation machines. And the former is growing faster than the latter.
The scaling “law” of transistor-based engines was first codified by Intel co-founder Gordon Moore in an April 1965 article. There he observed that advances in the techniques of silicon etching allowed transistor dimensions to shrink so fast that the number of them per integrated circuit kept doubling on a regular cadence (initially every year, a pace Moore later revised to every two years). That also constituted a doubling in energy efficiency. And while Moore’s observation has been enshrined as a “law,” it’s not a law of nature but a consequence of the nature of logic engines.
Thus, back to Jevons and his paradox. Quite obviously the market’s appetite for logic engines (that is, for data) has grown far faster than the efficiency improvements of those engines, so far. What next for Moore’s Law? If logic engines continue the trajectory of recent decades, we should expect to see a lot more surprises in both economic and business domains, never mind energy.
Much scholarship has been devoted to the question of the future of Moore’s Law. Some pundits have been claiming that more efficient logic engines are now needed in order to constrain potential runaway energy use by the digital infrastructure. (That’s a constituency that has clearly not accepted the reality of the Jevons Paradox.) Meanwhile, other pundits have declared that the end of Moore’s Law is in sight.
Because of Moore’s Law, CPUs now run so hot (a direct consequence of speed) that heat removal is an existential challenge. Inside the CPU itself, chip designers now build heat management software that can even throttle speed back to cool things down. Some datacenter operators have tackled the challenge, for example, by turning to water-cooled logic engines, a solution combustion engineers have long been familiar with. Even more challenging, logic switches have become so fast that moving the data around on the CPU’s silicon surface is actually constrained by the speed of light. And the logic switches are so small that conventional materials and tools are indeed maxing out. (For a lucid summary of all this, see this recent essay by Rodney Brooks, emeritus professor at MIT.)
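To get a rough feel for why light speed matters at these clock rates, consider the following sketch; the clock rate and die size are illustrative assumptions, not measurements of any specific CPU, and real on-chip wires carry signals well below light speed in vacuum.

```python
# Rough illustration of the speed-of-light constraint mentioned above; the
# clock rate and die size below are assumptions for the sake of arithmetic.

c = 3.0e8            # speed of light in vacuum, meters per second
clock_hz = 4.0e9     # assume a 4 GHz clock
die_size_m = 0.02    # assume a die roughly 2 cm across

distance_per_cycle = c / clock_hz                  # ~7.5 cm per clock tick
crossings_per_cycle = distance_per_cycle / die_size_m

print(f"Light travels ~{distance_per_cycle * 100:.1f} cm per clock cycle")
print(f"That allows only ~{crossings_per_cycle:.1f} die-crossings per cycle, "
      f"before accounting for wires being much slower than light in vacuum")
```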
But it’s important to keep in mind that Moore’s Law is, as we’ve noted, fundamentally about finding ways to create ever tinier on-off states. In that regard, in the words of one of the great physicists of the 20th century, Richard Feynman, “there’s plenty of room at the bottom” when it comes to logic engines. To appreciate how far away we still are from a “bottom,” consider the Holy Grail of computing, the human brain, which is at least 100 million times more energy efficient than the best silicon logic engine available.
Engineers today are deploying a suite of techniques in the pursuit of “logic” density and speed that can be grouped into three buckets: clever designs, embedded software, and new materials that make transistors still smaller.
The basic design of the transistor itself is no longer just the original planar, two-dimensional structure; it has gone 3D. The density gained from going vertical (think microscopic skyscrapers) buys another decade of Moore’s Law progress. Another design innovation is the “multi-core” microprocessor, which integrates dozens of CPUs onto a single silicon chip, each with billions of transistors. And now there are also entirely different engine types, much as aerospace engineers, to break the sound barrier, went from propellers to jets and then to rocket engines. The equivalents among logic engines are Graphics Processing Units (GPUs) and Neural Processing Units (NPUs).
For specialized tasks GPUs and NPUs outperform CPUs. GPUs were pioneered for gaming to render realistic video, i.e., “graphics” (a subset of artificial reality), where the measure of merit is images rather than logic operations processed per second. NPUs, meanwhile, don’t even use a CPU’s linear logic; they instead emulate a non-linear, brain-like neural structure and offer “artificial intelligence” capabilities for such things as voice recognition, navigation and diagnostics. Computers that integrate CPUs, GPUs and NPUs are the equivalent of a vehicle with an SUV’s utility, a Ferrari’s speed, and a semitrailer’s carrying capacity.
And, in no small irony, software is increasingly deployed within the CPU to manage traffic and power, and to optimize the precious resources of the silicon “real estate.” Such software is the microscopic equivalent of blending Airbnb with Uber and Waze, at hyper-speed. Logic engines can, in effect, pull themselves up by their own bootstraps.
Then there are new materials, and the associated specialized machines for using them, to fabricate even smaller transistors for all classes of digital engine. Here the automotive analog is using better lubricants, or switching to aluminum and carbon-fiber bodies, or replacing carburetors with fuel injection. While there is an ultimate limit to a silicon transistor’s dimensions (no smaller than the width of a half-dozen atoms), the distance to that limit represents as much room for progress as has occurred from 1998 to date.
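A rough sanity check of that headroom claim, using round illustrative figures (roughly 250 nm features circa 1998, roughly 20 nm today, and about 1.5 nm for a half-dozen silicon atoms, all assumptions rather than process-node specifications), suggests the remaining linear shrink is of the same order as the shrink already achieved since 1998:

```python
# Rough check of the headroom claim, using round, illustrative feature sizes
# (assumptions for arithmetic, not process-node specifications).

feature_1998_nm = 250.0   # typical minimum feature size circa 1998
feature_today_nm = 20.0   # order of magnitude for today's smallest features
atomic_limit_nm = 1.5     # roughly the width of a half-dozen silicon atoms

shrink_so_far = feature_1998_nm / feature_today_nm     # shrink since 1998
shrink_remaining = feature_today_nm / atomic_limit_nm  # shrink left to the limit

print(f"Linear shrink since 1998: ~{shrink_so_far:.0f}x")
print(f"Linear shrink left before the atomic limit: ~{shrink_remaining:.0f}x")
```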
“Merely” achieving, over the coming two decades, as much progress as has happened with Moore’s Law since 1998 will be world changing.
Perhaps Moore’s Law, though, is no longer the best metric for the next 60 years of logic-engine progress. Transistors have evolved from components in isolated machines into a market ubiquity, now more common than grains of wheat. Logically, we should move beyond just counting them.
Cost is the metric that matters in every market and application: in this case, the cost of processing power. As we briefly illustrated above, in addition to size there are a variety of ways (e.g., designs, software) to advance the effect of Moore’s Law, which is the increase of processing power at declining cost. In fact, since the year 2000, the increase in logic operations purchasable per dollar has grown some 10-fold more than the increase in “raw” logic operations generated per watt.
Ray Kurzweil, widely known for his book The Singularity Is Near, may have been the first to document this cost reality. Kurzweil’s map shows that in the 1960s mainframe era, $1,000 bought about one calculation per second. By the year 2000, a single dollar bought 10,000 calculations per second. Today a dollar buys one billion calculations per second. (All in constant-dollar terms.)
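Taking those three data points at face value, and treating the dates as roughly 1965, 2000 and 2018 (assumed years, chosen only to make the arithmetic concrete), a quick calculation shows the compound pace of improvement in calculations per second per dollar that they imply:

```python
import math

# Implied rate of improvement in calculations per second per dollar, taking the
# three data points above at face value; the years 1965, 2000 and 2018 are rough
# stand-ins for "the 1960s mainframe era", "the year 2000" and "today".

calcs_per_sec_per_dollar = {
    1965: 1 / 1_000,        # ~1 calculation/sec for $1,000
    2000: 10_000,           # 10,000 calculations/sec per dollar
    2018: 1_000_000_000,    # one billion calculations/sec per dollar
}

years = sorted(calcs_per_sec_per_dollar)
first, last = years[0], years[-1]
total_gain = calcs_per_sec_per_dollar[last] / calcs_per_sec_per_dollar[first]
doublings = math.log2(total_gain)
doubling_time = (last - first) / doublings

print(f"Overall gain: ~{total_gain:.0e}x over {last - first} years")
print(f"Implied doubling time: ~{doubling_time:.1f} years")
```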
And that’s only half the story. The cloud, accelerating the cost declines of processing power through utility-class economies of scale (see Part 1 in this series), now combines with ever-cheaper, ubiquitous high-speed wireless networks (Part 2 in this series). The integration of all these trends radically reduces costs and democratizes access to both data and the logic engines that process it. We don’t yet have a metric to quantify the effect of this economically incendiary combination.
How much processing power might society ultimately consume? Data and information constitute a feature of our universe that is, like the universe itself, essentially infinite. There is no limit to our appetite to acquire data in order to gain greater knowledge and control of our world.
So, finally, on this 60th anniversary of the logic engine, one more transportation analogy: the 60th anniversary of the invention of the internal combustion engine was 1936. We know now that 1936 was early days in the rise in consumption of road-miles. When it comes to consumption of processing power, it’s the equivalent of 1936.
By now the energy implications of what we might call the digital Jevons Paradox should be obvious. But far more important are the implications of all the innovations yet to appear from the full flowering of the era of digital engines.