If you’re aware of the history of computers and machine intelligence at all, you know that raw computational power has come a long way since the early transistors. Gordon Moore’s famous law has more or less held up: the number of transistors that fit on a microchip has doubled roughly every two years (every 18 months, by some popular formulations), producing a steady increase in the processing power, speed, and storage capacity of our digital devices.
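To get a feel for what that doubling means, here's a minimal sketch of the projection, assuming a two-year doubling period and the Intel 4004's roughly 2,300 transistors (1971) as a starting point; both numbers are illustrative assumptions, not measurements:

```python
def transistor_count(years_elapsed, initial_count=2300, doubling_period=2.0):
    """Project transistor count after `years_elapsed` years of doubling.

    initial_count defaults to ~2,300, roughly the Intel 4004 (1971);
    doubling_period defaults to two years. Both are illustrative
    assumptions, not measured values.
    """
    return initial_count * 2 ** (years_elapsed / doubling_period)

# Forty years of doubling every two years is 2^20: about a
# million-fold increase over the starting count.
print(round(transistor_count(40)))
```

The punchline is the exponent: any fixed doubling period, whether 18 or 24 months, compounds into a roughly million-fold gain over four decades.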
Here’s Moore’s Law graphed out, courtesy of Wikipedia:
As the fruits of Moore’s Law ripened, it became easy to imagine that machines would quickly become so smart that they would rival human intelligence. Yet, alas, they haven’t. Despite this tremendous increase in machine computational power, impressive as it is, and probably the single greatest driving force of human advancement in recent history, the dream of Artificial Intelligence, or of the Singularity, remains unfulfilled.
And Moore’s Law is slowing down as it runs up against the laws of physics. There are many opinions on when the law might finally collapse, but here’s the always-entertaining Michio Kaku discussing the issue:
As Kaku indicates, there are alternatives to the silicon-and-code path we’ve been beating for the past 50 years.
Here’s one: SyNAPSE, a project from DARPA.
To quote their site:
“Current programmable machines are limited not only by their computational capacity, but also by an architecture requiring human-derived algorithms to describe and process information from their environment. In contrast, biological neural systems, such as a brain, autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real-world problems generally have many variables and nearly infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications. Useful and practical implementations, however, do not yet exist.
“The vision for the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is to develop electronic neuromorphic machine technology that scales to biological levels. SyNAPSE supports an unprecedented multidisciplinary approach coordinating aggressive technology development activities in the following areas: hardware, architecture, simulation, and environment.”
The approach here, then, is inspired by the brain: an architecture composed of simple units in a flexible structure that responds effectively to complex input. So the SyNAPSE vision is something along the lines of this chart they provided:
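As a toy illustration of that "simple unit" idea, here's a minimal leaky integrate-and-fire neuron, one of the standard building blocks in neuromorphic designs. The class name and all parameter values below are my own illustrative assumptions, not anything specified by the DARPA program:

```python
class LIFNeuron:
    """A toy leaky integrate-and-fire neuron (illustrative, not SyNAPSE's design)."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold  # fire when potential reaches this level
        self.leak = leak            # fraction of potential retained each step

    def step(self, input_current):
        """Leak a little, integrate the input, and spike (reset) at threshold."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after spiking
            return True             # emitted a spike
        return False


# A steady sub-threshold input accumulates until the neuron fires:
neuron = LIFNeuron()
spikes = [neuron.step(0.3) for _ in range(10)]
print(spikes)
```

Nothing in this unit is "programmed" with an algorithm for its environment; behavior emerges from many such units wired together, which is the contrast the DARPA quote above is drawing against conventional programmable machines.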
Simple systems with complex environmental capacity. Kind of like us, really. I am reminded of the human brain, which is again the SyNAPSE model, but also of much of nature. Consider colonies of ants or bees: together they form a flexible system of simple components that responds effectively to a complex environment. But the question is always whether the collective hive can be considered intelligent, or rather be considered an intelligence. And thus we are back to the questions around artificial intelligence and the Singularity that I find so fascinating: will it be a single-system intellect or some kind of hive mind? Or could there be both? What is really possible?
Either way, I do believe there’s something great about the simple chart from the SyNAPSE project, above. That green star on the horizontal axis, labelled “human level performance” and “dawn of a new age,” that’s the Singularity, isn’t it? And see the yellow star? DARPA intends to get us closer.
SyNAPSE. Interesting project. Worth following.