The Technological Maximum
My alternative to Vernor Vinge's idea of the 'technological singularity'
Vernor Vinge proposed the idea of the "technological singularity." As Wikipedia puts it, "In futures studies, a technological singularity represents an 'event horizon' in the predictability of human technological development past which present models of the future cease to give reliable or accurate answers, following the creation of strong artificial intelligence or the amplification of human intelligence."
I have a different idea, which I call the Technological Maximum.
My main problem with the singularity is that it rests on vaguely defined or frankly undefined concepts--specifically, what is technology, and is there such a thing as generalized, quantifiable, plottable "technological change"? The vagueness of the core premises allows proponents of the singularity to predict such things as an exponential rate of change; the fact that they can't say what that would look like is taken to be a feature of the idea, when in fact it's an indication of the dodginess of the original premise.
As to the idea that a singularity occurs when "present models of the future cease to give reliable or accurate answers," I've got news for you: present models of the future have never given reliable or accurate answers about the future. With regard to predicting the future, it's all singularity, all the time, baby.
There is, however, one grain of truth in the idea of the singularity: we are already starting to create machines that nobody understands, using genetic algorithms. Someone could study, e.g., the strange electrical circuits that have been evolved in some experiments and figure out how they work, but if we started to apply genetic algorithms to the design of many or most devices, we would rapidly reach a point where nobody had the time and resources to figure out how the majority of new devices work. It would happen this way: you have a certain set of resources and want to build an X using those resources. Traditional designs won't work. You employ a genetic algorithm and evolve a design that does work with maximum efficiency given your resources. Build X and repeat for any Y. (A good example is the evolved antenna NASA used on one of its newest satellites.) It's by no means clear that we could design analysis software to keep up with such creativity, or that we could keep up with the reports it produced if a large number of manufactured items were being evolved in this way.
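The evolve-a-design loop described above can be sketched in a few lines. The following is a minimal, hypothetical genetic algorithm in Python: the "design" is just a bit string, and the fitness function (a stand-in for a real engineering simulation) scores how well a candidate matches an arbitrary target spec. The point is how little of the loop involves understanding the design--it is pure variation and selection.

```python
import random

random.seed(0)

# Hypothetical "design spec": in a real application this would be replaced
# by a physical simulation scoring each candidate design.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Score a candidate: how many 'components' match the spec."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Randomly flip bits -- blind variation, no design insight."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Splice two parent designs at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=40, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # an optimal design was found
        parents = pop[: pop_size // 2]  # keep the better half
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", len(TARGET))
```

Nothing in the loop records *why* the winning design works; multiply this by every manufactured item and you get the opacity described above.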
This is the technological maximum: an actual, rather than mythological, form of Vinge's technological singularity. This is not an incomprehensible future, rather a world in which perfectly prosaic devices exist, but where any particular device's design is not necessarily understood by any human being. Any one of these devices could be said to exemplify Clarke's Third Law ("any sufficiently advanced technology is indistinguishable from magic"). And we are indeed about ten years away from this point in our design and manufacturing capabilities.
The reason that this is a maximum is that each device or system can be optimized for its function. Each is the maximal solution to its particular problem. At this point, it becomes nonsense to talk about "higher" technologies. What's higher than an optimal solution?
If every machine is potentially unique, evolved for its particular role and the resources available to construct it--and is an optimum solution for its application--then technological change has reached its end, or become infinite if you want to look at it that way. Why? Because there would be no "technology" of transport, or power generation, for instance; there'd only be individual cases, each one different and unpredictable in advance. And if science is entering a stage of increasing refinement--splitting hairs on the subatomic scale--the process of discovery may never end, but new discoveries' ability to influence technology may become effectively zero. And that, potentially within our lifetime.
The enduring faith of many who believe in the singularity is that "strong AI" will result in exponential progress. To quote again from Wikipedia:
Vinge writes that superhuman intelligences, however created, will be even more able to enhance their own minds faster than the humans that created them. "When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid." This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period of time.
But is "superhuman intelligence" a coherent concept? Measures like IQ imply no maximum to intelligence. There is a maximum, however, if intelligence is defined as problem-solving ability. It's the same as the technological maximum: the point at which your solutions are optimal.
As with definitions of technology, the problem here is the use of an outmoded model of what intelligence and consciousness are. Proponents of the singularity tend to associate intelligence with greater cognitive complexity (and usually, "higher" consciousness). This is a mistake. In fact, it's the same mistake the Intelligent Design crowd makes.
Stephen Wolfram points out in A New Kind of Science that
- Some of the simplest systems in nature are capable of producing hugely complex outputs; and
- Complex systems are no better at producing complex results; on the contrary, they are often capable only of producing simple results.
What does this mean for the idea of the singularity? It means that "godlike AI" is probably a myth, but even if it existed it could not necessarily compete creatively with even the simplest physical system. Or, rather: the simplest physical system can be a godlike AI in creative terms. To nail this concept to the floor in technical terms, we could agree with Wolfram that a Class 4 cellular automaton (e.g., Rule 110) is computationally universal--its long-term behavior is undecidable--and thus maximally complex and maximally creative.
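Rule 110 is simple enough to make Wolfram's point concrete in code: the entire update rule fits in one expression (the bits of the number 110 are the rule's lookup table), yet the output of repeated application is intricately structured. A minimal sketch:

```python
def rule110_step(cells):
    """One synchronous update of an elementary cellular automaton under Rule 110.

    Each cell's next state is looked up from its 3-cell neighborhood:
    the neighborhood, read as a 3-bit number, indexes into the bits of 110.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        out.append((110 >> pattern) & 1)
    return out

width, steps = 64, 32
row = [0] * width
row[width // 2] = 1  # a single live cell as the initial condition
history = [row]
for _ in range(steps):
    history.append(rule110_step(history[-1]))

# Render the space-time diagram; the triangular, irregular structure that
# appears is the "complex output from a simple system" Wolfram describes.
for r in history:
    print("".join("#" if c else "." for c in r))
```

That the rule is this small, while its behavior is rich enough to support universal computation, is exactly the asymmetry between system complexity and output complexity claimed above.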
Where "godlike AI" could compete is not with problem-solving ability, but with problem definition--and problem definition is all about values. And why would we create AI to decide our values (problems and acceptable solutions) for us?
"Superhuman intelligence" is a squib. It fizzles upon examination. Something much more interesting replaces it, namely the idea of Edisonian AI. This is the kind of AI that dovetails with the genetic manufacturing process described above. It's the enabler of the technological maximum, but it's not conscious, and after a certain point doesn't benefit from increases in "intelligence." Its power derives entirely from its modeling capability, so what drives it is increases in the accuracy of physical theories.
In the coming age of the technological maximum, technology ceases to drive social change. The rules become reversed: change is all about values, about what we want, and ideology, religion, aesthetics and culture dominate the landscape of change.
The critical question of the future will not be "what's possible?" but "what do you want?"--and do you want it more than others competing to use the same resources?
In other words, the dominant force for change in the future will not be technology; it will be politics.