
Downloads

I've made my first novel, Ventus, available as a free download, as well as excerpts from two of the Virga books.  I am looking forward to putting up a number of short stories in the near future.

Complete novel:  Ventus

 

To celebrate the August 2007 publication of Queen of Candesce, I decided to re-release my first novel as an eBook. You can download it from this page. Ventus was first published by Tor Books in 2000, and you can still buy it; for everyone who would just like to sample my work, I hope you enjoy this version.

I've released this book under a Creative Commons license, which means you can read it and distribute it freely, but not make derivative works or sell it.

Book Excerpts:  Sun of Suns and Pirate Sun

I've made large tracts of these two Virga books available.  If you want to find out what the Virga universe is all about, you can check it out here:

Major Foresight Project:  Crisis in Zefra

In spring 2005, the Directorate of Land Strategic Concepts of National Defence Canada (that is to say, the army) hired me to write a dramatized future military scenario.  The book-length work, Crisis in Zefra, was set in a mythical African city-state about 20 years in the future, and concerned a group of Canadian peacekeepers trying to ready the city for its first democratic vote while fighting an insurgency.  The project ran to 27,000 words and was published by the army as a bound paperback book.

If you'd like to read Crisis in Zefra, you can download it in PDF form.

The Technological Maximum

My alternative to Vernor Vinge's idea of the 'technological singularity'

Vernor Vinge proposed the idea of the "technological singularity." As Wikipedia puts it, "In futures studies, a technological singularity represents an 'event horizon' in the predictability of human technological development past which present models of the future cease to give reliable or accurate answers, following the creation of strong artificial intelligence or the amplification of human intelligence."

I have a different idea, which I call the Technological Maximum.

My main problem with the singularity is that it rests on vaguely defined or frankly undefined concepts--specifically, what is technology, and is there such a thing as a generalized, quantifiable, and plottable "technological change"? The vagueness of the core premises allows proponents of the singularity to predict such things as an exponential rate of change; the fact that they can't say what that would look like is taken to be a feature of the idea, when in fact it's an indication of the dodginess of the original premise.

As to the idea that a singularity occurs when "present models of the future cease to give reliable or accurate answers," I've got news for you: present models of the future have never given reliable or accurate answers. With regard to predicting the future, it's all singularity, all the time, baby.

There is, however, one grain of truth to the idea of the singularity, and that is because we are already starting to create machines that nobody understands, using genetic algorithms. Someone could study, for example, the strange electrical circuits that have been evolved in some experiments, and figure out how they work; but if we started to apply genetic algorithms to the design of many or most devices, we would rapidly reach a point where nobody had the time and resources to figure out how the majority of new devices work. It would happen this way: you have a certain set of resources and want to build an X using those resources. Traditional designs won't work. You employ a genetic algorithm and evolve a design that does work with maximum efficiency given your resources. Build X, and repeat for any Y. (A good example is the evolved antenna used by NASA on one of its satellites.) It's by no means clear that we can design analysis software to keep up with such creativity, or that we could keep up with the reports it produced if a large number of manufactured items were being evolved in this way.
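
To make this concrete, here is a minimal sketch of that evolve-then-build loop in Python. Everything in it is invented for illustration: the "design" is just a list of segment lengths for a hypothetical antenna, and the fitness() function stands in for what would really be a full physics simulation.

    # Toy evolve-then-build loop: a "design" is a list of segment
    # lengths; fitness() is an invented stand-in for a real simulator.
    import random

    SEGMENTS = 8            # parameters per candidate design
    BUDGET = 10.0           # resource constraint: total wire length
    POP, GENERATIONS = 50, 200

    def fitness(design):
        # Invented objective: reward variation among segments while
        # penalizing designs that exceed the resource budget.
        spread = max(design) - min(design)
        overrun = max(0.0, sum(design) - BUDGET)
        return spread - 10.0 * overrun

    def mutate(design, rate=0.1):
        return [g + random.gauss(0, rate) for g in design]

    def crossover(a, b):
        cut = random.randrange(1, SEGMENTS)
        return a[:cut] + b[cut:]

    population = [[random.uniform(0, 2) for _ in range(SEGMENTS)]
                  for _ in range(POP)]

    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 5]   # keep the best 20%
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP - len(parents))
        ]

    print(max(population, key=fitness))

Notice what the loop does not produce: any account of why the winning design works. That explanatory gap, multiplied across every manufactured item, is the point.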

This is the technological maximum: an actual, rather than mythological, form of Vinge's technological singularity. It is not an incomprehensible future; rather, it is a world in which perfectly prosaic devices exist, but where any particular device's design is not necessarily understood by any human being. Any one of these devices could be said to exemplify Clarke's Third Law: any sufficiently advanced technology is indistinguishable from magic. And we are indeed about ten years away from this point in our design and manufacturing capabilities.

The reason that this is a maximum is that each device or system can be optimized for its function. Each is the maximal solution to its particular problem. At this point, it becomes nonsense to talk about "higher" technologies. What's higher than an optimal solution?

If every machine is potentially unique, evolved for its particular role and the resources available to construct it--and is an optimum solution for its application--then technological change has reached its end, or become infinite, if you want to look at it that way. Why? Because there would be no "technology" of transport or power generation, for instance; there would only be individual cases, each one different and unpredictable in advance. And if science is entering a stage of increasing refinement--splitting hairs on the subatomic scale--the process of discovery may never end, but new discoveries' ability to influence technology may become effectively zero. And that, potentially within our lifetime.

Edisonian AI

The enduring faith of many who believe in the singularity is that "strong AI" will result in exponential progress. To quote again from Wikipedia:

Vinge writes that superhuman intelligences, however created, will be even more able to enhance their own minds faster than the humans that created them. "When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid." This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period of time.

But is "superhuman intelligence" a coherent concept? Measures like IQ imply no maximum to intelligence. There is a maximum, however, if intelligence is defined as problem-solving ability. It's the same as the technological maximum: the point at which your solutions are optimal.

As with definitions of technology, the problem here is the use of an outmoded model of what intelligence and consciousness are. Proponents of the singularity tend to associate intelligence with greater cognitive complexity (and usually, "higher" consciousness). This is a mistake. In fact, it's the same mistake the Intelligent Design crowd makes.

Stephen Wolfram points out in A New Kind of Science that

  • Some of the simplest systems in nature are capable of producing hugely complex outputs; and
  • Complex systems are no better than simple ones at producing complex results; on the contrary, they are often capable only of producing simple results.

To believe otherwise is to believe the argument of Intelligent Design: that complex systems require a complex designer.

What does this mean for the idea of the singularity? It means that "godlike AI" is probably a myth, but even if it existed it could not necessarily compete creatively with even the simplest physical system. Or, rather: the simplest physical system can be a godlike AI in creative terms. To nail this concept to the floor in technical terms, we could agree with Wolfram that a Class 4 cellular automaton (e.g., Rule 110) is computationally universal--and hence undecidable--and is thus maximally complex and maximally creative.
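
To see what that claim looks like in practice, here is a short Python sketch of Rule 110. The entire update rule fits in a single byte (the number 110), yet Matthew Cook proved the automaton computationally universal; the width and step count below are arbitrary choices for display.

    # Rule 110: each cell's next state is bit n of the number 110,
    # where n is the 3-bit number formed by (left, self, right).
    RULE = 110
    WIDTH, STEPS = 64, 32

    row = [0] * WIDTH
    row[-1] = 1  # start from a single live cell

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        row = [(RULE >> (row[(i - 1) % WIDTH] * 4
                         + row[i] * 2
                         + row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]

Run it and the gliders and interacting structures characteristic of Class 4 behaviour emerge from an eight-entry lookup table: complexity out of all proportion to the complexity of the rule.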

Where "godlike AI" could compete is not with problem-solving ability, but with problem definition--and problem definition is all about values. And why would we create AI to decide our values (problems and acceptable solutions) for us?

"Superhuman intelligence" is a squib. It fizzles upon examination. Something much more interesting replaces it, namely the idea of Edisonian AI. This is the kind of AI that dovetails with the genetic manufacturing process described above. It's the enabler of the technological maximum, but it's not conscious, and after a certain point doesn't benefit from increases in "intelligence." Its power derives entirely from its modeling capability, so what drives it is increases in the accuracy of physical theories.

In the coming age of the technological maximum, technology ceases to drive social change. The rules become reversed: change is all about values, about what we want, and ideology, religion, aesthetics and culture dominate the landscape of change.

The critical question for the future will not be "What's possible?" but "What do you want?"--and do you want it more than others competing to use the same resources?

In other words, the dominant force for change in the future will not be technology; it will be politics.

About Me

I'm a Canadian author and futurist with ten published novels and twelve years of experience in strategic foresight. I write, give talks, and conduct workshops on numerous topics related to the future, including:

  • Future of government
  • Bitcoin and digital currencies
  • The workplace in 2030
  • Artificial Intelligence
  • Augmented cognition

For a complete bio, go here.

The Future of Governance

I use Science Fiction to communicate the results of actual futures studies. Some of my recent research relates to how we'll govern ourselves in the future. I've worked with a few clients on this and published some results.

Here are two examples--and you can read the first for free:

The Canadian army commissioned me to write Crisis in Urlia, a fictionalized study of the future of military command-and-control. You can download a PDF of the book here:


Crisis in Urlia

For the "optimistic Science Fiction" anthology Hieroglyph, I wrote "Degrees of Freedom," set in Haida Gwaii. "Degrees of Freedom" is about an attempt to develop new governing systems by Canadian First Nations people.


I'm continuing to research this exciting area and would be happy to share my findings.

 
Help support my writing
Science Fiction that's about something

“The most thought-provoking and interesting work of hard SF that I've read in the past year.”
—Charles Stross

"With paradigm shifts one inside another like a set of Russian dolls, this splendid novel propagates into a demolition derby of Big Ideas. Required post-human reading.”
—Scott Westerfeld, author of Uglies, Pretties, and Specials

“An astonishing saga. One helluva read!”
—Charles Harness

“Karl Schroeder has always had a knack for intelligent and provocative thought experiments disguised as space opera. Now he ups the ante with a fascinating riff on consensual [and conflicting] realities. Lady of Mazes contains more cool ideas than Ventus and Permanence combined.”
—Peter Watts