Monday, January 02, 2006

God And The Singularity

The Speculist ponders God and The Singularity, and shows us some of the possible ways people will look at the future and its implications:

1. God as Model of the Good

Ray Kurzweil lays out some challenging ideas in The Singularity is Near; perhaps none is more challenging than this passage, which concludes the chapter entitled "Ich Bin Ein Singularitarian":

"Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, infinite love, and so on. "

"Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably towards this conception of God, although never quite reaching this ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking."

This raises some interesting questions about the relationship between God and the Singularity. Just to rattle off a few...

Does the Singularity bring us closer to God?

Does God show up at the Singularity?

Are we going to somehow create God?

Are we going to somehow become God?

... before we get to the answers, let's spend some time on why we would even be talking about God in relationship to the Singularity. For starters, there's probably not a lot of overlap between theists and Singularitarians.

Devout believers tend to view the Singularity as a kind of competing eschatology, while "devout" (doesn't seem to be the right word, does it?) Singularitarians tend to be agnostics and atheists. There are exceptions, of course, but they are mostly outliers -- scientifically minded folks who have room in their worldview for an amorphous, noncommittal "spirituality," and fringe believers who are okay with making pretzels out of established doctrine (a la Tipler) in order to be able to affirm everything they want.

Those are perhaps needlessly nasty caricatures, but they get the point across. Very much to his credit, what Kurzweil seems to be presenting is a merger of both these positions, absent the cynicism and simplistic rationalizations.

A while back, during a between-session break at Accelerating Change 2005, I had the good fortune to have a chat with two prominent individuals, one a life-extension advocate, the other a thought leader on the subject of artificial intelligence. We were talking about the Singularity and the probability of a hard versus soft takeoff when suddenly we found ourselves on the topic of where this is all going in the long run.

One of us dared to suggest that God might figure into the picture, pointing out parallels between the scenario we were examining and a story from the Bible. This was immediately dismissed by another as reliance on "fiction," but the third participant suggested that the Bible story referenced should be viewed as myth, not in a pejorative sense, but as a potential source of wisdom and instruction irrespective of whether it describes something that happened historically.

This was an attempt, I believe, to establish some kind of common ground between believers and nonbelievers. And I think it's similar to what Kurzweil does above by referring to God not as an entity but rather as a collection of characteristics.

Some of the characteristics that Kurzweil mentions are things that we would normally associate with the idea of the Technological Singularity, namely:

greater complexity, greater knowledge, greater intelligence, greater creativity

...while the rest might seem a little out of place:

greater elegance, greater beauty, and subtle attributes such as love

But then again, maybe not so out of place. If we add empathy and kindness as subheadings under the "subtle attributes," what begins to emerge is something not unlike Friendly AI as defined by our friends at the Singularity Institute for Artificial Intelligence:

A "Friendly AI" is an AI that takes actions that are, on the whole, beneficial to humans and humanity; benevolent rather than malevolent; nice rather than hostile.

The evil Hollywood AIs of The Matrix or Terminator are, correspondingly, "hostile" or "unFriendly".

Arguably, a highly creative intelligence could emerge with a strong aesthetic sense and still have no empathy for us whatsoever. But I believe that if we find a way to instill a notion of beauty into an artificial intelligence, that notion will depend upon an underlying concept of goodness, which -- with any luck at all -- we will help the new intelligence to extend into the ethical as well as aesthetic sphere of thought.

So there, I believe, is the common ground that believers and Singularitarians have in exploring the relationship between God and the Singularity. Both have a keen interest in goodness. In working to bring about an emergent superhuman intelligence, the Singularitarian can find in the idea of God (or at least in some of the more prominent ideas about God) a model, a template, an ideal.

Pastorius' question: Does a concept of beauty inevitably lead to a developed moral sensibility?

A believer might counter that to attempt to create God would be the worst kind of hubristic folly, and blasphemy to boot. We'll look at these objections in greater detail later.
But no one is talking about creating God. A Christian mother who tries to instill Christ-like qualities in her children would not be accused of blasphemy...

... and if I trade the term "god-like" for "godly"? -- it seems like something we can all agree is a pretty good idea.

Technologists will see this as responsible design, akin to the safety considerations that must enter into the introduction of any new machine. Believers will see it as a moral imperative. If the new intelligence is our offspring, the imperative is to raise the child with the right values. If it is a soulless machine, the imperative is to see to it that it is used for the best ends possible.

I think this is a rather wise way of looking at the problem. Truth is, these "machines" will be melds of human mind and computer technology. Therefore, any sense of goodness with which they are programmed will, theoretically, be of human origin, and will be subject to the free will of the machine. This means that their sense of morality will evolve as time goes by, as does our moral sense currently.

Could it be that the future computer/brain technological meld will be imbued with the same varied moral sensibilities that the current human race carries with it? Could it be that they will grapple with the same dilemmas, writ large by questions of potential immortality, all-pervading surveillance, and the potential of individuals to create massive nuclear destruction?

The dilemmas of the future are coming at us with such speed and inevitability that it is almost as if the future were already with us, palpable and breathing in our faces.

We must choose wisely.