Wednesday, March 2, 2011

Is the "Singularity" Near?


Technologist Raymond Kurzweil believes that within 30 years, computers will overtake humans in intelligence, leading to the "Singularity".

At one time, in religion-dominated America, one often saw people walking the streets with placards (usually outside of movie theaters) bearing ominous words to the effect: 'The END is NEAR!' Now, according to a recent issue of TIME (Feb. 21, p. 20), the sign is more likely to read: 'The Singularity is NEAR!'

What is this "Singularity" and what does it portend? Will it be as catastrophic as the mythical "end times" we used to hear about? Or worse?

From what I gathered in the TIME article, the Singularity is the term techies give to the stage at which computers acquire an artificial intelligence equal to human intelligence. And if that's the case, the argument goes, then they may also have the capacity to acquire consciousness and even surpass humans. Shortly thereafter, the "human era" of history and dominance ends and the computer era begins. We will then begin to serve the computers, at their beck and call, much like the hapless people in the film 'Colossus: The Forbin Project'.

Chiming in on this issue is technologist Raymond Kurzweil, who argues that merely equalling humans in intelligence won't be the end of it, and "there's no reason to think computers would stop getting more powerful. They would keep developing until they are far more intelligent than we are."

He adds that the acceleration would be exponential, because at some phase the machines would take over their own development from humans. Thus, imagine a computer scientist that was itself a super-intelligent computer. It would be able to design super-high-speed, intelligent supercomputers, not to mention put Einstein to shame in arriving at novel cosmological theories. It might even have the philosophical smarts to finally prove or disprove the existence of God. Not to mention write novels far superior to any by Scott Turow, James Thurber, or whoever.

The TIME author is honest enough to admit that for most people, reading this elicits a gag reflex or alarm bells on the bunkum meter. Many others wouldn't even bother reading it, dismissing it as about as likely as 'V'-type aliens landing on Earth and carrying away all the conservatives and religious fundamentalists (Islamic, Christian and Jewish). However, as the author also points out, we'd be unwise to dismiss it, because hundreds of serious people - scientists, philosophers, psychologists, economists and even politicians - are conducting conferences about it. They believe it's not a matter of 'if' but 'when', and the year most of them give is 2045.

As the author notes:

"The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea, it's a serious hypothesis about the future of life on Earth."

Indeed, so just imagine this life, ca. 2045, for a second: humans are tailor-made (via genetic engineering) to maximize their output for the benefit of computers. When the computer orders the human to eat, he eats; when it says sleep, he sleeps. If the human tries to argue, an instant taser-like shock commands obedience. If the human wants to attend a church or even a science meeting, it will be at the computer's discretion, not by any human choice. And if the human's production levels aren't up to snuff, then very likely he is given a long time-out, doing extra chores for the computer.

Why are so many tech types drawn to this? Perhaps because of the shock value, like an intellectual freak show. Nor is it a totally novel idea. British mathematician I.J. Good, in 1965, described something very similar, to wit:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion'."

Not to be too nitpicky here, but an 'explosion' isn't the same as a singularity. That term is really borrowed from astrophysics, where it refers to the very center of a black hole, which theoretically has zero volume and hence infinite density - a point where the ordinary laws of physics can't apply. In effect, a singularity is technically a maximal collapse point.
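
To get a feel for why ordinary physics gives out at such a point, consider the density of a fixed mass as its volume shrinks toward zero. A back-of-envelope sketch (the solar mass is just a convenient example; nothing here is relativistic):

    import math

    M_SUN = 1.989e30  # mass of the Sun, in kg

    def density(mass_kg, radius_m):
        """Density of a uniform sphere: rho = M / ((4/3) * pi * r^3)."""
        return mass_kg / ((4.0 / 3.0) * math.pi * radius_m ** 3)

    # As the radius shrinks toward zero, the density diverges without bound.
    for r in (1e3, 1.0, 1e-3, 1e-6):
        print(f"r = {r:g} m -> density = {density(M_SUN, r):.3e} kg/m^3")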

Now, I hasten to add that this future depiction of humans as slaves to computers isn't necessarily the only vision. Indeed, I've already blogged on the potential for humans to merge with de facto super-quantum computers:

http://brane-space.blogspot.com/2011/02/quantum-cybernetic-man-how-soon.html

If, then, the super-intelligent computers give humanity a choice - be my slave, or let me be uploaded into your neurons - which will it be? I believe most normal humans (outside of the extreme religionists) would choose uploading. In fact, Kurzweil for his part predicts that by the mid-2020s we will successfully reverse-engineer the human brain, meaning that the neural network mapping I outlined in the blog link above would be feasible. Hence, human cybernetic beings, or what Kurzweil calls "Transcendent Man", would be a reality.

In this version of future life, cybernetic humans work in tandem and in cooperation with the super-intelligent computers, perhaps toward the goal of "terraforming" the Earth back to what it was before man-made pollution of the atmosphere and oceans ruined it for all biological organisms.

How vast will the artificial intelligence level be by the Singularity year of 2045? Kurzweil puts it at "about a billion times the sum of all the human intelligence that exists today."

Think all this is nonsense? Think again! There is already a Singularity University to study these issues, as well as a Singularity Institute for Artificial Intelligence based in San Francisco. The latter counts among its advisors Peter Thiel, a former CEO of PayPal. The Institute also holds an annual Singularity Summit.

But let's back up a bit. Might all these futurists and tech-happy folk actually be victims of hubris? Maybe technological hubris?

There are two obstacles I see to this Singularity coming true, one practical and one theoretical.

First, the practical. Peak Oil, if it hasn't commenced already, is coming within a few years. This is fact, not speculation. Many otherwise intelligent people dismiss "Peak Oilers" as roughly the same sort of types who worry over UN black helicopters and similar hokey nonsense, but they are most unwise to do so.

No less a pragmatist than oil tycoon T. Boone Pickens, quoted in The Financial Times, May 21, 2008 ('Oil Futures Near $140 Amid Fears of Shortage', page 1A), asserted that we're now at the point where demand for oil is 87 million barrels a day, while only 85 million can be produced. This is acknowledging Peak Oil by any other name. Meanwhile, in The Wall Street Journal of May 22, there appeared the article 'Energy Watchdog Warns of Oil-Production Crunch' (p. A1).

Meanwhile, from the same FT article, the IEA (International Energy Agency) forecasts as much as a 12.5 million barrel a day shortfall by 2015. That would likely send oil prices surging to well over $600 a barrel, and gasoline to at least $8 per gallon at the pump (what Europeans are now paying). Most Peak Oil trackers peg its entry threshold at around $7 a gallon.

Many people mistake 'Peak Oil' for the total cessation of oil output, which is not so. It means, literally, that a peak in global production has been reached, and that peak marked the end of "cheap oil" - oil for which the energy returned on the energy invested (EROEI) to get it was large enough, say a 20:1 ratio, to warrant a lot of low-cost energy consumption. However, as years elapse after the peak and the easy-to-get oil dwindles, less oil comes out and at much greater cost. The higher cost accrues because the oil is harder to obtain and much more energy and technology must be invested in getting it (see the push into deep ocean drilling, and while at it, check on the Deepwater Horizon disaster last year).
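
To see what that energy-returned-on-energy-invested ratio means in practice, here is a minimal sketch. Only the 20:1 figure appears above; the lower ratios are assumed stand-ins for progressively harder-to-reach oil:

    def net_energy_fraction(eroei):
        """Fraction of gross energy left as net gain after paying
        the energy cost of extraction, at a given EROEI ratio."""
        return 1.0 - 1.0 / eroei

    # 20:1 is the 'cheap oil' ratio cited above; the rest are assumed.
    for ratio in (20, 10, 5, 2):
        print(f"EROEI {ratio:>2}:1 -> {net_energy_fraction(ratio):.0%} of gross energy is net gain")

At 20:1 some 95% of the energy extracted is net gain; by 2:1, half of every barrel's energy equivalent goes right back into getting the next one.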

In more practical terms, if 2008 was the year of peak oil production (given T. Boone Pickens' numbers), then - assuming the roughly symmetric, bell-shaped production curve of Hubbert's model - worldwide oil production in 2028 will be the SAME as in 1988, so long as net output Q(net) > 0. Likewise, production in 2048 will match 1968, 2068 will match 1948, and 2088 will match 1928! All this while population is expected to reach 9 billion or more over the same period. In other words, as time goes on, the accessible oil constantly diminishes even as population - with the same demands - constantly rises.
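
The mirror-image years above follow directly from that symmetry assumption. A minimal sketch, with the curve width purely an illustrative assumption:

    import math

    PEAK_YEAR = 2008    # assumed peak year, per T. Boone Pickens' figures
    PEAK_RATE = 85.0    # million barrels/day at the peak (from the FT piece)
    WIDTH = 35.0        # curve width in years - an illustrative assumption

    def production(year):
        """Symmetric Hubbert-style curve: equal output at equal
        distances before and after the peak year."""
        return PEAK_RATE * math.exp(-((year - PEAK_YEAR) / WIDTH) ** 2)

    for past, future in [(1988, 2028), (1968, 2048), (1948, 2068), (1928, 2088)]:
        print(f"{past}: {production(past):5.1f} Mb/d  <->  {future}: {production(future):5.1f} Mb/d")

Whatever width one assumes, the symmetry guarantees production(peak - t) = production(peak + t).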

My point in all this - vividly demonstrated now as gasoline prices spike toward $3.50/gal. with the Libyan crisis (which shows how thin production margins really are, since Libyan output is barely 2% of the global total) - is that the energy needed to produce the super-intelligent computers of which the Singularitarians dream simply will not be available. Indeed, I forecast that by 2015 the oil production problem will be so serious that high-tech manufacturing will come nearly to a halt as output is prioritized: running hospitals and ERs, as well as pumping water, will matter more than constructing and manufacturing quantum super-computers.

One can also summon objections at the theoretical level. The key distinguishing feature of the super-intelligent computers said to come with the Singularity is self-replication: the ability to design and manufacture more computers like themselves, or even superior to themselves.

But as I see it, the claim of AI self-replication violates two basic principles: Gödel's Incompleteness Theorem and the principle of information entropy. Take the Incompleteness Theorem first. Basically, it asserts that any sufficiently rich structured logical system must be incomplete. That is, there exist propositions within it that are true but cannot be proven, and others that are 'undecidable' by any true-false test. One such structured logical system might be the computing code for a von Neumann machine. This may comprise some N formal statements used to specify the behavioral and other limits of the machine. The key question is whether these N coded statements, comprising the entire matrix of machine information, can successfully be replicated in another machine. This is possible only if E, the entropy (in bits/second) of the message/information source (the von Neumann machine), is less than the capacity of the information channel.[1]
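
Stated compactly - writing b for the bits per coded statement and C for the channel capacity in bits/second (C is my own symbol; the argument above leaves it implicit) - the replication criterion is:

E = N x b, and replication is possible only if E < C

The worked numbers below then simply ask whether an (N+1)-statement message still fits under that capacity.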

For the sake of argument, let 10^10 bits be associated with each statement in a von Neumann code (program), so that each machine's code amounts to an entropy E = N x 10^10 bits/second. Here is the problem: all the statements in the code concern behavior per se - what the von Neumann machine is to do. They include nothing about replicating itself! The replicative statement therefore exists outside the code - it is a meta-statement! One requires at minimum an entropy:

E' = (N+1) x 10^10 bits/second

We can see that E' > E; so if the channel is matched to the original code's entropy E, the replication message cannot be transferred - its entropy exceeds the information channel's capacity. Phrased another way, the statement for replication arises from outside the system of code, not from within its instructions. In other words, the command 'Replicate!' may be true, but it is unprovable within the Gödelian loop to which all logical systems are constrained. And so long as it is unprovable, it will not be executed by any von Neumann machine! In effect, there can be no 'self-reproducing universal constructor with human-level intelligence'.
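
To make the counting argument concrete, here is a toy sketch in Python. Only the 10^10 bits-per-statement figure comes from the text above; N and the channel capacity C are illustrative assumptions, with C chosen to sit between E and E' (i.e. a channel matched to the original code):

    # Toy version of the entropy argument. Only the per-statement figure
    # (10^10 bits) comes from the text; N and C are assumed for illustration.
    BITS_PER_STATEMENT = 10**10      # bits per coded statement (from the text)
    N = 1000                         # number of statements in the code (assumed)

    E = N * BITS_PER_STATEMENT              # entropy of the behavioral code
    E_prime = (N + 1) * BITS_PER_STATEMENT  # code plus the meta-statement 'Replicate!'

    C = E + BITS_PER_STATEMENT // 2         # assumed capacity: E < C < E'

    print(f"E  = {E:.3e} bits/s -> fits channel: {E <= C}")
    print(f"E' = {E_prime:.3e} bits/s -> fits channel: {E_prime <= C}")

On these assumptions the original N-statement code transfers, but the (N+1)-statement replication message does not.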

Sure, there is no doubt in my mind that computers will become much faster and even "smarter", but there are certain parameter bounds on going beyond that, starting with the energy limits imposed on their construction (via Peak Oil and energy constraints), as well as the theoretical limits imposed by information entropy and the Gödelian Incompleteness Theorem. Given these, I'd personally worry about being subjugated by AI computers about as much as I'd worry about being abducted by extraterrestrials during my next jog!



[1] Information channel: the medium through which information can pass, measured in bits per second. The information capacity is, roughly, the genuine information I (in bits per second) less the wasted information, or entropy H - that is, I - H.
