Sunday, December 14, 2014

Stephen Hawking Warns of 'the Rise of the Machines' - Should We Take Heed?

Prof. Stephen Hawking, in his latest fretful sortie, has warned that artificial intelligence "could outsmart us all" and take over unless we have the foresight and sense to "establish colonies on other worlds" in order to escape what he sees as a near technological certainty. Hawking went on to elaborate, saying that AI "could become a real danger in the not too distant future" if it became capable of designing improvements to itself.

The problem is that human biological advancement is limited by the rate of DNA change - say in complexity - and the next shift won't come for another 18 years or so. Meanwhile, by Moore's law, computers double their speed and capacity every 18 months - that's twelve doublings, or a factor of roughly 4,000, over those same 18 years. In other words, poor humans are left in the machines' dust.

But is this really something to worry about?

Well, on the affirmative side, researchers at the Oxford Martin School at Oxford University have warned governments to plan for the risk of "robo-wars", in which autonomous robotic weapons - guided by AI - can identify and kill targets, including human ones, without any human intervention.

Of course, this can't fully be achieved until the details of quantum computing are finally nailed down, and we still have a long way to go - especially in resolving the "entanglement" problem, whereby the internal quantum bits (qubits) become entangled with, and disturbed by, entities outside the system. Until there is some measure of control here, workable quantum computers - let alone human quantum cyborgs - will remain largely a pipe dream.

In quantum computing, one confronts the quantum bit, or qubit, as opposed to the ordinary bit. The latter may be 1 or 0, but the former can be 1 AND 0 at the same time. The linkage of 1 and 0 is then a superposition of states, i.e. U = U(1) + U(0). Thus, in its superposed state a qubit exists as two equally probable possibilities. According to one hypothesis from physicist David Deutsch, the qubit is "operating in two slightly different universes at the same time" - one, U(1), in which it's 1, and another, U(0), in which it's 0. In Deutsch's parlance, "it's the first technology to allow useful tasks to be performed in collaboration between parallel universes."

Most noteworthy here: if a qubit can be in two states at one time, it can perform two computations at the same time. Two qubits could then perform four simultaneous computations, three could perform 2^3 = 8, and so on - in general, n qubits span 2^n possibilities at once.
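To make that scaling concrete, here is a minimal sketch in Python with numpy - my own illustration, not anything a real quantum device runs - that builds a register of superposed qubits and counts the basis states it spans. The state vector doubles in size with every qubit added.

import numpy as np

# Single-qubit basis states |0> and |1> as vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Equal superposition of one qubit: U = U(1) + U(0), normalized.
plus = (ket0 + ket1) / np.sqrt(2)

# Build an n-qubit register of superposed qubits via the Kronecker product.
def superposed_register(n):
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)
    return state

for n in (1, 2, 3):
    state = superposed_register(n)
    print(f"{n} qubit(s): {state.size} simultaneous basis states, "
          f"each with probability {abs(state[0])**2:.3f}")
# 1 qubit(s): 2 states, 2 qubit(s): 4 states, 3 qubit(s): 8 states ...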

The closest thing humanity has to a quantum computer right now is located in a burg called Burnaby, directly east of Vancouver, B.C. Burnaby is the headquarters of D-Wave Systems, maker of the D-Wave Two, of which there are now five in existence. At its heart is a niobium computer chip chilled to 20 millikelvins, about minus 459.6 degrees Fahrenheit - colder even than the Boomerang Nebula, the coldest known natural place in the universe. This extreme cold is essential to minimize external interactions, which become more likely as the temperature rises.

As fancy and formidable as the D-Wave Two might be, it's nowhere near ready to act as a quantum "mind machine" or, indeed, as any basis for artificial intelligence. In many ways quantum computers are a solution still looking for the right problem. The bugbear? Existing so-called quantum computers have to be maximally isolated, like the D-Wave Two in Burnaby. No information can be allowed to escape, because any interaction with the outside world will cause errors to creep into the calculations. Even the most basic computations are complicated further by the fact that the qubits still have to be controlled while remaining in that isolated state.

Not surprisingly, quantum computer mavens and techies have had to get around these problems by compromise. Enter what's called an adiabatic quantum computer, which works by means of a process called quantum annealing. Basically, the qubits are linked together by couplings. These couplings are then programmed using a special algorithm that specifies certain interactions between the qubits (if this one is a 1, then that one has to be a 0, and so forth). The qubits are then put into a state of quantum superposition, in which they're free to explore all 2^n possibilities simultaneously. They are then allowed to settle back into a 'classical' state and become separate 1s and 0s again. The qubits naturally seek out the lowest energy state consistent with the requirements specified in the original algorithm, and the answer can be read off from the final arrangement of qubits.
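For a feel of what "settling into the lowest energy state" means, here is a purely classical toy analogue in Python - ordinary simulated annealing over a few coupled bits. The couplings and cooling schedule are invented for illustration and have nothing to do with D-Wave's actual hardware or programming interface; the point is only that the system relaxes into the lowest-energy assignment of 1s and 0s consistent with the couplings.

import random
import math

# Couplings encode constraints of the form "if this is 1, that should be 0".
# The values below are invented purely for illustration.
J = {(0, 1): +1.0, (1, 2): +1.0, (0, 2): -0.5}

def energy(bits):
    # Ising-style cost: lower energy means fewer violated couplings.
    return sum(j * (2 * bits[a] - 1) * (2 * bits[b] - 1) for (a, b), j in J.items())

def anneal(n_bits=3, steps=5000, t_start=2.0, t_end=0.01):
    bits = [random.randint(0, 1) for _ in range(n_bits)]
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # cooling schedule
        trial = bits[:]
        trial[random.randrange(n_bits)] ^= 1                # flip one bit
        dE = energy(trial) - energy(bits)
        if dE < 0 or random.random() < math.exp(-dE / t):
            bits = trial                                     # accept downhill moves, sometimes uphill
    return bits, energy(bits)

print(anneal())   # settles into a low-energy assignment of 0s and 1s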

But this is nowhere near adequate to design even a laughable version of AI. The hang-up? The adiabatic quantum computer can only solve one class of problem, which goes by the moniker "discrete combinatorial optimization". This type of problem entails finding the best (optimal), shortest, fastest, cheapest or most efficient way of doing a task. For example, say a European traveler wishes to visit Paris, London, Berlin, Zürich and Rome all in one week with the cheapest transport feasible, but also the highest quality he can buy. How does he do it? Quantum annealing computers can provide an answer.
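In code, the traveler's problem looks something like the brute-force sketch below (Python, with fares I have simply made up for illustration). An annealer would encode the same cost function in its qubit couplings rather than enumerate every itinerary, but the object of the search - the cheapest ordering of a discrete set of choices - is the same.

from itertools import permutations

# Hypothetical one-way fares in euros between city pairs - numbers invented for illustration.
fare = {
    ("Paris", "London"): 90,   ("Paris", "Berlin"): 120,  ("Paris", "Zurich"): 80,
    ("Paris", "Rome"): 140,    ("London", "Berlin"): 110, ("London", "Zurich"): 130,
    ("London", "Rome"): 150,   ("Berlin", "Zurich"): 70,  ("Berlin", "Rome"): 100,
    ("Zurich", "Rome"): 60,
}

def cost(a, b):
    return fare.get((a, b)) or fare[(b, a)]   # fares assumed symmetric

cities = ["London", "Berlin", "Zurich", "Rome"]

# Brute force: try every ordering of the remaining cities, starting from Paris.
best = min(
    (["Paris"] + list(order) for order in permutations(cities)),
    key=lambda route: sum(cost(a, b) for a, b in zip(route, route[1:])),
)
print(best, sum(cost(a, b) for a, b in zip(best, best[1:])))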

But it can't even play a credible "imitation game" - the basis of the Turing problem, formulated by computer whiz Alan Turing and touched on in the new movie, 'The Imitation Game'. The question arises: can a machine be constructed that sends signals to a human such that the latter can't detect that the 'sender' is a machine? Ideally, the machine earmarked for this job ought to respond so that the human finds it difficult, if not impossible, to uncover the scam.

In fact, Turing first proposed a variation on the above - assuming signals from an indefinitely long tape mechanism - as a low-level test of consciousness. His basis for giving the machine a 'pass' rested on whether it bested a human in a high-level skill game all of the time (the indefinitely long tape is tied to what is called the 'halting problem'). Realistically, I surmise no real progress will be made toward resolving the Turing problem, or toward real AI, until a whole new quantum design arrives that is not limited to solving only discrete combinatorial optimization.

My insight, elaborated fully in my book (Beyond Atheism, Beyond God), is that if one incorporates logic gates based on Pauli spin operators, much of the problem can be solved; the entry of novelty and emergence is then enabled. For example, the NOT gate can be represented by what is called a unitary matrix, or Pauli spin matrix-operator, σ_x =

[ 0  1 ]
[ 1  0 ]

Similarly:

σ_y =

[ 0  -i ]
[ i   0 ]

σ_z =

[ 1   0 ]
[ 0  -1 ]
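As a sanity check on the claim that σ_x behaves as a NOT gate, here is a short numpy sketch - my own illustration, not code from the book - showing it flipping the basis states and acting on a superposition:

import numpy as np

# Pauli spin matrices written out as numpy arrays.
sigma_x = np.array([[0, 1], [1, 0]])     # the quantum NOT gate
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

ket0 = np.array([1, 0])   # |0>
ket1 = np.array([0, 1])   # |1>

print(sigma_x @ ket0)     # -> [0 1], i.e. |1>: NOT flips 0 to 1
print(sigma_x @ ket1)     # -> [1 0], i.e. |0>: NOT flips 1 to 0

# Applied to a superposition U = U(0) + U(1), the gate acts on both branches at once.
superposed = (ket0 + ket1) / np.sqrt(2)
print(sigma_x @ superposed)   # unchanged overall: the |0> and |1> branches simply swap roles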

Incorporation of such Pauli (quantum) gates meets a primary application requirement for feed-forward networks in describing synapse function (see e.g. Yaneer Bar-Yam, 'Dynamics of Complex Systems', Addison-Wesley, pp. 298-99). The closer computer "synapse" functions approximate the actual human counterpart, the greater the convergence toward genuine artificial intelligence.
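For comparison, the classical feed-forward "synapse" being approximated is nothing more exotic than a weighted sum passed through a threshold. The sketch below - weights and inputs invented purely for illustration, and entirely classical, no Pauli gates involved - shows the kind of function a quantum gate network would have to reproduce:

import numpy as np

# A minimal classical feed-forward "synapse": weighted inputs summed and thresholded.
def synapse(inputs, weights, threshold=0.0):
    activation = np.dot(inputs, weights)
    return 1 if activation > threshold else 0   # fire / don't fire

inputs = np.array([1, 0, 1])
weights = np.array([0.6, -0.4, 0.3])
print(synapse(inputs, weights))   # -> 1 (the weighted sum 0.9 exceeds the threshold)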

My estimate is that it will be at least another 100 years before such Pauli spin logic gates can be effectively integrated into quantum computers to pave the way for genuine AI. By that time, yes, if humans are smart enough, they will have colonized other planets - so even if the super machines take over Earth, there will be humans out there who won't be enslaved by them!
