Friday, February 28, 2014

Why robots will not be smarter than humans by 2029

In the last few days we've seen a spate of headlines like "2029: the year when robots will have the power to outsmart their makers", all occasioned by an Observer interview with Google's newest director of engineering, Ray Kurzweil.

Much as I respect Kurzweil's achievements as an inventor, I think he is profoundly wrong. Of course I can understand why he would like it to be so - he would like to live long enough to see this particular prediction come to pass. But optimism doesn't make for sound predictions. Here are several reasons that robots will not be smarter than humans by 2029.

  • What exactly does as-smart-as-humans mean? Intelligence is very hard to pin down. One thing we do know is that intelligence is not a single quantity that humans or animals have more or less of. Humans have several different kinds of intelligence - all of which combine to make us human. Analytical or logical intelligence of course - the sort that makes you good at IQ tests. But emotional intelligence is just as important, especially (and oddly) for decision making. So is social intelligence - the ability to intuit others' beliefs, and to empathise.
  • Human intelligence is embodied. As Rolf Pfeifer and Josh Bongard explain in their outstanding book How the Body Shapes the Way We Think, you can't have one without the other. The old Cartesian dualism - the dogma that robot bodies (the hardware) and minds (the software) are distinct and separable - is wrong and deeply unhelpful. We now understand that the hardware and software have to be co-designed. But we really don't understand how to do this - none of our engineering paradigms fit. A whole new approach needs to be invented.
  • As-smart-as-humans probably doesn't mean as-smart-as-newborn-babies, or even two-year-old infants. The Kurzweilians probably mean somehow-comparable-in-intelligence-to adult humans. But an awful lot happens between birth and adulthood. And they probably also mean as-smart-as-well-educated-humans. But of course this requires both development - a lot of which somehow happens automatically - and a great deal of nurture. Again we are only just beginning to understand the problem, and developmental robotics - if you'll forgive the pun - is still in its infancy.
  • Moore's Law will not help. Building human-equivalent robot intelligence needs far more than just lots of computing power. It will certainly need computing power, but that's not all. It's like saying that all you need to build a cathedral is loads of marble. You certainly do need large quantities of marble - the raw material - but without (at least) two other things, the design for a cathedral and the know-how to realise that design, there will be no cathedral. The same is true for human-equivalent robot intelligence.
  • The hard problem of learning and the even harder problem of consciousness. (I'll concede that a robot as smart as a human doesn't have to be conscious - a philosophical-zombie-bot would do just fine.) But the human ability to learn, then to generalise that learning and apply it to completely different problems, is fundamental, and it remains an elusive goal for robotics and AI. This capability is generally called Artificial General Intelligence, and it remains as controversial as it is unsolved.

These are the reasons I can be confident in asserting that robots will not be smarter than humans within 15 years. It's not just that building robots as smart as humans is a very hard problem. We have only recently begun to understand how hard it is - well enough to know that whole new theories (of intelligence, emergence, embodied cognition and development, for instance) will be needed, as well as new engineering paradigms. Even if we had solved these problems, and a present-day Noonian Soong had already built a robot with the potential for human-equivalent intelligence, it still might not have enough time to develop adult-equivalent intelligence by 2029.

That thought leads me to another reason that it's unlikely to happen so soon. There is - to the best of my knowledge - no very-large-scale multidisciplinary research project addressing, in a coordinated way, all of the difficult problems I have outlined here. The irony is that there might have been. The project, called Robot Companions, made it onto the EU FET 10-year Flagship project shortlist but was not funded.


16 comments:

  1. Fully agree. One day they might sort the 'Hard Problem' of consciousness, or even acknowledge it rather than skate by it. Google and others seem to see it as just another little data-mining problem: "Done them. How hard can it be?"
    Ray Kurzweil does not seem like an engineer to me. According to Wikipedia he has a BS in computer science; his contribution was clever coding, with other firms making his scanner hardware [was it Texas or HP?].
    My engineering blog [http://isambardkingdom.com] has had a go at this ['How many engineering ages?' http://bit.ly/1gyDUOU and 'The name of the engineer' http://bit.ly/1dL9AO1]
    I am annoyed at Google and other software giants commandeering the name of 'engineer' because they think it has a certain ring to it and also because they can.
    You may not agree of course, but that's OK. I'd like to hear your thoughts on this. I've given you a follow on Twitter. I tweet as @isambardslad

    Replies
    1. Many thanks for your comment. I have no issue with Google appointing directors of engineering. Their core technology is after all software engineering - and very good it is too.

      What puzzles me is why so many good people, including within the AI community, think AGI is about to be solved. Maybe Google see it as a data-mining problem. I don't know. But one thing I feel sure of is that there will be no silver bullet that cracks AGI. I don't think it's a reducible problem.

    2. Great to see an engineer taking philosophical aspects seriously. AGI is the big one.
      On your first point, I see the very term 'software engineering' as a category mistake. It is coding and not engineering [second link in my previous comment]. In my blog I proposed that they are 'logicians', not engineers. Don't get me wrong, there are engineers in IT - such guys as those who design hard drives and chips - geniuses AND engineers.

    3. Sorry, but I 100% disagree with you on software engineering. It's a discipline and profession, with very well-regarded journals, professors etc, and coding is only a small part of software engineering. I led a number of large SE projects in safety-critical systems, and worried about validation, quality assurance etc, all part of software engineering.

    4. Engineering is making use of natural forces in the Universe (see http://bit.ly/1dL9AO1). Writing software or coding is an exercise in logic. All the checks such as validation and QA etc are there to validate a complicated chain of logic, and are vital in systems such as the Sizewell B control systems. That still does not make coding engineering. Just because journals and professors assert that it is so does not make it so.

    5. Of course you are entitled to your opinion and, as a Chartered Engineer, I respect your strong feelings on the matter. The fact remains that engineering is broadly defined by, for instance, the UK Engineering Council - the British Computer Society is a member institution licensed to award Chartered Engineer status.

      I would also add that electronic and mechanical engineering are just as much to do with the application of logic and mathematics as they are with materials and forces.

    6. Guess we should leave it there. I feel guilty anyway, coming on strong like this on what is, after all, your blog: it's as though I came into your house leaving muddy footprints on the carpet. I still admire your topics enormously and hope to continue following you on the blog and on Twitter.

  2. After all that, I forgot to say the main thing. I cannot pose as a CEng, even by a sin of omission. I am not one; I began my engineering design career before it was essential.

  3. Great post, many good points!
    Kurzweil's reliance on computational power really seems to be a big mistake (hopefully it's not stopping him from thinking further), and I also think an AI would probably have to have a human-like body and to live through a "youth" to be able to live in a world that at least has some overlap with how we see our world (these are mainly practical problems, I'd say).
    I am one of those people who are more optimistic about learning and consciousness though. I think consciousness is actually a condition for (embedded) learning, as it seems to split the world into you/stuff you're responsible for and the rest/stuff you're not responsible for (so, actual feedback for "your" actions and non-feedback).
    So far, the closest thing in AI/machine learning is the difference between signal and noise, I guess, but when the AI is embedded in a dynamic world, it really needs a more explicit sense of self. It's hard to tell how difficult that will be exactly, but I'm optimistic, because I think a lot of people are scratching the surface already (roboticists, companies like Google and IBM, etc).

  4. Robots become smarter only if a human makes them so. To make a robot smarter, the maker should be even smarter, so indeed robots will not be smarter than humans.

    Replies
    1. I don't think so.

      Kurzweil has tackled this problem extensively in his books. You don't need the invention process to be smarter. Evolution is not that smart, yet it created human intelligence. So why can't human intelligence, at scale, create artificial intelligence?

  5. Effectively, all that we learn about the ways of nature seems to suggest that our engineering paradigms - indeed, the very way our conscious mind works - are at odds with it. From biological molecules that have dozens of functions to almost indescribable feedback paths, nature seems to have little or no use for our "narrative" approach to things.

    It seems just as likely that our minds, at their base, follow the same pattern... are made out of highly non-linear relations and processes, with the rational mind we are so proud of being just a fairly simple, emergent process.

    On the other hand, while the difficulties of AGI are overwhelming, I am not sure that we could not "find" some kind of intelligence.

    It's just improbable that we will engineer it; more likely we'll obtain it through some kind of "mindless" evolution.

    It would be a funny moment: creating a kind of intelligence we do not really understand, through the effort of our own - equally beyond understanding - intelligence.

  6. Regarding the last paragraph: what is your opinion of the effort undertaken by Ben Goertzel and the open-source project OpenCog (http://opencog.org)?

  7. As a biological scientist with a lot of training in the physical sciences, I think people haven't even begun to scratch the surface of the processing power of neurons, let alone the brain. Many of these Moore's Law arguments seem to be based on treating all of the processing as existing between neurons, and on the assumption that neurons need to be actively sending signals to create thought/consciousness. This is true to an extent, but neuronal signaling is not like a transistor being on and off in a logic circuit. Cells themselves may be doing quite a bit of "processing" in ways we have not even predicted, and with all of the varying proteins on a neuron, we can imagine a neuron storing many more states than 0/1. Also, few people seem to be considering the possibility that a neuron can process multiple kinds of signals simultaneously. We have a decent understanding of action potentials, but there are other slower-moving signals and chemical changes which are distinct from an action potential and which could in theory allow other processing simultaneously. So many questions need answering.

    Also, I question the idea that many of these fancy learning algorithms represent true learning. I suspect you will find that when faced with certain problems these machines keep falling into a set of common approaches and fail to adapt. (People do that as well, but the dynamics of the brain allow us to reconstruct pathways in ways we don't anticipate.)

    Replies
    1. Fascinating. However, I suspect you are wrong to assume that, because human brains are so multifarious in their processing, a binary-state computer with simply awesome raw power will not out-perform us someday soon. I also think learning is not that much of a mysterious process - in fact us humans are particularly bad at it (probably because of the fog created by our complicated brain architecture).

      It seems quite plain to me that computers will outstrip us in every way quite soon - except I don't expect AI to develop consciousness. Consciousness is probably just a useful illusion created by evolution. What better way to ensure the survival of a species than to evolve consciousness so that each individual operates under the misapprehension that its own life matters? Computers will not ever need to think in that way.

      But when it comes to intelligence, prepare to be demoted to the position that chimps currently occupy in the brain charts! Look on the bright side: it'll give us an excuse to start chucking our poo about the place, and that looks like fun.
