
On the verge of machine learning and machine intelligence

Written by Ben van Lier - 6 May 2015

In the film Transcendence, the character played by lead actress Rebecca Hall suggests that “intelligent machines will soon allow us to conquer our most intractable challenges”. In the film, a human mind is uploaded to a collection of distributed computers, and the collective thus created can learn far more quickly than would be possible among humans.

From fiction to real life
The question as to whether machines can learn to think independently has occupied scientists and the corporate world for decades. The first formal considerations in this area can be attributed to the British mathematician Alan Turing, whose life formed the basis for the splendid film ‘The Imitation Game’. Turing’s ideas now seem to be slowly but surely coming to fruition in real technological applications from Google, IBM, Facebook and Microsoft, among others.

Ahead of his time
In a 1950 article, Alan Turing [1] proposed conducting a scientific investigation into the fundamental principles required for building a machine which could learn independently. In the same article, however, Turing warned that “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour”. Turing’s ideas were far ahead of his time. Not only did he posit the possibility of constructing a machine which could learn independently, and could learn to think; he also recognised that, when observing the results of this learning, humans would not be fully able to work out exactly how the learning actually occurred within the machine. The suggested research was never carried out, owing to his tragic death in 1954.

Follow-up to the project
In 1955 the American John McCarthy [2] proposed a summer research project at Dartmouth College. McCarthy’s proposal, co-signed by Minsky, Rochester and Shannon, suggested a two-month study conducted by ten people, “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. Now, sixty years and several evolutionary stages of Artificial Intelligence later, we are gradually starting to approach what these scientists intended.

Machine learning stimuli
Kevin Kelly [3] believes that the rapid developments in machine learning we have experienced in recent years have been enabled by three major advances. The first is the increasing processing power of ‘parallel computing’, which makes it possible to calculate and process growing volumes of data simultaneously in parallel processes; in doing so, neural networks start to bear a growing resemblance to the processing abilities of the human brain. The second is the almost inconceivable quantity of data and information about our world (and human behaviour within it) that has been collected and made available in recent years. With this data and information available, we can enhance the learning ability of machines built on neural networks with multiple learning layers; run on parallel hardware, such a network can accelerate its learning enormously. Finally, Kelly names the improved algorithms stemming from the research work of Geoff Hinton of the University of Toronto. The new and greatly improved algorithms he developed in 2006 make it possible for machine learning to occur at increasing speed and with increasing effectiveness, by using the outcomes of one learning layer more effectively for the next one. The combination of these three breakthroughs in machine learning has led, in Kelly’s view, to a “perfect storm of parallel computation, bigger data, and deeper algorithms which have generated the 60-years-in-the-making overnight success of AI. And this convergence suggests that as long as these technological trends continue, and there's no reason to think they won’t, AI will keep improving”.
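To make the idea of stacked learning layers a little more concrete, here is a minimal sketch in plain Python/NumPy (my own illustrative example, not Hinton’s actual 2006 algorithm): two layers in which the output of the first becomes the input of the second, trained with ordinary gradient descent on the small XOR problem that a single layer cannot solve.

```python
# Minimal two-layer network: each layer learns from the output of the one before it.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem, which a single learning layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two stacked layers: a hidden layer of 8 units and an output layer of 1 unit.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)   # first learning layer
    p = sigmoid(h @ W2 + b2)   # second layer builds on the first

    # Backward pass: the error flows from the last layer back to the first.
    grad_p = (p - y) / len(X)
    grad_z2 = grad_p * p * (1 - p)
    grad_h = grad_z2 @ W2.T
    grad_z1 = grad_h * h * (1 - h)

    # Gradient-descent update of both layers.
    W2 -= lr * (h.T @ grad_z2); b2 -= lr * grad_z2.sum(axis=0)
    W1 -= lr * (X.T @ grad_z1); b1 -= lr * grad_z1.sum(axis=0)

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```

The point is only the structure: each layer transforms its predecessor’s output and the error signal flows back through both layers at once; deep-learning systems scale this same pattern to many layers and vast amounts of data on parallel hardware.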

AI in practice
Based on these possibilities, Google recently announced, for example, that its Deep Q-Network has succeeded in teaching a computer to play the Atari game Breakout, independently. The program learned the game’s possibilities on its own, without any background information on the game itself; the software only had access to the score and the pixels on the screen. As Jack Clark [4] suggests: “Google has created the computer equivalent of a teenager: an artificial intelligence system that spends all of its time playing and mastering video games.” Similar developments have now also been announced by Microsoft. Hernandez [5]: “According to Microsoft, Adam is twice as adept as previous systems at recognising images, including, say, photos of a particular breed of dog or a type of vegetation, while using 30 times fewer machines.” Facebook has been working on similar developments with its DeepFace project, notes Nordrum [6], who continues: “Facebook’s developers are in a race against other major technology companies, including Google, to create the fastest and most sophisticated systems, not only for facial recognition but also for a whole suite of products built on the tenets of artificial intelligence.” Another example is IBM, which is developing a deep-learning machine in the shape of its Watson project. Murphy [7]: “What separates Watson from the average computer is its ability to find patterns in vast amounts of data.”
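To give a feel for what ‘learning from only the score and the pixels’ means, the toy sketch below shows the Q-learning update rule that underlies such systems. The five-state environment is entirely hypothetical and my own invention; DeepMind’s actual Deep Q-Network replaces this small table with a deep neural network that reads the raw screen pixels.

```python
# A toy Q-learning sketch (my own hypothetical example, not Google's code):
# the agent only ever sees a state and a score (reward), and gradually learns
# which action yields the highest long-term score.
import random

n_states, n_actions = 5, 2              # tiny made-up environment
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Hypothetical game: moving 'right' (action 1) eventually pays off."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0   # the only feedback: a score
    return next_state, reward

for episode in range(500):
    state = 0
    for _ in range(20):
        if random.random() < epsilon:
            action = random.randrange(n_actions)                        # explore
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])   # exploit
        next_state, reward = step(state, action)
        # Core update: nudge the estimate towards reward + discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# The learned policy: the preferred action in each state (here: keep moving right).
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)])
```

Note that the agent never receives any description of the rules: it only learns, by trial and error, which actions tend to raise the expected score.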

Discussion
The rapid evolution of machine learning and machine intelligence has already led to international discussion, with participants including Stephen Hawking, Bill Gates and Elon Musk. In these discussions, as Danks puts it, concerns of a philosophical nature are raised about this evolution. These concerns focus mainly on the future relationship between man and the increasingly intelligent machine, but also on the question: “What is learning, and how do you determine that a machine can learn?” For instance, Danks [8] writes: “Machine learning algorithms perform complex, but clearly specified sequences of computations, and so questions arise about whether the methods qualify as learning or whether the assumptions necessary for the inductive inference can be suitably tested.” And that brings us right back to Turing’s original question.

Awareness
On the one hand, this still leaves us very far removed from what Bostrom [9] has defined as superintelligence: “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” On the other hand, there is Turing’s secondary question, namely whether we as humans are able to determine who or what is learning in a world where people and machines are interlinked in networks. In these networks, people and machines will increasingly learn on the basis of the available data and information; based on the outcomes of this learning, they will also communicate and interact more, and will thus become more intelligent. The question is how aware we are of the developments sketched here, the impact they will have on our daily lives and work, and the degree to which we need to prepare for the changes this entails for people, organisations and society.

Ben van Lier works at Centric as Director Strategy & Innovation and, in that role, is involved in research into and analysis of developments at the intersection of organisation and technology within the various market segments.

[1] Turing A.M. (1950) Computing Machinery and Intelligence. Mind, volume 59, pp. 433-460.
[2] McCarthy J., Minsky M.L., Rochester N. and Shannon C.E. (1955) A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955.
[3] Kelly K. http://www.wired.com/2014/10/future-of-artificial-intelligence/
[4] Clark J. http://www.bloomberg.com/news/articles/2015-02-25/google-s-computers-learn-to-play-video-games-by-themselves
[5] Hernandez D. http://www.wired.com/2014/07/microsoft-adam/
[6] Nordrum A. http://www.ibtimes.com/facebook-artificial-intelligence-companys-ai-chief-explains-how-he-tags-your-photos-1859128
[7] Murphy M. http://qz.com/381226/ibms-ai-supercomputer-has-come-up-with-some-pretty-incredible-food-pairings/
[8] Danks D. (2014) Learning. In The Cambridge Handbook of Artificial Intelligence, eds. Frankish K. and Ramsey W.M. Cambridge University Press, Cambridge. ISBN 9780521691918, p. 165.
[9] Bostrom N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford. ISBN 9780199678112, p. 54.

       
Comments
  • Thingks
    Henri Koppen
    6 May 2015
    Nice write-up Ben. Good to read with nice references.
    However, it may be a bit misleading to use AI and machine learning without explaining how AI is different from ML.

    With ML algorithms there's no awareness or intelligence whatsoever. It has nothing to do with thinking computers. As you write in the Breakout example, the computer does not understand the game.

    With ML you can have perfect modern Chinese translations of everything you throw at it, but the system will not understand Chinese at all.

    For years we searched for AI and (human) intelligence and how to create it; now we are getting results, but in a different way than we expected. We still have "dumb" systems with no mind, but we can still get great results.

    The fear Elon Musk spreads is not justified in my opinion.

    What I do agree with is that compute power and parallel computing lead to practical benefits.

    Nonetheless, nice to read and good structure. Now the question is: how will Centric apply these new possibilities?
  • Centric
    Ben van Lier
    7 May 2015
    Dear Henri,
    Thanks for your nice response. I can agree with you that terms like awareness, intelligence or consciousness are difficult scientific concepts. Despite this, I believe that in the years to come the Chinese Room argument based on the ideas of John Searle will be overcome. For the moment, I have stuck to what was written by Turing himself: “the extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind as by the properties of the object under consideration”.