
On deep learning, artificial neural networks, artificial life, and good old-fashioned AI

At a theoretical level, the concept of artificial intelligence has fueled and sharpened the philosophical debates on the nature of the mind, intelligence, and the uniqueness of human beings. Insights from the field have proved invaluable to biologists, psychologists, and linguists in helping to understand the processes of memory, learning, and language. Today, we’re continuing our Q&A with Maggie Boden, Research Professor of Cognitive Science at the University of Sussex and one of the best-known figures in the field of Artificial Intelligence. In this second part of the Q&A, Maggie answers four more questions about this developing area.

What are artificial neural networks (ANNs)?

ANNs are computer systems made of a large number of interconnected units, each of which can compute only one (very simple) thing. They are (very broadly) inspired by the structure of brains.

Most ANNs can learn. They usually do this by changing the ‘weights’ on the connections, which makes activity in one unit more or less likely to excite activity in another unit. Some ANNs can also add/delete connections, or even whole units. So ANNs can (sometimes) be evolved, not meticulously built.
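To make the weight-changing idea concrete, here is a minimal sketch in Python (not drawn from the interview): a single unit learns by nudging its connection weights so that its inputs become more or less likely to excite it, using the classic perceptron rule. The AND task, learning rate, and number of passes are illustrative assumptions.

```python
def step(x):
    """Threshold activation: the unit either fires (1) or stays quiet (0)."""
    return 1 if x > 0 else 0

def train_unit(examples, n_inputs, epochs=20, rate=0.1):
    """Adjust connection weights so the unit reproduces the labelled examples."""
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            output = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - output
            # Strengthen or weaken each connection according to the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Learn the logical AND function from labelled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_unit(examples, n_inputs=2)
print(weights, bias)
```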

Some learn by being shown examples (labelled as being instances of the concept concerned). Others can learn simply by being presented with data within which they find patterns for themselves. Sometimes, the human researchers weren’t aware that these patterns were present in the data. So ANNs can be very useful for data-mining.
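As a toy illustration of ‘finding patterns for itself’, here is a short Python sketch of k-means clustering, one unsupervised technique among many (the choice of algorithm and the data are assumptions, not taken from the text). The data points carry no labels, yet two groupings emerge.

```python
def kmeans(points, k=2, steps=10):
    """Group unlabelled numbers around k centres (naive 1-D k-means)."""
    centres = points[:k]  # naive initialisation: first k points
    for _ in range(steps):
        # Assign each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centres[i]) ** 2)
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]   # two obvious groups, but unlabelled
centres, clusters = kmeans(data)
print(centres)   # roughly [1.03, 8.07]: the system found the groups itself
```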

What is deep learning?

Deep learning (DL) is the use of multilevel neural networks to find patterns in huge bodies of data (e.g. millions of images, or speech-sounds). The system isn’t told what patterns to look for, but finds them for itself.

The theoretical ideas on which it is built are over twenty years old. But it has now sprung into prominence, because recent huge advances in computational power and data-storage have made it practically feasible.

It is called ‘deep’ learning because the pattern that is learnt is not a single-level item, but a structure represented on various hierarchical levels.

The lowest level of the network finds very basic patterns (e.g. light-contrasts in visual images), which are passed on to the next level. This finds patterns at a slightly higher level (e.g. blobs and lines). The subsequent levels continue (finding corners, simple shapes… and finally, visible objects). In effect, then, the original images are analysed in depth by the multilevel ANN.
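The hierarchy can be sketched very crudely in a few lines of Python with NumPy (an illustration only: the weights here are random rather than learnt, and the layer sizes are arbitrary assumptions). The point is simply that each level operates on the previous level’s output, so later levels work with progressively more abstract descriptions of the raw input.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_units):
    """One level of the network: a linear map plus a nonlinearity.
    In a trained deep-learning system these weights would be learnt, not random."""
    weights = rng.normal(size=(inputs.shape[0], n_units))
    return np.tanh(inputs @ weights)

image = rng.random(64)          # stand-in for raw pixel intensities
edges = layer(image, 32)        # lowest level: crude local contrasts
parts = layer(edges, 16)        # next level: combinations of those contrasts
objects = layer(parts, 4)       # top level: a compact, abstract description
print(image.shape, edges.shape, parts.shape, objects.shape)
```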

DL has had some widely reported results. For instance, when Google presented a set of 1,000 large computers with 10 million images culled randomly from YouTube videos, one unit (compare: one neurone) learnt, after three days, to respond to images of a cat’s face. It hadn’t been told to do this, and the images hadn’t been labelled (there was no label saying ‘This is a cat’).

That happened in 2012. Now, there is an annual competition (The Large Scale Visual Recognition Challenge) to increase the number of recognized images, and to decrease the constraints concerned—e.g. the number and occlusion of objects.

‘Successes’ are constantly reported in the media. However, this form of computer vision is not at all the same as human vision. For instance, the cat’s-face recognizer responded only to frontal images, not to profiles of cats’ faces. Moreover, if there had been lots of cats’ profiles on YouTube, so that a profile-detector eventually emerged, the DL system would not have known that the two sorts of image relate to one and the same thing: a cat’s face. In general, DL systems have no understanding (i.e. no functional grasp) of 3D space, and no knowledge of what a profile, or occlusion, actually is.

There are many other things that DL cannot do, including some (e.g. logical reasoning) that no-one has the remotest idea of how it could do. It follows that its potential for practical applications, although significant, is much less wide than some people imagine.

DL is the latest example in a long line of AI techniques that have been hyped by the press and cultural commentators, and sometimes by AI professionals, who should know better. (The outstanding example of DL is the program AlphaGo, which beat the human Go world-champion in March 2016.)


What is Artificial Life?

Artificial life (A-Life) is a branch of AI that models biological, or very basic psychological, phenomena. It studies (for example) reflex responses, insect navigation, evolution, and self-organization.

The type of robotics favoured in A-Life is ‘situated’ robotics. Here, the robot responds automatically to particular environmental cues as it encounters them in a given situation. The inspiration is not deliberate human reasoning (as it was in the early AI robots), but the reflex activities of insects. For instance, the behaviour (and neuroanatomy) of cockroaches has been used to suggest ways of building six-legged robots that can clamber over obstacles (not just avoid them), remain stable on rough ground, and pick themselves up after falls.
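A tiny Python sketch gives the flavour of situated control: no world model and no plan, just prioritised reflexes triggered by whatever the sensors report at that moment. The sensor names and behaviours here are invented for illustration, not taken from any real robot.

```python
def reflex_controller(sensors):
    """Pick a behaviour purely from the current sensor readings."""
    if sensors.get("fallen"):
        return "right_itself"
    if sensors.get("obstacle_ahead"):
        return "lift_legs_and_clamber"
    if sensors.get("ground_uneven"):
        return "adjust_stance"
    return "walk_forward"

# The same controller produces different behaviour in different situations.
print(reflex_controller({"obstacle_ahead": True}))   # lift_legs_and_clamber
print(reflex_controller({"fallen": True}))           # right_itself
```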

Self-organization is the characteristic property of living things. Work in A-Life has hugely improved our understanding of this apparently paradoxical concept.

Did GOFAI fail?

GOFAI, or Good Old-Fashioned AI (also called symbolic, classical, or traditional AI), pioneered fundamental ideas that are still crucial in state-of-the-art AI. These include heuristics, planning, default reasoning, knowledge representation, and blackboard architectures.

Today’s AI planners, for example (widely used in manufacturing, retailing, and the military), are much more complex, and significantly less limited, than the GOFAI versions. But they are based on the same general ideas and techniques.

The USA’s Department of Defense, which paid for the majority of AI research until very recently, has said that the money saved (by AI planning) on battlefield logistics in the first Iraq war outweighed all of its previous investment in AI.

Some modern planners have tens of thousands of lines of code, defining hierarchical search-spaces on numerous levels. They don’t assume that all the sub-goals can be worked on independently. That is, they realize that the result of one goal-directed activity may be undone by another, and can do extra processing to combine the sub-plans if necessary. Nor do they assume (as the early planners did) that the environment is fully observable, deterministic, finite, and static. The system can monitor the changing situation during execution, and make changes in the plan—and/or its own “beliefs” about the world—as appropriate.
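Two of those ideas, detecting that one sub-plan undoes another’s result and repairing the plan during execution, can be sketched in toy form in Python. The action format and the door-and-box domain are invented for illustration and are vastly simpler than any real planner.

```python
# A toy STRIPS-style executor. Each action maps to (preconditions, adds, deletes).
ACTIONS = {
    "pick_up_box":  (set(),           {"holding_box"}, {"hands_free"}),
    "open_door":    ({"hands_free"},  {"door_open"},   set()),
    "put_down_box": ({"holding_box"}, {"hands_free"},  {"holding_box"}),
}

def execute(plan, state):
    for i, name in enumerate(plan):
        pre, adds, deletes = ACTIONS[name]
        if not pre <= state:
            # Monitoring: a precondition no longer holds (an earlier action
            # undid it), so do extra processing: patch in an action that
            # restores it, then carry on with the rest of the plan.
            for fix, (fix_pre, fix_adds, _) in ACTIONS.items():
                if fix_pre <= state and pre <= (state | fix_adds):
                    return execute([fix] + plan[i:], state)
            raise RuntimeError(f"cannot repair plan at {name!r}")
        state = (state | adds) - deletes
    return state

# 'pick_up_box' deletes 'hands_free', which 'open_door' needs: the two
# sub-plans interact, and the executor inserts 'put_down_box' to fix it.
print(execute(["pick_up_box", "open_door"], {"hands_free"}))
```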

GOFAI techniques have been supplemented by other types of AI, such as artificial neural networks and evolutionary programming. So symbolic AI isn’t the only game in town (actually, it never was, not even in the 1950s).

But to say that it has failed is a mistake.

The only sense in which it has, truly, failed is that the pioneers’ dream of building a general artificial intelligence has not been achieved—and, despite current fears about “the Singularity”, is not yet in sight.

You can also read part one of Maggie Boden’s Q&A.
