Thursday, 15 August 2019

A Possible Evolutionary Neural Net Model of Turing's Child Brain


In 1950 Turing wrote a paper, Computing Machinery and Intelligence, and suggested that one approach to building an intelligent system that could play the imitation game might be to start by simulating a child's brain:
"Presumably the child-brain is something like a note-book as one buys it from the stationers. Rather little mechanism, and lots of blank sheets. … Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed."
While an enormous amount of research has been done relating to the brain and artificial intelligence since then, you will find nothing in the published literature to suggest that Turing's hope was justified.

This paper looks at the archives of an early and very unconventional computer language, CODIL, which (like dozens of other experimental languages at the time) never became commercially viable. CODIL was designed to help a human user and a computer clerk work together symbiotically on complex problems where uncertainties made the conventional pre-defined algorithm approach impractical.

But the underlying mechanism supporting CODIL is very simple (like Turing's child brain), and could be a model of how information has passed from parents to their children for thousands of generations, making a very deep learning network.

The assessment shows that CODIL could be considered a language for exchanging information between two neural networks: one the brain of the human user, the other in the computer. This was achieved by handling information in a way that resembles human short-term memory. In addition, because the human provides the information in "network form", the computer does not need to use time-consuming trial-and-error learning, avoiding the learning problems of much A.I. research.

Because the work was abandoned in 1988, many of the crucial questions, such as "Could a CODIL-like model support natural language?", were not examined before the project closed. As a result this paper looks at ways in which the CODIL research relates to the evolution of human intelligence.

I would be very grateful for any comments you can make, with citations to any similar research I may have missed. In particular I would be interested to know whether you think the research should be restarted.

The Black Hole In Brain Research

The following post was first published in 2011 and is relevant to my latest note A Possible Evolutionary Neural Net Model of Turing's Simple Child Brain which attempts to bridge the black hole.

The whole issue of how the brain works, and the origins of human intelligence, is a matter of considerable interest, and most universities have several very different departments whose staff research some aspect of the area. In hospitals and medical research departments people study the effects of injuries and illnesses that affect the brain. Modern equipment can identify electrical signals and locate their sources. Computer science schools may have departments of artificial intelligence, while there is interest among psychologists, educationalists, linguists, biologists, evolutionists and many others. In each speciality there is a mountain of publications, many of excellent scientific quality. It is quite clearly impossible for anyone to keep up with current research right across the board, and there is so much narrow specialisation that researchers in one area may find it very difficult to understand some of the other specialities.

So let's all stand back as far as we can and take an overall, and hopefully non-partisan, look.

While our knowledge of the brain has increased enormously in recent years, there is a problem. If we were working towards some kind of unified theory of the brain, the aim would be to link all these different disciplines together in a central model. What we find in the middle is more like a large black hole, with people from different disciplines nibbling round the edge. For example, there is no clear route which links the neurons to language learning or the ability to play chess well. Particularly when making comparisons with animals, one is even uncertain how the term "intelligence" should be defined.

An analogy might help to identify the problem. An alien life form with a very different technology visits this planet and takes away a number of working personal computers, each actively running a different application. The aim is to investigate the nature of the intelligence inside the boxes. They make enormous progress. They "kill" one or two, find the circuit boards and the various components, and study these in great detail. They design monitors which tell them which components are electrically active, and examine each application in great detail. For instance they discover that chess is a game, and they use their own technology to play the game and win.

Despite all this research they still do not understand the key to these boxes' apparent intelligence. What they are missing is an understanding of the instruction set of the central processor, the way it interacts with the memory, and the way this allows information for each application to be mapped onto the memory.

I am suggesting that the “black hole” in brain research is equivalent to the one described in the analogy. We know that the brain's central processors must be neurons, working singly or in groups, and that memory must be held somewhere in the network of nerves. The brain somehow uses this arrangement to support a wide variety of tasks (often different in different people) as varied as supporting one or more spoken languages, playing chess, flying an aeroplane, earning a living or arguing with 100% conviction for (or against) the existence of a god.

The target of these Brain Storms is to explore the options for such a "brain central processor", and one option that will be examined is an unconventional computer language called CODIL, which was designed to be human-friendly and which can demonstrably support a wide range of applications. Its features suggest one possible model, but there may be others, and your suggestions are very welcome.

The next few Brain Storms posts will look at the evolution of Mankind and what it tells us about the evolution of man's self-named "intelligence" and the possible relation between human and animal "intelligence". I will then describe a provisional target model for the Storm, and start looking at how far the CODIL model succeeds, and where it fails, in meeting the target. Hopefully by this stage other people will have joined in with other ideas, so that together we can define a possible predictive model for more detailed examination.

Sunday, 28 October 2018

Does Technology affect the size of our brains?

On the Futurelearn course Introduction to Psychology a distinction was made between the genetic development of the brain (how big it is) and its cultural development, where the culture is always changing generation by generation.

DB asked: "I am guessing that the evolution of technology may also relate to our brains' size?"

If you think about it, modern technology means that we don't have to use our brains as intensively as in the past. I was brought up "under the counter" in a newsagent and tobacconist's shop and as a child enjoyed serving customers, and all sales were done in your head in old £sd money, with no bar codes or computer tills to help. In fact the work of the average shop assistant has been significantly deskilled in my lifetime.
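To illustrate why totalling a sale was mentally demanding: pre-decimal British money had 12 pence (d) to a shilling (s) and 20 shillings to a pound, so every addition was mixed-radix arithmetic. The small sketch below is invented for this example, not a historical till program:

```python
# Hedged illustration: add two pre-decimal (pounds, shillings, pence)
# amounts, carrying 12 pence into a shilling and 20 shillings into a pound.

def add_lsd(a, b):
    """Add two (pounds, shillings, pence) amounts with carries."""
    pounds, shillings, pence = a[0] + b[0], a[1] + b[1], a[2] + b[2]
    shillings += pence // 12    # carry pence into shillings
    pence %= 12
    pounds += shillings // 20   # carry shillings into pounds
    shillings %= 20
    return (pounds, shillings, pence)

# e.g. £1 3s 9d plus 19s 6d comes to £2 3s 3d
print(add_lsd((1, 3, 9), (0, 19, 6)))
```

A shop assistant did exactly these carries in their head, item after item, while also working out change.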

Saturday, 27 October 2018

Summary of "An Evolutionary Model of Human Intelligence"

An Evolutionary Model of Human Intelligence
By Chris Reynolds
Draft Summary (full paper to follow)

While there is a vast amount of published scientific research about the structure of the human brain, how it evolved, and what it can do, there is a significant gap in our knowledge about it. Basically we evolved in a complex environment and there is no adequate mathematical model of the “complex biological computer” in our heads. What is missing is an explanation of how the activity of single neurons evolved to support the intellectual performance of the human species.  

This paper lays the foundations for a predictive evolutionary model which suggests why there was a significant spurt in human intelligence, compared with animals, and provides an explanation for human brain information processing strengths, and also its limitations, such as confirmation bias.

The proposed model is based on very general decision-making nodes which are mapped onto an infinite recursive neural network. This network can be considered the biological equivalent of the infinite array of numbers which is the logical basis for the well-known deterministic stored program computer model. Within the model the decision-making nodes exchange messages, and in theory all nodes and messages can be broken down into collections of simpler nodes and messages. Individual nodes may be anything from a single neuron, via complete brains, to a human committee, or even man-made tools such as computers. Because the network is infinite, it is capable of representing every neuron of every animal and human brain that has ever existed, and the messages exchanged between them. Evolution from the simplest animal brains into the far more powerful human brain represents a pathway through the logically simple (but infinitely extensive) network of nodes over a period of about a billion years.
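Purely as an illustration of the node-and-message idea, the sketch below shows nodes that each apply a decision rule to an incoming message and forward the result. The class, names and toy rules are invented for this example and are not part of the model itself:

```python
# Minimal illustrative sketch (all names invented): each node receives a
# message, applies its decision rule, and forwards the result downstream.

class Node:
    def __init__(self, name, decide):
        self.name = name
        self.decide = decide      # rule mapping incoming message -> output
        self.successors = []

    def receive(self, message, log):
        out = self.decide(message)
        log.append((self.name, message, out))
        for nxt in self.successors:
            nxt.receive(out, log)

# A node may stand for anything from one neuron to a whole brain; here two
# toy nodes just threshold and then double a numeric signal.
sensor = Node("sensor", lambda m: 1 if m > 0.5 else 0)
amplifier = Node("amplifier", lambda m: m * 2)
sensor.successors.append(amplifier)

trace = []
sensor.receive(0.9, trace)
print(trace)   # each entry: (node name, message in, message out)
```

The point of the sketch is only that the same receive-decide-forward scheme works whatever a "node" physically is, which is what lets the model span single neurons, whole brains and committees.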

The critical feature of the model, which makes it a complex model, is that virtually all the information needed to construct the deterministic network has been irretrievably lost or is otherwise inaccessible. For this reason the model allows nodes to morph between being deterministic or complex depending on context, and what is known of their history.

Not all aspects of brain evolution are covered. The model describes the way information stored in the network of logically identical nodes is used to make decisions. It is not directly concerned with the physical form of the nodes. However it is often useful to consider, for example, how the brain is physically constructed, and that neurons occur in brains which have a limited lifetime. Because significant research has been published on how neural networks can learn to recognise patterns, the paper does not consider alternative algorithms for such trial-and-error learning, but simply assumes that in evolutionary terms it is an expensive process. Instead the paper concentrates on the ways language is used by humans to construct a more effective, and significantly more efficient, decision-making network based on the same basic structure used in animal brains.

The paper is divided into the following sections:

Monday, 15 October 2018

Natural Language: The Evolution of Grammar

A reference in the blog Babel's Dawn alerted me to an interesting paper, "An anthropic principle in lieu of a 'Universal Grammar'", by Hubert Haider, Department of Linguistics & Centre of Neuroscience at the University of Salzburg.

Haider starts by asking three questions:

Among the many unanswered questions in grammar theory, the following figure prominently.
  1. First, what is it that enables children to successfully cope with the structural complexities of their mother tongue while professional grammarians tend to fail when modelling them?
  2. Second, what determines the narrow system corridor for human grammars?
  3. Third, are the grammars of human languages the offspring of a single proto-grammar instantiating a "Universal Grammar" (monogenic) or are the shared traits of human grammars the result of convergent changes in the grammars of human languages of diverse ancestry (polygenic)?
The last of these questions is mainly concerned with debunking the ideas of Chomsky, and I will pass over it in this blog post. More importantly, he argues strongly that natural languages and the grammars associated with them are due to normal evolutionary forces and are limited by the resources present in the human brain. He does not detail the nature of the resources provided by the brain, which is very relevant to my research into the evolution of human intelligence. For this reason I quote extracts from his text below, and then give a brief example of how my model demonstrates the way in which the neural network in the brain supports the resources his approach needs to support language.

The Many Human Species revealed by DNA studies

When the early evidence emerged that we carry genes inherited from the Neanderthals, I posted a blog, "More evidence for the braided human evolutionary tree", about the possible significance of hominins splitting into separate groups and then interbreeding at a later stage. A couple of months ago I blogged "Why Sex is an Important Factor in the Evolution of Human Intelligence", developing the idea further in terms of the transfer of culture as well as genes. The current New Scientist (13 October 2018) contains an article by Catherine Brahic entitled "The Ghost Within" which describes how, hidden within our genome, are traces of unknown species of early humans.
This article nicely summarises the recent DNA research which has revealed evidence for a number of "ghost" populations of the genus Homo whose genes have been inherited but which we have not yet identified in fossil remains. Parallel research into chimpanzees suggests a "ghost" population of bonobos, while research into the ancestry of the domestic dog suggests that a "ghost" population entered the New World (probably when humans crossed from Asia via Alaska) but was later replaced by dogs introduced by Europeans.

If one thinks about it, the fastest way for a slow-breeding species to evolve could well be to repeatedly split into different smallish groups, where each group adapts to a different environment and, in effect, "tests out" new gene variants. A degree of inbreeding will encourage both good and bad gene variants, but cross-breeding with other, long-separated groups will help to concentrate the better gene variants and eliminate the less satisfactory alternatives. In the case of humans this will not only have helped shape our physical bodies but will also have helped to merge the best features of different cultures.

Sunday, 14 October 2018

A Simple Example of how CODIL works

J M asked "How would these symbiotic - easy to understand computers - look like? how would they work?"

The computer hardware would look like a conventional computer, and in fact any commercial system would probably combine both technologies. The difference is in the way you would interact with it.

The original idea (see The SMBP Story) related to a very large commercial sales contract program in 1967.  The workings can best be explained in terms of a very simple network model.

Imagine a board with a number of light bulbs on it - each light bulb represents a commercial concept such as CUSTOMER or PETROL - and some bulbs could also have numbers  associated with them. If a customer contract referred to a discount it might be written:

CUSTOMER = SMITH; PRODUCT = PETROL; QUANTITY >= 1000; DISCOUNT = 10%

This would indicate that the bulbs for CUSTOMER and SMITH were linked, as were the bulbs for PRODUCT and PETROL, etc. In addition there would be links between the linked pairs.

Similarly there would be a pricing table with statements such as:

PRODUCT = PETROL; UNIT PRICE = 15

with the appropriate bulbs linked.

To price a delivery the nodes equivalent to the following would be switched on:

QUANTITY >= 1000

and this would automatically turn on the lights for

DISCOUNT = 10% and UNIT PRICE = 15

The important thing is that everything is done in terms of concepts the user can understand, and the system deduces what needs doing. The approach is application-independent and can easily show the user what it is doing and why. Effectively it is a transparent system, whereas conventional computers are black boxes.

In practice the experimental interpreters hold all the links in a table, and for each input they automatically look up the relevant links. Instead of a very large pricing program which will price any sale, you have a table-lookup which, in effect, works out a purpose-built mini-program for each transaction.
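As a rough sketch of this table-lookup idea, statements can be modelled as lists of (concept, operator, value) items: whenever all the condition items of a statement match the known facts, its consequent items are "switched on". The function names and sample data below are invented for illustration; the original CODIL interpreters were far more elaborate:

```python
# Illustrative sketch only: CODIL-style statements as (concept, op, value)
# items. Matching statements light up further "bulbs" until nothing changes.

def match_item(item, facts):
    """Return True if a single item is satisfied by the known facts."""
    concept, op, value = item
    if concept not in facts:
        return False
    known = facts[concept]
    if op == "=":
        return known == value
    if op == ">=":
        return known >= value
    return False

def evaluate(statements, facts):
    """Switch on every statement whose condition items all match,
    adding its consequent items to the known facts (lighting bulbs)."""
    lit = dict(facts)
    changed = True
    while changed:                       # keep propagating until stable
        changed = False
        for condition, consequent in statements:
            if all(match_item(i, lit) for i in condition):
                for concept, _, value in consequent:
                    if lit.get(concept) != value:
                        lit[concept] = value
                        changed = True
    return lit

# A contract clause and a pricing entry, echoing the example above.
statements = [
    ([("QUANTITY", ">=", 1000)], [("DISCOUNT", "=", 10)]),
    ([("PRODUCT", "=", "PETROL")], [("UNIT PRICE", "=", 15)]),
]

facts = {"CUSTOMER": "SMITH", "PRODUCT": "PETROL", "QUANTITY": 1200}
print(evaluate(statements, facts))
```

Note how no overall pricing program exists: each transaction simply switches on whichever statements its facts satisfy, which is the "purpose-built mini-program" effect described above.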

A detailed paper is being drafted for this blog and will appear shortly.