Sunday, 28 October 2018

Does Technology affect the size of our brains?

The FutureLearn course Introduction to Psychology made the distinction between the genetic development of the brain (how big it is) and its cultural development – where the culture changes generation by generation.

DB asked: "I am guessing that the evolution of technology may also relate to our brains' size?"

If you think about it, modern technology means that we don't have to use our brains as intensely as in the past. I was brought up "under the counter" in a newsagent and tobacconist's shop, and as a child I enjoyed serving customers – and all sales were worked out in your head in old £sd money, with no bar codes or computer tills to help. In fact the work of the average shop assistant has been significantly deskilled in my lifetime.
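For readers too young to have done it, the mental arithmetic involved can be sketched in a few lines of Python – in pre-decimal money 12 pence (d) made a shilling (s) and 20 shillings made a pound. The prices below are invented purely for illustration:

```python
# Pre-decimal (£sd) arithmetic: 12d = 1s, 20s = £1, so 240d = £1.

def to_pence(pounds, shillings, pence):
    """Convert a £sd amount to a total in pence."""
    return (pounds * 20 + shillings) * 12 + pence

def to_lsd(total_pence):
    """Convert a total in pence back to (pounds, shillings, pence)."""
    pounds, rest = divmod(total_pence, 240)
    shillings, pence = divmod(rest, 12)
    return pounds, shillings, pence

# A newspaper at 3d, an ounce of tobacco at 1s 4d, and matches at 2d:
total = to_pence(0, 0, 3) + to_pence(0, 1, 4) + to_pence(0, 0, 2)
print(to_lsd(total))  # (0, 1, 9) - one shilling and ninepence
```

The shop assistant did the equivalent of both functions, plus totalling and change-giving, entirely in their head.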

Saturday, 27 October 2018

Summary of "An Evolutionary Model of Human Intelligence"

An Evolutionary Model of Human Intelligence
By Chris Reynolds
Draft Summary (full paper to follow)

While there is a vast amount of published scientific research about the structure of the human brain, how it evolved, and what it can do, there is a significant gap in our knowledge about it. Basically we evolved in a complex environment and there is no adequate mathematical model of the “complex biological computer” in our heads. What is missing is an explanation of how the activity of single neurons evolved to support the intellectual performance of the human species.  

This paper lays the foundations for a predictive evolutionary model which suggests why there was a significant spurt in human intelligence, compared with animals, and provides an explanation for human brain information processing strengths, and also its limitations, such as confirmation bias.

The proposed model is based on very general decision-making nodes which are mapped onto an infinite recursive neural network. This network can be considered the biological equivalent of the infinite array of numbered locations which is the logical basis for the well-known deterministic stored program computer model. Within the model the decision-making nodes exchange messages, and in theory all nodes and messages can be broken down into collections of simpler nodes and messages. Individual nodes may be anything from a single neuron, via complete brains, to a human committee, or even man-made tools such as computers. Because the network is infinite, it is capable of representing every neuron of every animal and human brain that has ever existed – and the messages exchanged between them. Evolution from the simplest animal brains into the far more powerful human brain represents a pathway through the logically simple (but infinitely extensive) network of nodes over a period of about a billion years.
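As a rough caricature only (my own sketch, not part of the formal model), the key recursive idea – that every decision-making node can be decomposed into simpler nodes, all exchanging messages through the same interface at every scale – might be written like this:

```python
# A node is either a leaf (a trivial decision-maker) or a composite built
# from sub-nodes, which could themselves be composite, and so on recursively.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # the simpler nodes this one decomposes into

    def decide(self, message):
        # A leaf makes a trivial decision; a composite delegates the message
        # to its sub-nodes and combines their answers. The interface is the
        # same whether the node stands for a neuron, a brain, or a committee.
        if not self.children:
            return message in self.name  # stand-in for a real decision rule
        return any(child.decide(message) for child in self.children)

# A "brain" node built from two simpler nodes:
brain = Node("animal-recogniser", [Node("recognise-cat"), Node("recognise-dog")])
print(brain.decide("cat"))  # True: one sub-node handles the message
```

The names and the trivial decision rule are invented for illustration; the point is only that nodes and sub-nodes share one interface, so the same structure describes the network at any level of decomposition.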

The critical feature of the model, which makes it a complex model, is that virtually all the information needed to construct the deterministic network has been irretrievably lost or is otherwise inaccessible. For this reason the model allows nodes to morph between being deterministic or complex depending on context, and what is known of their history.

Not all aspects of brain evolution are covered. The model describes the way information stored in the network of logically identical nodes is used to make decisions. It is not directly concerned with the physical form of the nodes. However it is often useful to remember, for example, how the brain is physically constructed, and that neurons occur in brains which have a limited lifetime. Because significant research has been published on how neural networks can learn to recognise patterns, the paper does not consider alternative algorithms for such trial and error learning, but simply assumes that in evolutionary terms it is an expensive process. Instead the paper concentrates on the ways language is used by humans to construct a more effective, and significantly more efficient, decision-making network based on the same basic structure used in animal brains.

The paper is divided into the following sections:

Monday, 15 October 2018

Natural Language: The Evolution of Grammar

A reference in the blog Babel's Dawn alerted me to an interesting paper, "An anthropic principle in lieu of a 'Universal Grammar'", by Hubert Haider, Department of Linguistics & Centre of Neuroscience at the University of Salzburg.

Haider starts by asking three questions:

Among the many unanswered questions in grammar theory, the following figure prominently.
  1. First, what is it that enables children to successfully cope with the structural complexities of their mother tongue while professional grammarians tend to fail when modelling them?
  2. Second, what determines the narrow system corridor for human grammars? 
  3. Third, are the grammars of human languages the offspring of a single proto-grammar instantiating a "Universal Grammar" (monogenic) or are the shared traits of human grammars the result of convergent changes in the grammars of human languages of diverse ancestry (polygenic)? 
The last of these questions is mainly concerned with debunking Chomsky's ideas, and I will pass over it in this blog post. More importantly, he argues strongly that natural languages and their associated grammars are the result of normal evolutionary forces and are limited by the resources present in the human brain. He does not detail the nature of the resources provided by the brain – which is very relevant to my research into the evolution of human intelligence. For this reason I quote extracts from his text below, and then give a brief example of how my model demonstrates the way the neural network in the brain supports the resources his approach needs to support language.

The Many Human Species revealed by DNA studies

When the early evidence emerged that we carry genes inherited from the Neanderthals, I posted a blog "More evidence for the braided human evolutionary tree" about the possible significance of hominins splitting into separate groups and then interbreeding at a later stage. A couple of months ago I blogged "Why Sex is an Important Factor in the Evolution of Human Intelligence", developing the idea further in terms of the transfer of culture as well as genes. The current New Scientist (13 October 2018) contains an article by Catherine Brahic entitled "The Ghost Within" which describes how traces of unknown species of early humans lie hidden within our genome.
This article nicely summarises the recent DNA research which has revealed evidence for a number of "ghost" populations of the genus Homo whose genes we have inherited but which have not yet been identified in fossil remains. Parallel research into chimpanzees suggests a "ghost" population of bonobos, while research into the ancestry of the domestic dog suggests that a "ghost" population entered the New World (probably when humans crossed from Asia via Alaska) but was later replaced by dogs introduced by Europeans.

If one thinks about it, the fastest way for a slow-breeding species to evolve could well be to repeatedly split into different smallish groups, where each group adapts to a different environment and, in effect, "tests out" new gene variants. A degree of inbreeding will encourage both good and bad gene variants – but cross-breeding with other, long-separated groups will help to concentrate the better gene variants and eliminate the less satisfactory alternatives. In the case of humans this will not only have helped shape our physical bodies but will also have helped to merge the best features of different cultures.

Sunday, 14 October 2018

A Simple Example of how CODIL works

J M asked "How would these symbiotic - easy to understand computers - look like? how would they work?"

The computer hardware would look like a conventional computer, and in fact any commercial system would probably combine both technologies. The difference is in the way you would interact with it.

The original idea (see The SMBP Story) related to a very large commercial sales contract program in 1967.  The workings can best be explained in terms of a very simple network model.

Imagine a board with a number of light bulbs on it - each light bulb represents a commercial concept such as CUSTOMER or PETROL - and some bulbs could also have numbers  associated with them. If a customer contract referred to a discount it might be written:

CUSTOMER = SMITH; PRODUCT = PETROL; QUANTITY >= 1000; DISCOUNT = 10%

This would indicate that the bulbs for CUSTOMER and SMITH were linked, as were the bulbs for PRODUCT and PETROL, etc. In addition there would be links between the linked pairs.

Similarly there would be a pricing table with statements such as:

PRODUCT = PETROL; UNIT PRICE = 15

with the appropriate bulbs linked.

To price a delivery, the nodes equivalent to the following would be switched on:

CUSTOMER = SMITH; PRODUCT = PETROL; QUANTITY >= 1000

and this would automatically turn on the lights for

DISCOUNT = 10% and UNIT PRICE = 15

The important thing is that everything is done in terms of concepts the user can understand, and the system deduces what needs doing. The approach is application independent and can easily show the user what it is doing and why. Effectively it is a transparent system, whereas conventional computers are black boxes.

In practice the experimental interpreters hold all the links in a table, and for each input they automatically look up the relevant links. Instead of a very large pricing program which will price any sale, you have a table lookup which – in effect – works out a purpose-built mini-program for each transaction.
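The table-lookup idea can be sketched in a few lines of modern Python. The rule table and field names below are invented for illustration, and the real interpreters held their links in internal tables rather than dictionaries, but the flavour is the same:

```python
# Each rule is (conditions, conclusions): if every condition holds for the
# current transaction, the conclusions are asserted - the linked "bulbs"
# come on automatically.

rules = [
    ([("PRODUCT", "=", "PETROL"), ("QUANTITY", ">=", 1000)], {"DISCOUNT %": 10}),
    ([("PRODUCT", "=", "PETROL")], {"UNIT PRICE": 15}),
]

def holds(facts, condition):
    name, op, value = condition
    if op == ">=":
        return facts.get(name, 0) >= value
    return facts.get(name) == value

def price(transaction):
    facts = dict(transaction)
    for conditions, conclusions in rules:
        if all(holds(facts, c) for c in conditions):
            facts.update(conclusions)  # switch on the linked bulbs
    return facts

sale = price({"CUSTOMER": "SMITH", "PRODUCT": "PETROL", "QUANTITY": 1200})
print(sale["DISCOUNT %"], sale["UNIT PRICE"])  # 10 15
```

Note that no pricing "program" as such exists: the rules that fire for this particular transaction are, in effect, the purpose-built mini-program.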

A detailed paper is being drafted for this blog and will appear shortly.

Will robots outsmart us? by the late Stephen Hawking

There is an interesting article, "Will robots outsmart us?", in today's Sunday Times Magazine. While I don't accept all Stephen's predictions, I was most interested to read:
When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours.
Later he says:
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. 
Of course we know what happened last time a super-intelligence came into existence. About half a million years ago the Earth was populated by a great variety of animals of comparatively low intelligence. All the higher animals had brains that worked in roughly the same way, and how much they could learn was limited because everything they learnt was lost when they died. Then one species, which we call Homo sapiens, discovered a way to recursively increase its own intelligence. It was already good at making tools, but for several million years the cost of trial and error learning had limited what it could do. But then it invented a tool to boost intelligence, which we call language. Language not only made it possible to make better tools, but also made it possible to recursively build a better language generation by generation. So some 5000 generations later the Earth is home to a super-intelligent species ...

And are the goals of this species aligned with the goals of the millions of other species? Of course not. Billions of animals are kept as slaves to be killed and eaten, while the homes of countless more have been, or are being, destroyed.

If we invent a super-intelligent AI system, why should it treat us with more respect than we have shown for our animal relatives?

A new book, "Brief Answers to the Big Questions" by Stephen Hawking, is published by John Murray on Tuesday.

Thursday, 4 October 2018

Why does CODIL differ from other computer languages?

This query came up in a comment on a FutureLearn course, which read: "Christopher, to be honest I don't think the world needs any more computer languages or most of the ones it already has for that matter"

The vast majority of computer languages, such as COBOL, Fortran, C, Java, etc., are designed to process information on a conventional stored program computer, where the memory consists of numbered boxes which contain numbers. The numbers in the boxes may represent coded data (often in several different formats), numeric addresses, or coded instructions. This approach was originally designed to handle a class of well-defined mathematical tasks which humans find difficult to do quickly and accurately, so it is not surprising that modern computers are incomprehensible black boxes when viewed by the average human being. They were deliberately designed to do efficiently the things which people are bad at doing.
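The point can be shown with a toy fragment (my own example, not from any real machine): the same numbered boxes hold data, addresses and instruction codes, and only the program's conventions distinguish one from another – a human reading the raw memory sees nothing but numbers.

```python
# A toy "numbered boxes" memory: every box holds a bare number, and whether
# that number is data, an address, or an instruction depends entirely on
# how the program chooses to interpret it.

memory = [0] * 16
memory[0] = 5    # data: a value
memory[1] = 7    # data: another value
memory[2] = 0    # an address pointing at box 0
memory[3] = 99   # an arbitrary code this toy machine treats as "ADD"

# Decode and execute: nothing in the memory itself marks box 3 as an
# instruction or box 2 as an address - that knowledge lives in the program.
if memory[3] == 99:
    result = memory[memory[2]] + memory[1]  # indirect fetch, then add
print(result)  # 12
```

This opacity is exactly what makes the conventional machine a black box to its end users.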

CODIL uses a completely different memory structure which is based on a dynamic network which attempts to mimic the working of the brain's neural network. The aim is to produce a transparent information processor (rather than a black box) which is easy for the average human to understand and use for a range of potentially complex information processing tasks. It is particularly aimed at complex tasks where a dynamically flexible human interface is advantageous - and so fills the area where conventional computers are weakest.

In CODIL the array of numbered boxes which make up a conventional computer memory is replaced by a large number of nodes, where each node consists of an on/off switch and a label (which is for the benefit of the human user). The human user defines the names of the nodes and the wires linking the nodes.
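A minimal sketch (my own, and greatly simplified) of such a network of labelled on/off switches joined by user-defined wires might look like this – the labels are taken from the pricing example in the earlier post:

```python
# Each node is a human-readable label plus an on/off state; user-defined
# wires let an "on" node switch on every node it is linked to.

class Net:
    def __init__(self):
        self.on = set()    # labels of nodes currently switched on
        self.wires = {}    # label -> set of labels it is wired to

    def link(self, a, b):
        self.wires.setdefault(a, set()).add(b)

    def switch_on(self, label):
        if label in self.on:
            return
        self.on.add(label)
        for linked in self.wires.get(label, ()):  # propagate along the wires
            self.switch_on(linked)

net = Net()
net.link("QUANTITY >= 1000", "DISCOUNT = 10%")
net.link("PRODUCT = PETROL", "UNIT PRICE = 15")
net.switch_on("QUANTITY >= 1000")
print(sorted(net.on))  # ['DISCOUNT = 10%', 'QUANTITY >= 1000']
```

Unlike the numbered-boxes memory, everything here is expressed in the user's own labels, so the state of the system can be read off directly – which is the transparency CODIL aims at.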