Sunday, 28 October 2018

Does Technology affect the size of our brains?

On the FutureLearn course Introduction to Psychology a distinction was made between the genetic development of the brain (how big it is) and its cultural development, where the culture changes generation by generation.

DB asked: "I am guessing that the evolution of technology may also relate to our brains' size?"

If you think about it, modern technology means that we don't have to use our brains as intensely as in the past. I was brought up "under the counter" in a newsagent and tobacconist's shop and as a child enjoyed serving customers – and all sales were done in your head in old £sd money, with no bar codes or computer tills to help. In fact the work of the average shop assistant has been significantly deskilled in my lifetime.

Saturday, 27 October 2018

Summary of "An Evolutionary Model of Human Intelligence"

An Evolutionary Model of Human Intelligence
By Chris Reynolds
Draft Summary (full paper to follow)

While there is a vast amount of published scientific research about the structure of the human brain, how it evolved, and what it can do, there is a significant gap in our knowledge. We evolved in a complex environment, and there is no adequate mathematical model of the "complex biological computer" in our heads. What is missing is an explanation of how the activity of single neurons evolved to support the intellectual performance of the human species.

This paper lays the foundations for a predictive evolutionary model which suggests why there was a significant spurt in human intelligence, compared with animals, and provides an explanation for human brain information processing strengths, and also its limitations, such as confirmation bias.

The proposed model is based on very general decision making nodes which are mapped onto an infinite recursive neural network. This network can be considered the biological equivalent of the infinite array of numbered locations which is the logical basis for the well known deterministic stored program computer model. Within the model the decision making nodes exchange messages, and in theory all nodes and messages can be broken down into collections of simpler nodes and messages. Individual nodes may be anything from a single neuron, via complete brains, to a human committee, or even man-made tools such as computers. Because the network is infinite, it is capable of representing every neuron of every animal and human brain that has ever existed – and the messages exchanged between them. Evolution from the simplest animal brains into the far more powerful human brain represents a pathway through the logically simple (but infinitely extensive) network of nodes over a period of about a billion years.

The critical feature of the model, which makes it a complex model, is that virtually all the information needed to construct the deterministic network has been irretrievably lost or is otherwise inaccessible. For this reason the model allows nodes to morph between being deterministic or complex depending on context, and what is known of their history.

Not all aspects of brain evolution are covered. The model describes the way information stored in the network of logically identical nodes is used to make decisions. It is not directly concerned with the physical form of the nodes.  However it is often useful to consider, for example, how the brain is physically constructed and that neurons occur in brains which have a limited life time. Because significant research has been published on how neural networks can learn to recognise patterns the paper does not consider alternative algorithms for such trial and error learning, but simply assumes that in evolutionary terms it is an expensive process. Instead the paper concentrates on the ways language is used by humans to construct a more effective, and significantly more efficient,  decision making network based on the same basic structure used in animal brains. 

The paper is divided into the following sections:

Monday, 15 October 2018

Natural Language: The Evolution of Grammar

A reference in the blog Babel's Dawn alerted me to an interesting paper, "An anthropic principle in lieu of a 'Universal Grammar'", by Hubert Haider of the Department of Linguistics & Centre of Neuroscience at the University of Salzburg.

Haider starts by asking three questions:

Among the many unanswered questions in grammar theory, the following figure prominently.
  1. First, what is it that enables children to successfully cope with the structural complexities of their mother tongue while professional grammarians tend to fail when modelling them? 
  2. Second, what determines the narrow system corridor for human grammars? 
  3. Third, are the grammars of human languages the offspring of a single proto-grammar instantiating a "Universal Grammar" (monogenic) or are the shared traits of human grammars the result of convergent changes in the grammars of human languages of diverse ancestry (polygenic)? 
The last of these questions is mainly concerned with debunking the ideas of Chomsky and I will pass over it in this blog post. More importantly, he argues strongly that natural languages and the grammars associated with them are due to normal evolutionary forces and are limited by the resources present in the human brain. He does not detail the nature of the resources provided by the brain - which is very relevant to my research into the evolution of human intelligence. For this reason I quote extracts from his text below, and then give a brief example of how my model demonstrates the way in which the neural network in the brain supports the resources his approach needs to support language.

The Many Human Species revealed by DNA studies

When the early evidence emerged that we carry genes inherited from the Neanderthals, I posted a blog, "More evidence for the braided human evolutionary tree", about the possible significance of hominins splitting into separate groups and then interbreeding at a later stage. A couple of months ago I blogged "Why Sex is an Important Factor in the Evolution of Human Intelligence", developing the idea further in terms of the transfer of culture as well as genes. The current New Scientist (13 October 2018) contains an article by Catherine Brahic entitled "The Ghost Within" which describes how, hidden within our genome, are traces of unknown species of early humans.
This article nicely summarises the recent DNA research which has revealed evidence for a number of "ghost" populations of the genus Homo whose genes we have inherited but which we have not yet identified in fossil remains. Parallel research into chimpanzees suggests a "ghost" population of bonobos, while research into the ancestry of the domestic dog suggests that a "ghost" population entered the New World (probably when humans crossed from Asia via Alaska) but was later replaced by dogs introduced by Europeans.

If one thinks about it, the fastest way for a slow-breeding species to evolve could well be to repeatedly split into different smallish groups, where each group adapts to a different environment and, in effect, "tests out" new gene variants. A degree of inbreeding will bring out both good and bad gene variants - but cross-breeding with other, long-separated groups will help to concentrate the better gene variations and eliminate the less satisfactory alternatives. In the case of humans this will not only have helped shape our physical bodies but will also have helped to merge the best features of different cultures.

Sunday, 14 October 2018

A Simple Example of how CODIL works

J M asked "How would these symbiotic - easy to understand computers - look like? how would they work?"

The computer hardware would look like a conventional computer, and in fact any commercial system would probably combine both technologies. The difference is in the way you would interact with it.

The original idea (see The SMBP Story) related to a very large commercial sales contract program in 1967.  The workings can best be explained in terms of a very simple network model.

Imagine a board with a number of light bulbs on it - each light bulb represents a commercial concept such as CUSTOMER or PETROL - and some bulbs could also have numbers  associated with them. If a customer contract referred to a discount it might be written:

CUSTOMER = SMITH; PRODUCT = PETROL; QUANTITY >= 1000; DISCOUNT = 10%

This would indicate that the bulbs for CUSTOMER and SMITH were linked, as were the bulbs for PRODUCT and PETROL, etc. In addition there would be links between the linked pairs.

Similarly there would be a pricing table with statements such as:

PRODUCT = PETROL; UNIT PRICE = 15

with the appropriate bulbs linked.

To price a delivery, the nodes equivalent to the following would be switched on:

QUANTITY >= 1000

and this would automatically turn on the lights for

DISCOUNT = 10% and UNIT PRICE = 15

The important thing is that everything is done in terms of concepts the user can understand, and the system deduces what needs doing. The approach is application independent and can easily show the user what it is doing and why. Effectively it is a transparent system, where conventional computers are black boxes.

In practice the experimental interpreters hold all the links in a table and for each input they automatically look up the relevant links. Instead of a very large pricing program which will price any sale, you have a table lookup which - in effect - works out a purpose-built mini-program for each transaction.
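The table-lookup idea can be sketched in a few lines of Python. This is only an illustration of the principle, not the real CODIL interpreter: the statement format and the names (QUANTITY, DISCOUNT) follow the blog's example, while the function names and matching logic are my own assumptions.

```python
# Sketch of rule-table lookup: statements like "QUANTITY >= 1000; DISCOUNT = 10%"
# are held in a table, and facts about a transaction "switch on" the consequences.

def parse(statement):
    """Split 'A >= 1000; B = 10%' into a list of (name, op, value) items."""
    items = []
    for part in statement.split(";"):
        part = part.strip()
        for op in (">=", "<=", "="):      # check two-character operators first
            if op in part:
                name, value = part.split(op, 1)
                items.append((name.strip(), op, value.strip()))
                break
    return items

def matches(condition, facts):
    """Check one (name, op, value) condition against known numeric facts."""
    name, op, value = condition
    if name not in facts:
        return False
    known, wanted = float(facts[name]), float(value)
    return {"=": known == wanted, ">=": known >= wanted, "<=": known <= wanted}[op]

def price(rules, facts):
    """Add every consequence whose conditions are already 'lit' by the facts."""
    deduced = dict(facts)
    for rule in rules:
        *conditions, consequence = parse(rule)
        if all(matches(c, deduced) for c in conditions):
            name, _, value = consequence
            deduced[name] = value
    return deduced

rules = ["QUANTITY >= 1000; DISCOUNT = 10%"]
print(price(rules, {"QUANTITY": 1500}))   # the DISCOUNT light comes on
print(price(rules, {"QUANTITY": 500}))    # it stays off
```

A delivery of 1500 units picks up the 10% discount; a delivery of 500 does not, because the QUANTITY condition never lights up.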

A detailed paper is being drafted for this blog and will appear shortly

Will robots outsmart us? by the late Stephen Hawking

There is an interesting article, "Will robots outsmart us?", in today's Sunday Times Magazine. While I don't accept all Stephen's predictions I was most interested to read:
When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours.
Later he says:
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. 
Of course we know what happened last time a super-intelligence came into existence. About half a million years ago the Earth was populated by a great variety of animals of comparatively low intelligence. All the higher animals had brains that worked in roughly the same way, and how much they could learn was limited because everything they learnt was lost when they died. Then one species, which we call Homo sapiens, discovered a way to recursively increase its own intelligence. It was already good at making tools, but for several million years the cost of trial and error learning had limited what it could do. Then it invented a tool to boost intelligence, which we call language. Language not only made it possible to make better tools, but also made it possible to recursively build a better language generation by generation. So some 5000 generations later the Earth is home to a super-intelligent species ...

And are the goals of this species aligned with the goals of the millions of other species? Of course not. Billions of animals are kept as slaves to be killed and eaten, while the homes of countless more have been, or are being, destroyed.

If we invent a super-intelligent AI system, why should it treat us with more respect than we have shown for our animal relatives?

A new book, "Brief Answers to the Big Questions" by Stephen Hawking, is published by John Murray on Tuesday.

Thursday, 4 October 2018

Why does CODIL differ from other computer languages?

This query came up on a FutureLearn course, where a comment read: "Christopher, to be honest I don't think the world needs any more computer languages or most of the ones it already has for that matter"

The vast majority of computer languages, such as COBOL, Fortran, C, Java, etc. are designed to process information on a conventional stored program computer where the memory consists of numbered boxes which contain numbers. The numbers in the boxes may represent coded data (often in several different formats), numeric addresses, or coded instructions. This approach was originally designed to handle a class of well-defined mathematical tasks which humans find difficult to do quickly and accurately, so it is not surprising that modern computers are incomprehensible black boxes when viewed by the average human being. They were deliberately designed to do efficiently the things which people are bad at doing.

CODIL uses a completely different memory structure which is based on a dynamic network which attempts to mimic the working of the brain's neural network. The aim is to produce a transparent information processor (rather than a black box) which is easy for the average human to understand and use for a range of potentially complex information processing tasks. It is particularly aimed at complex tasks where a dynamically flexible human interface is advantageous - and so fills the area where conventional computers are weakest.

In CODIL the array of numbered boxes which make up a conventional computer memory is replaced by a large number of nodes, where each node consists of an on/off switch and a label (which is for the benefit of the human user). The human user defines the names of the nodes and the wires linking the nodes.
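The node-and-wire memory described above can be sketched as a toy data structure. This is a hypothetical illustration of the description in the paragraph, not CODIL's actual implementation; the class and function names are my own.

```python
# Toy sketch of the CODIL-style memory: each node is just an on/off switch
# plus a human-readable label, and user-defined wires link nodes together.

class Node:
    def __init__(self, label):
        self.label = label    # the label is for the benefit of the human user
        self.on = False       # the on/off switch
        self.wires = []       # user-defined links to other nodes

def link(a, b):
    """Wire two nodes together (links are symmetric)."""
    a.wires.append(b)
    b.wires.append(a)

def switch_on(node):
    """Turn a node on, and let the signal spread along its wires."""
    if not node.on:
        node.on = True
        for neighbour in node.wires:
            switch_on(neighbour)

customer = Node("CUSTOMER")
smith = Node("SMITH")
link(customer, smith)
switch_on(customer)
print([n.label for n in (customer, smith) if n.on])
```

Switching on CUSTOMER lights SMITH too, because the user wired them together; the contrast with a conventional memory is that the structure is entirely in terms of named concepts rather than numbered boxes.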

Sunday, 30 September 2018

How plans for a user-friendly computer were rubbished 50 years ago

In 1968 David Caminer and John Pinkerton (who were responsible for the world's first business computer, the LEO I, and who were directors of English Electric Computers) decided to fund research into a project to build inherently user-friendly computers; it was estimated that the market for such systems would be several hundred million pounds a year.

However, as a result of the government inspired merger to make the UK computer industry more competitive, ICL was created, and the project was closed down with no serious attempt to assess the successful research into CODIL which had already been carried out.

This was, of course, about 10 years before the first personal computers, and it is interesting to speculate what might have happened if the research had not been so rudely interrupted. Perhaps the UK-based idea would have been successful and the first personal computers would have been inherently friendly. This would have meant that there would have been no need for the hard to use MS-DOS operating system - and no one would have heard of Microsoft.

To get details of what happened read 
To see how the idea originated read

This is relevant to this blog because the rubbished 1968 proposal is the origin of the model of human working memory at the heart of the current research into human short term memory.

Monday, 24 September 2018

Can you model for the impact of language, writing, etc. ???

Your Questions Answered

JS asked: "Can you model for the impact of language, writing, print, radio, TV and the internet?"

I am basically interested in modelling the flow of information within brains and, using a language, between human brains. It is important to note that the model is not concerned with the physical form of the decision making nodes or the messages. An eye is a decision making node which receives photons and converts them into messages to neurons in the brain. In the same way books, TVs and computer systems can be considered an integral part of the overall network model.

One of billions of possible complex information-flow examples: the node "William Shakespeare" generated a message, "Macbeth", which was sent via a book to actor nodes living 400 years later – and the resulting performance ended up in your brain via a TV and your eye.

My model is (at present) only an initial qualitative model – but the model shows how improved information tools can have an effect on the knowledge stored in the brains of modern humans. It is an open question whether the model could be developed to measure impact by, for instance, saying what percentage of a given individual’s knowledge came from the use of the internet.

Saturday, 22 September 2018

How far has computing advanced in the last 50 years?

Computers have changed significantly over the last 50 years - but have some of the early problems been solved? The following brief excerpts come from a survey I prepared in 1968 about the difficulties that were being reported when trying to computerise complex tasks involving both computers and people:

J Nievergelt wrote:
“We are pushing against the limits of complexity that we know how to handle. ... However, we also run into problems that are computationally light but also so complex that we do not know how to design, document, and debug them. This kind of problem is relatively new. – this complexity barrier may well be the most important limitation that the computer field will be subject to in the immediate future.”

Wesley Davis wrote:
“The ultimate success or failure of computer systems at present is overdependent on the problem definition and system design stage. The desire to simplify systems leads to a conflict between the needs of people and the computer specialist's interest in containing the project within present methodology. ... for its [computer system] purpose is to make a company responsive, flexible, and aware of its commercial situation. It is unlikely to do this without these characteristics being implicit in its design.”

M A Jackson wrote:
“We should be positively looking for and developing programming methods that do allow inconsistency, redundancy, ambiguity and incompleteness; we should recognise that these seem to be vices only because the error-prone techniques of procedural programming make them so.”

Seymour Papert wrote:
“Machines can't think,” said he, “because stupid humans don't know how to teach them to think. We try and teach them as we think machines should be taught, and it doesn't work.”

Computers have changed enormously since 1968, and nearly all of us have, in our pockets or bags, a mobile phone which is thousands of times more powerful than the computer shown in the above picture. Computers now provide a very wide range of useful services. Despite all the advances, have they really solved the problems posed by complex human systems?
What do you think?
See a transcript of the original 1968 survey

The Evolutionary Model of Intelligence in 200 words!

Picture from HITXP blog
My “biological computer” model of the evolution of human intelligence involves information flowing between “decision making nodes” in an infinite recursive network. Every node can receive, process, and transmit messages, and may consist of clusters of simpler nodes. The simplest nodes represent individual neurons; larger nodes range from groups of interlinked neurons up to complete brains and individual animals, and extend to social groups of animals or humans exchanging information (including large organizations) and tools made by humans – such as the Internet. “Animal” nodes have a limited lifespan and (if the brain makes good decisions) can pass genetic information to new, initially ignorant, “animal” nodes.

The driving mechanism is based on CODIL, a computer language that mimics human working memory. It suggests how information can be used to make decisions within brains, and how it can be passed between brains so that it is not lost on death. It identifies limits on animal intelligence, predicts failings of the human mind, and identifies key tipping points in the development of tools and language, and brain size changes. Human intelligence results from recycling cultural information through the network over many thousands of generations.
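The recursive-node idea in the 200-word summary can be rendered as a toy structure: every node can receive and pass on messages, and a node may itself be a cluster of simpler nodes. All the names here are my own illustrative assumptions; the model described in the post is far more general.

```python
# Toy rendering of a recursive decision-node network: a node is either a
# simple unit or a cluster of sub-nodes, and messages flow down the clusters.

class DecisionNode:
    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []   # a node may be a cluster of simpler nodes

    def receive(self, message):
        """Process a message, delegating to sub-nodes if this is a cluster."""
        if self.parts:
            return [p.receive(message) for p in self.parts]
        return f"{self.name} handled {message!r}"

# The same class covers every level: neurons, a brain, a social group.
neuron_a = DecisionNode("neuron A")
neuron_b = DecisionNode("neuron B")
brain = DecisionNode("brain", parts=[neuron_a, neuron_b])
society = DecisionNode("social group", parts=[brain])
print(society.receive("signal"))
```

The point of the recursion is that the same node logic applies at every scale, which is exactly why the model can treat a neuron, a brain, and a committee as instances of one structure.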

Of course such a short text only scratches the surface, and further information on my research, and supporting details will be appearing on this blog, if it is not already here. 
If you can't find what you want to know why not 

Sunday, 16 September 2018

If only this skull could still talk ...

If only this skull
could still talk
I am always happy to answer questions about how my research relates to a particular research paper and in a discussion on the FutureLearn course "A Question of Time" Paul asked about "On the antiquity of language: the reinterpretation of Neandertal linguistic capacities and its consequences" by Dan Dediu and Stephen C. Levinson. This paper concludes that Neanderthals and Denisovans may have had very similar language capacities to us - they too had complex tool making technologies. Paul asked:
When did the conditions arise to allow more rapid evolution of information (technical, such as for tools but also hunting & food gathering & processing techniques, cultural info etc.) ? After all, we possess much the same genes as our ancestors did 60,000 years ago, but our world is significantly different due to evolution of information.
Dediu and Levinson's paper looks in detail at the relevant literature about the discovery of human fossils, evidence for tool making, etc., while my approach starts in a very different way, by looking at how a network of neurons might evolve into an intelligent brain. The immediately relevant parts of my model are as follows:

Sunday, 9 September 2018

Mental Health and the Brain Model

Because of my long term interest in mental health matters I recently decided to do a FutureLearn course on Psychology and Mental Health, and the reading material included a paper by Peter Kinderman entitled A Psychological Model of Mental Disorder. This made me think that I should be looking at my evolutionary model of human intelligence to see whether it might be able to model some mental health problems. The model already predicts some weaknesses in how the human mind works, in areas such as confirmation bias - so could it suggest possible causes for mental illness?

The evolutionary model is based on the idea that at the neuron level our brain works in exactly the same way as any other mammal but we have developed a tool, which we call language, to bridge over some of the weaknesses due to the way the animal brain has evolved. Some of these inherent weaknesses could underlie some mental health problems because the genetically evolved mechanism  is being asked to do a lot of extra work.

The following can be considered draft discussion notes looking at some of the areas where my model suggests there are issues which could adversely affect mental health.

Thursday, 6 September 2018

What is Intelligence?

I was exploring the blog Beautiful Minds and came across a post, "IQ and Society - The deeply interconnected web of IQ and societal outcomes", by Scott Barry Kaufman. This started with a definition of "intelligence" which was published in the Wall Street Journal in 1994. I realised that I had started this blog without defining what I meant by intelligence - and I feel that the following definition serves the purpose:
Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do.
Now I am not claiming that my attempt to model human intelligence demonstrates all the above properties - but I see the definition as a goal, and my model as being a step in the right direction.

The body of Scott's blog was doubly interesting to me because he discusses the link between IQ and social and educational factors, while I have just been involved with a FutureLearn course, "Psychology and Mental Health", which explores and extends the "Nature and Nurture" debate. In many ways it looks as if IQ and mental illness are both correlated with one's life experiences, and this fits in with my approach to intelligence - which suggests that it is mainly due to the exchange of culture between humans, refined over many generations. On this basis someone with a high IQ will tend to have absorbed better quality cultural information when younger, while poor mental health will often be due to poor or disrupted cultural interactions and events.

Wednesday, 5 September 2018

The Cultural Origins of Language

Scientific American has just circulated a very readable article entitled "The Cultural Origins of Language" which highlights four main points. The aim of this post is to relate each of these points to the evolutionary model I am developing.

Human communication is far more structured and complex than the gestures and sounds of other animals.

There is no disagreement on the points made here. The article points out that "Noam Chomsky, the extraordinarily influential linguist at the Massachusetts Institute of Technology, was, for decades, rather famously disinterested in language evolution, and his attitude had a chilling effect on the field." This is very relevant to my research. In the mid 1960s I started to look at how CODIL might support natural language - and took one look at the work of Chomsky and decided that CODIL and Chomsky's theories were incompatible and abandoned research in that direction. It is interesting that at about the same time I considered whether CODIL had anything to do with neural networks - and was discouraged by the views of Minsky. 

Scientists have, however, failed to find distinctive physiological, neurological or genetic traits that could explain the uniqueness of human language.

Language appears instead to arise from a platform of abilities, some of which are shared with other animals.

The article says: "The problem is that after looking for a considerable amount of time and with a wide range of methodological approaches, we cannot seem to find anything unique in ourselves—either in the human genome or in the human brain—that explains language."  and later "Findings from genetics, cognitive science and brain sciences are now converging in a different place. It looks like language is not a brilliant adaptation. Nor is it encoded in the human genome or the inevitable output of our superior human brains. Instead language grows out of a platform of abilities, some of which are very ancient and shared with other animals and only some of which are more modern."

This is completely compatible with my work, in that my model for the evolution of human intelligence starts with the key assumption that at the biological level animal and human brains work in exactly the same way. The fact that my model predicts a number of weaknesses in our mental armoury which are compatible with an evolutionary approach suggests that this starting assumption has some validity.

Intriguingly, the intricacy of human language may arise from culture: the repeated transmission of speech through many generations.

The article discusses research on animals and concludes "The list of no-longer-completely-unique human traits includes brain mechanisms, too. We are learning that neural circuits can develop multiple uses. One recent study showed that some neural circuits that underlie language learning may also be used for remembering lists or acquiring complicated skills, such as learning how to drive. Sure enough, the animal versions of the same circuits are used to solve similar problems, such as, in rats, navigating a maze." 

While I agree with what the article says about the importance of culture being passed, by language, from generation to generation, there is a big difference. The article avoids discussing how the neural circuits of the brain drive language - while my model suggests a mechanism, and further information will be published on this blog shortly. However the underlying idea can be demonstrated by the following CODIL statement:

MURDERER = Macbeth; VICTIM = Duncan; WEAPON = Dagger.
At the neuron level each word is the symbolic name of a node (which could be very complex when examined in detail), while the punctuation characters indicate the way the nodes are connected. A human reader of such a statement could easily translate it into a variety of different natural language sentences, and the CODIL language is powerful enough to go most (if not all) of the way to allowing such transformations to be processed. Thus the model has the ability to represent information in a neural net and the potential to generate natural-language-style sentences.
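A toy sketch shows how such a statement could be read into named nodes and then re-expressed as an English sentence. The parsing and the sentence template below are my own illustrative assumptions; the real CODIL interpreter is far richer than this.

```python
# Sketch: read a CODIL-style statement into (name, value) node pairs, then
# let a simple template play the role of a natural-language rendering.

def read_statement(text):
    """'MURDERER = Macbeth; VICTIM = Duncan.' -> list of (name, value) pairs."""
    pairs = []
    for item in text.rstrip(".").split(";"):
        name, value = item.split("=")
        pairs.append((name.strip(), value.strip()))
    return pairs

statement = "MURDERER = Macbeth; VICTIM = Duncan; WEAPON = Dagger."
pairs = read_statement(statement)

# One of many possible sentence renderings of the same linked nodes:
sentence = "{1[1]} was killed by {0[1]} with a {2[1]}.".format(*pairs)
print(sentence)  # Duncan was killed by Macbeth with a Dagger.
```

The same three node pairs could equally be rendered as "Macbeth killed Duncan with a dagger", which is the sense in which one network of linked nodes underlies many surface sentences.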

Wednesday, 29 August 2018

The Evolution of Speech - and the Information Flow model of Human Intelligence

In considering the evolution of human intelligence I will obviously need to address the relationship between speech and language and a recent article "Why Human Speech is Special" by Philip Lieberman helpfully reviews what is known about the evolution of speech, and includes some useful references.
I will be discussing the role of language and speech in some detail when I complete the relevant section of my report on the last 5 million years of human evolution. However it would be helpful at this stage to use the subject to highlight the difference in approach my model takes.
My approach is to identify the mathematical logic used by the brain to store the information it collects and to make decisions, and it is not directly concerned with the physical form of the messages or the organs which detect/generate the message. In effect it assumes that all neurons use the same messaging logic so that, in theory at least, any part of the brain, at any level, can exchange messages with any other part. Of course some will be assigned special functions - such as processing sight, or controlling movement - and have different inbuilt connections at the generic level. As with any other organ of the body, the relative size will depend on the evolutionary demands on the species and there will be accompanying genetic changes. If evolutionary pressures favour changes to the vocal organs one can expect changes in the parts of the brain most intimately involved - but my model predicts that the internal communication language that the brain uses will remain unchanged.
Of course no-one can deny that language plays an important part in the evolution of human intelligence. However, in information flow terms there are many different ways that animals might communicate if the evolutionary pressures are right. Once an effective communication route has opened, the language would evolve at a rate appropriate to the evolution of culture - which is far faster than genetic evolution. This raises the question: which came first? Evolution often develops new features by modifying existing features, and the important thing is to understand the evolutionary pressures that brought on the changes.
In the case of speech there appear to be two alternatives for the origins of an effective communications language:
  1. A long-drawn-out process where a simple proto-language was proving to have survival value but there was no really effective way of improving communication - apart from evolving new facilities over a genetic timescale.
  2. A much more rapid process where a vocal system that had previously evolved (for example) to aid in hunting by imitating animal calls turns out to be the basis for a new form of communication. 
On the evidence I have so far, the information flow model would suggest that the latter is the more likely. If you know of any evidence which points either way please feel free to comment below.

Science is about asking the Right Questions

... And then investigating, questioning and debating possible answers
This web site asks about the origins of human intelligence and explores a possible solution that many will find counter-intuitive. It starts from the assumption that the human brain is genetically little more than a rather large animal brain. The difference is that humans developed a tool (which we call language) which allowed us to efficiently copy tool-making and other skills from one individual to another, and our "superior" intelligence compared with other animals is almost entirely due to the extensive body of cultural information which language makes possible. Some of the deficiencies of the human mind - such as the small size of working memory, an unreliable long-term memory, and the problem of confirmation bias - are a direct result of our animal brain lurking underneath an outer coating of cultural learning.

Well, it's an interesting idea - but am I right? As a scientist I welcome critical comments and questions which challenge my theory, and also information on other research that supports the idea or suggests other exciting avenues to explore. Your comments below, or under a relevant blog post, will help me to either strengthen the theory - or perhaps blow it out of the water.


If you are asking me "How does your evolutionary model cope with a particular line of research?" please include a link to a source describing the research which is not behind a paywall.

In each case I will post a "Q&A" reply post attempting to give an answer (or to admit defeat!!!).

To see the questions I have answered search for
"Your Questions Answered"

Tuesday, 28 August 2018

Why Sex is an Important Factor in the Evolution of Human Intelligence

A paper in Nature about the DNA in the above bone fragment, found in the Denisova Cave, Russia, has been widely reported in the scientific press over the last few days. The fragment is about 50,000 years old and comes from a girl about 13 years old. The girl's mother was a Neanderthal while the father was a Denisovan with some Neanderthal ancestry. It has been known for some years that there was some interbreeding between Homo sapiens, Neanderthals and Denisovans - and that possibly other species of hominins have been involved - but the finding of a bone from a first-generation child is a surprise because early hominin remains are so rare.

The discovery emphasises the importance of sex in the development of human intelligence. Let me explain:

Monday, 27 August 2018

Brain Storms - Some of the Background Blog Posts

When I started thinking about the possible links between the computer language CODIL and the human brain a few years ago, I began by posting a series of "Brain Storms" on my blog Trapped by the Box, and some of the more interesting ones are listed below.

  1. Introduction
  2. The Black Hole in Brain Research
  3. Evolutionary Factors starting on the African Plains
  4. Requirements of a Target Model
  5. Some Factors in choosing a Model 
  6. CODIL and Natural Language
  7. Getting rid of those Pesky Numbers
  8. Was Douglas Adams right about the Dolphins?
  9. The Evolution of Intelligence - From Neural Nets to Language
  10. The Limitations of the Stored Program Computer
  11. An Evolutionary Model of the Brain's Internal Language
  12. Step outside the Box to understand the Evolution of Intelligence
  13. How the Human Brain works - Concept Cells, Memodes and CODIL
Some other early posts may also be of interest.
Over the last couple of years I have been attempting to bring these, and other ideas, together in a paper, and this has been slow work because it involves many different disciplines and I know some people will find the ideas controversial.  Two developments have helped in this process.

The first was doing a FutureLearn course, "Decision Making in a Complex and Uncertain World", put on by the University of Groningen last year. Doing this highlighted the importance of making a clear distinction between complex uncertain systems and complicated deterministic systems.

The second relates to the need to define a mathematical framework which would contain the model. My initial idea of an infinite recursive network really had too many degrees of freedom - but I found a good starting point was to map evolution onto a network with genetic information being passed through the generations. The complete network was a complicated but fully deterministic one, but because the individual animal nodes only had a short lifetime all that could be seen was a window onto the current living generation - which was complex because the paths to it were invisible.
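This "window onto the living generation" idea can be illustrated with a toy simulation. It is my own construction, not the author's formal model - the integer "genomes", the mutation rule, and the population size are all invented for the sketch:

```python
# Toy illustration: evolution mapped onto a network in which genetic
# information passes from generation to generation. The complete history is
# fully deterministic (given the seed), but an observer only ever sees the
# window of currently living nodes - the paths that produced them are hidden.

import random

random.seed(42)  # makes the whole network deterministic end to end

def next_generation(parents):
    """Each child copies a parent 'genome' with a small random mutation;
    the parents then die, closing the window on the past."""
    return [p + random.choice([-1, 0, 1]) for p in parents]

history = [[0, 0, 0, 0]]          # generation 0: four identical genomes
for _ in range(100):
    history.append(next_generation(history[-1]))

living = history[-1]              # all an observer of the present can see
print(living)
```

The full `history` list is the complicated-but-deterministic network; `living` on its own looks complex precisely because its ancestry is invisible, which is the distinction the Groningen course draws between complicated and complex systems.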

As a result I am currently drafting the paper, in a number of sections, and progress will be reported on this blog, together with notes on relevant topics, and answers to any questions you ask about how my model fits in with other research.

Sunday, 19 August 2018

Modelling how the Human Brain Works

On the FutureLearn course "Psychology and Mental Health", which I am currently following, I have just reached a section which starts:
The New Scientist has a very good, user-friendly, account of the human brain and how it works.
I would be the first to agree that the New Scientist is good at explaining science in easily readable terms – having been a fan since I bought a copy of the very first issue when I was a student. Its web site has good descriptions of the brain, its components, and other aspects of modern brain research. However there is a glaring omission – there is no overall description of "how it works" which explains how the neurons in the brain work together to support intelligent human mental activities.
The New Scientist is not alone. The Scientific American has just published a special issue “Humans: Why we’re unlike any other species on the planet” which fails to answer the same question. In fact you can read book after book about the brain, and also many research papers, and while there are a number of purely descriptive surmises, the best answer you can get is that there is currently no adequate predictive model of how the brain’s biological computer actually works to convert activity at individual cell level into a system capable of doing what the human brain can obviously do.
I am currently working on "An Evolutionary Model of Human Intelligence", which attempts to show how activity at the neuron level supports the high-level mental activities which characterize the human species, and it is perhaps appropriate to outline my ideas on the subject here, together with details of my research.

Monday, 30 July 2018

The Unlikely Origins of my current Evolutionary Research

The origins of my current research were anything but planned. Between 1959 and 1962 I studied for a Ph.D. in theoretical organic chemistry (no computers then), and then worked as an information scientist - where I read about how computers might help - but had no access to them. However I came across the ideas of Vannevar Bush in "As we may think" and thought I ought to know more about computers.
In 1965 a casual conversation about salaries led me to decide to switch careers and work with computers. Shell Mex & BP had one of the most advanced commercial computer systems in the UK, and my first day's experience dramatically showed me how computer systems can go wrong. As a result I became very interested in the strengths and weaknesses inherent in the stored program computer - and this interest led to my coming up with an unorthodox solution to getting people and computers to "understand" each other.
Details of my experiences at Shell Mex & BP, and of the design study that eventually led to the design of CODIL (a COntext Dependent Information Language) have been documented in a note, "The SMBP Story" that I have written for the archives of the LEO Computer Society. A further note is planned shortly extending the story to describe the later research, supported by the LEO computer pioneers David Caminer and John Pinkerton. It will also mention why the initial project closed following the formation of ICL.
The story also includes much information on what went on inside large commercial computer installations in the mid-1960s.

Mind and Consciousness

As a scientist I feel I must always be open to different ways of looking at my research, and as a result I have decided to do the FutureLearn course "Matter and Mind: A Philosophical Understanding", as I am sure that the views raised by the course, and in student discussions, will be thought-provoking. It therefore seems a good idea to record what I currently understand by the terms "Mind" and "Consciousness" before I start the course.
Google provides the following definitions, which I find satisfactory.
Mind:
  1. the element of a person that enables them to be aware of the world and their experiences, to think, and to feel; the faculty of consciousness and thought.
  2. a person's ability to think and reason; the intellect.
Consciousness:
  1. the state of being aware of and responsive to one's surroundings.
  2. a person's awareness or perception of something.
The brain model I am using assumes a network of neurons recursively organised into higher-level nodes which hold the information the brain has about the environment in which it lives. Messages come in from the sense organs and are passed into the network, where most rise to a point at which they are recognised and can be handled automatically. However some more important messages reach the effective top of the network.
In this framework the "mind" can be considered to be the sum total of knowledge held in the network, while "consciousness" is the awareness of the active messages that have risen to the top - in effect the messages in the brain's "working memory." Of course human working memory has a limited capacity of about half a dozen items, and we can get confused, and forget what we were doing, if too many messages become active. A very important message (say someone shouting "Fire" in your ear) can become dominant and trigger appropriate emotional responses. In addition to responding to external inputs we can simply think about a task and guide our "consciousness" window on a tour of the "mind" - in the process remembering the past or reviewing the present.
There are two immediate conclusions from this model:
  • The "mind" is information stored in a network of living cells - and that this information is lost when the brain dies.
  • While human brains may support a wider and deeper "mind" and have a wider "consciousness" window, there is no reason to believe the brains of higher animals are any different in the way they function.
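The working-memory picture above can be given a minimal sketch in code. The class name, the importance threshold, and the capacity of seven are my own illustrative choices (the seven follows the post's "about half a dozen items"):

```python
# A minimal sketch of "consciousness as a bounded working-memory window":
# ordinary messages queue up and the oldest are forgotten when the window
# overflows, while a dominant message jumps straight to the front.

from collections import deque

WORKING_MEMORY_CAPACITY = 7   # roughly "half a dozen items"

class Consciousness:
    def __init__(self):
        # deque with maxlen silently discards from the far end when full -
        # modelling how we lose track of what we were doing.
        self.window = deque(maxlen=WORKING_MEMORY_CAPACITY)

    def notice(self, message, importance=1.0):
        if importance > 5.0:
            # A dominant message (someone shouting "Fire!") becomes the
            # front of the window and displaces something else.
            self.window.appendleft(message)
        else:
            self.window.append(message)

c = Consciousness()
for i in range(10):                 # too many active messages...
    c.notice(f"thought {i}")        # ...so the earliest ones are forgotten
c.notice("FIRE!", importance=10.0)
print(list(c.window))
```

After the loop only the last seven "thoughts" survive, and the dominant message sits at the front of the window - a crude but concrete version of the capacity limit and dominance effects described above.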

Sunday, 29 July 2018

Comments on the "A DARPA Perspective on Artificial Intelligence" video

My attention has been drawn to the following recent video by the web site Conscious Entities, which is useful in explaining some of the background to my research.

This video divides the history of AI into three stages:
  1. Wave 1 AI is based on handcrafted knowledge: experts took knowledge that they had about a particular domain and described it in rules that they could fit into the computer. The computer was then used to explore the implications of those rules. Applications included chess. This approach can be very successful when the rules are well known in advance, but such systems have no learning capability and handle uncertainty poorly.
  2. Wave 2 AI is based on statistical learning and has been very successful in areas such as voice recognition and analysing photographs. However it is very dependent on specially engineered statistical techniques appropriate to the particular domain, often using large data sets. Such approaches, while successful at the predefined task, lack reasoning capabilities. A mathematical technique, referred to as neural nets, is used to distinguish between different patterns.
  3. We now need a third wave of AI technology, linked to contextual adaptation, to produce systems that build explanatory models which allow them to characterize real-world phenomena - because it is clear humans are doing things in a different way.
This is very relevant in explaining the problems I had with my own research.

In 1967 I was asked to come up with ideas as to how one of the most complex commercial applications (a contract pricing system dealing with about 250,000 customers and 5,000 products) could be moved to the next generation of computers. I saw the market place as posing complex real-world problems, and what was needed was a system that could dynamically interact with the sales staff. I concluded that the top priority was a system that the sales staff could understand and control - and this required the system to be able to explain what it was doing in terms that the sales staff could understand, which meant using a common language and having a communication "window" which did not overload the human short-term memory. Within the year I had discovered that my "specialist" proposal could be generalized to become task-independent, and potentially able to cope with poorly defined and fuzzy information. CODIL (COntext Dependent Information Language) was created. By the early 1970s I found that a model which had started as a solution to a complex commercial problem could handle a wide range of tasks, including a powerful heuristic problem solver called TANTALIZER.
The problem was that I was attempting to model how people processed a wide range of information processing tasks on a computer - so what I was doing must be "Artificial Intelligence." Unfortunately it was conceptually very different from what the video calls "Wave 1 AI", and this made getting finance very difficult. The work finished when a new boss was appointed who was an enthusiastic supporter of 1970s-style AI.
Once I had retired I could look at the problem again - and discovered that there are no predictive models of how the neurons of the human brain support our more intelligent activities. However I then came up against a problem with "Wave 2 AI": I was interested in networks of biological neurons and how they work, but this is very different from how AI uses "neural nets" - a very sophisticated mathematical technique needing much human engineering support. The difference between the approaches is such that there appears to be very little similarity between a biological network of neurons and the mathematically complex neural nets of AI.
So on to "Wave 3 AI" which will be concerned with reasoning and context - which is what I was proposing in 1967. Perhaps after 50 years my idea will prove to have some relevance.