On the FutureLearn course "Psychology and Mental Health", which I am currently
following, I have just reached a section which starts:
I would be the first to agree that the New
Scientist is good at explaining science in easily readable terms – having been a
fan since I bought a copy of the very first issue when I was a student. Its
web site has good descriptions of the brain, and its components, and other
aspects of modern brain research. However, there is a glaring omission: there
is no overall description of “how it works” which explains how the neurons in
the brain work together to support intelligent human mental activities.
The New Scientist is not alone. Scientific American has just published a special issue “Humans: Why we’re
unlike any other species on the planet” which fails to answer the same
question. In fact you can read book after book about the brain, and also many
research papers, and while there are a number of purely descriptive surmises, the
best answer you can get is that there is currently no adequate predictive model
of how the brain’s biological computer actually works to convert activity at
the individual cell level into a system capable of doing what the human brain can
obviously do.
I am currently working on “An Evolutionary Model of Human Intelligence”, which attempts to show how
activity at the neuron level supports the high-level mental activities which characterize
the human species, so it is perhaps appropriate to add my ideas on the subject here,
with outline details of my research.
I would suggest that the current mystery
about how the brain works is simply a repeat of history. We believe we are
something special and for many centuries we believed that the Earth was the
centre of the Universe. Before Darwin we believed that the physical human body
was a special creation – but we now know that biologically we are animals with
very similar genes to our nearest ape relatives. But surely we must all think that
our brain is something exceptional – and if we try hard enough we might find the
magic genes which make our brain unique. The New Scientist, judging by the wording on the cover of its Instant Expert book, clearly believes that the brain is the most complicated object in the known universe.
Of course the very detailed research that
has been carried out throughout the world on the brain has proved very valuable in fields such as
medicine and provides intricate details of some of the internal workings. But
digging deeper and deeper to try and find the philosopher’s stone of
intelligence will not succeed if the “marvellous” biological feature that makes
us fundamentally different from other animals does not exist. Perhaps the solution is
to stand well back – and make no assumptions about how clever we humans think we
are.
Before exploring this further let us
consider an analogy. The basic functions of the brain and the stored program computer
have one thing in common. Both accept information, process it, and then use
the result. In a computer the input is a
program of rules and some data which corresponds to those rules. The
output is processed data in a form that is more useful than the original data.
There needs to be a processor to interpret the rules and a medium on which the
rules and the data are recorded. In terms of understanding the basic principles
of how a computer works it does not matter whether the computer is made of
clockwork (as Babbage tried to do), or valves, or some kind of fancy
electronics. It also does not matter whether the data is stored on paper tape,
in an electronic delay tube, an array of ferrite rings, or a modern memory
stick. The output data might appear on a video screen, be printed on paper, or
used to control some mechanical device – perhaps a robot. Of course the usefulness of a particular
computer may be critically dependent on the components from which it is made –
and what it does will depend critically on the program – but in every case the
underlying concepts are the same.
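To make the analogy concrete, here is a minimal sketch (in Python, with an entirely made-up rule format and function name of my own, not taken from any real system) of the idea that a computer is simply rules plus data plus a processor, regardless of what the processor and the storage medium are made of:

```python
# A minimal, purely illustrative sketch of the "rules + data -> processor -> output"
# idea. The rule format and the function name are hypothetical, not from any real system.

def run_program(rules, data):
    """Apply each rule (a condition plus an action) to the input data.

    The same logic would hold whether the rules were stored on paper tape,
    in ferrite rings, or on a memory stick: the storage medium is irrelevant
    to the underlying concept.
    """
    results = {}
    for condition, action in rules:
        if condition(data):                  # the processor interprets the rule
            results.update(action(data))     # and produces more useful output data
    return results

# A trivial "program": flag large orders and compute a total.
rules = [
    (lambda d: d["quantity"] > 100, lambda d: {"bulk_discount": True}),
    (lambda d: True,                lambda d: {"total": d["quantity"] * d["unit_price"]}),
]

print(run_program(rules, {"quantity": 150, "unit_price": 2.5}))
# -> {'bulk_discount': True, 'total': 375.0}
```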
So let us forget about the particular
chemical and biological structures which make up the brain, and ask what the
underlying concepts are for a “biological computer.” We know that every animal
brain accepts messages from the environment through its sense organs, and its
outputs are messages which control the animal’s body in a way which, hopefully,
will maximize the chance of survival. While the brain may have some minimal
built-in program, called instinct, in effect each animal brain has to build its own program, and
learning is a time-consuming process. In addition the brain needs a lot of
energy to function. When the animal dies the program it has carefully built over
its life is lost. In effect the high cost
of learning puts a cap on the size of a brain, as no species is going to evolve a
brain bigger or cleverer than it needs to maximize survival chances.
Humans appear to be exceptions, but while they use their brains in a
different way there are no significant differences at the biological (i.e. genetic) level. Of
course human brains are bigger and may pack in more neurons, but is a bigger
brain in an animal which uses its brain a lot any more significant, in
evolutionary terms, than a giraffe’s long neck or an elephant’s trunk? If
humans use their brain more than other animals one would expect the brain to
get bigger. But simply increasing the size does not automatically alter the
way it works. The important thing that makes humans different is that when a human dies a
significant amount of the information they have learnt during their lifetime can be
passed to the next generation, using language.
So let us assume that (apart from size
factors) our brain works in the same way as a typical animal brain – with no
“aren’t humans super-special” features – and that the key issue is to understand how humans
learnt to transfer information from one generation to another.
This is where my earlier research becomes
relevant, although when it started it had nothing to do with the brain – but a
lot to do with how we use our brains to carry out complex tasks. In 1967 I was
a systems analyst looking at one of the most complicated commercial computer
applications then in use and I came up with an unconventional suggestion as to how
this might be changed to allow the sales staff to take control once computer
terminals became available. Within a year I was working for a computer
manufacturer on the design of a novel type of computer. The idea was to have a
computer with a user-friendly symbolic assembly language which, if built, would
allow the human and the computer to work
together on tasks which were difficult to predefine. This research led to
the development of the CODIL computer language, which was tested on a
wide range of applications, including a heuristic problem solver, TANTALIZE, which solved
15 consecutive New Scientist Tantalizers (a weekly brain teaser for humans). A version, MicroCODIL, was trial-marketed as an educational
package and received many favourable reviews. Unfortunately the computer industry believed
that the way forward was with the incomprehensible black boxes which we all now
take for granted, and active research on CODIL stopped in 1988.
A few years ago I decided to reassess the
CODIL research in the light of more recent work. This suggested that the approach
was best understood in terms of a neural network (which I had originally considered) and that CODIL could be considered as a language for
transferring information between two neural nets – one the brain of the human
user, the other the CODIL computer. In addition the CODIL interpreter imitated,
to a useful extent, the activities of the human brain’s working memory.
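To give a flavour of what this means in practice, here is a deliberately over-simplified sketch (in Python; it is not real CODIL syntax, and all the attribute names are my own invention) of statements being matched against a working memory of facts:

```python
# A deliberately over-simplified illustration, NOT real CODIL syntax or code.
# "Facts" play the role of working memory; statements are matched against them.

# Working memory: the facts currently in focus, as attribute = value items.
facts = {"TASK": "SORT POST", "ITEM": "LETTER"}

# Statements: a list of condition items plus an item to add when they all match.
statements = [
    ([("TASK", "SORT POST"), ("ITEM", "PARCEL")], ("ACTION", "WEIGH PARCEL")),
    ([("TASK", "SORT POST"), ("ITEM", "LETTER")], ("ACTION", "CHECK ADDRESS")),
]

def interpret(facts, statements):
    """Fire every statement whose condition items are all present in the facts."""
    for conditions, result in statements:
        if all(facts.get(attr) == value for attr, value in conditions):
            attr, value = result
            facts[attr] = value              # the result becomes part of the facts
            print("Deduced:", attr, "=", value)

interpret(facts, statements)
# Only the second statement fires, because the facts contain ITEM = LETTER.
```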
So CODIL is a powerful (but highly
unconventional) programming language with a user-friendly interface based on
the human brain’s working memory, which works by interchanging information
between different neural nets. This immediately raises the possibility that it
could be a model for the way human language developed, with a proto-natural
language passing information between parents and their children.
I am currently drafting a series of detailed reports, which should appear over
the next month or two, describing a pathway, starting with single neurons, to
explain the evolution of human intelligence. This pathway suggests the logical
way that information is stored and processed within brains and in social
interactions between animals and humans. In effect it defines the logical
structure of the “biological computer” in our head. It does not deal directly
with the physical mechanisms that have evolved to implement these pathways.
The main reports will deal with the
following topics and links will be added as the notes are completed:
Part 1 – A Re-assessment of the CODIL Language System
This part reassesses the original CODIL research with an emphasis on the basic
decision making process, the human interface modelling human working memory,
and the way information can be organized into a network.
Part 2 – An Information Flow Model of how the Brain and Human Culture Evolved
In a stored program computer information
is held in a very large numerically addressed array – while the brain uses an
associatively addressed network of interlinked nodes which behave in a way
compatible with the theory of evolution. The model starts by considering the
flow of genetic information in a network where every animal that ever lived is
represented as a node, with input links from parents and output links to
offspring. This is expanded to include both
the network of individual neurons in each individual animal's brain and the network
involving the exchange of information between separate animals. This can be
generalized into an infinitely recursive network where a node could be a single
neuron, a group of neurons working together, a whole brain, a herd of social
animals exchanging information about the location of food or predators, William
Shakespeare writing a play, or a group of humans on a committee such as the
Security Council of the United Nations. In mathematical terms all nodes are
equivalent – but the amount of information they represent, and the complexity
of the messages they exchange, depends on the depth of recursion and the number
of links involved.
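As a purely illustrative sketch of this recursive structure (in Python, with class and field names of my own invention rather than anything from the formal model), a node with input links, output links and an optional internal network might look like this:

```python
# Purely illustrative sketch of the recursive network idea; the class and field
# names are my own invention, not part of any published model.

class Node:
    """A node may stand for a neuron, a brain, a herd, or a committee.

    Mathematically every node looks the same: it has input links, output links,
    and optionally an internal network of sub-nodes (the recursion).
    """
    def __init__(self, name, sub_nodes=None):
        self.name = name
        self.inputs = []                     # links from parent / upstream nodes
        self.outputs = []                    # links to offspring / downstream nodes
        self.sub_nodes = sub_nodes or []     # a node can itself be a network

    def link_to(self, other):
        self.outputs.append(other)
        other.inputs.append(self)

    def depth(self):
        """Depth of recursion below this node: a rough measure of how much
        information the node can represent."""
        return 1 + max((n.depth() for n in self.sub_nodes), default=0)

# A brain is a node containing neuron nodes; a herd is a node containing brains.
neurons = [Node(f"neuron-{i}") for i in range(3)]
brain = Node("animal brain", sub_nodes=neurons)
herd = Node("herd", sub_nodes=[brain, Node("another brain")])

parent, offspring = Node("parent"), Node("offspring")
parent.link_to(offspring)      # flow of genetic (or learnt) information

print(herd.depth())            # -> 3: herd -> brain -> neuron
```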
Part 3 – The CODIL Model as a Window onto the Infinite Network
In effect the
infinite recursive network is rather like a sheet of paper for recording the
information that a brain knows while the CODIL model is rather like a pen which
writes the information into the network. By linking the two together, we have a
dynamic neural net model which can morph from trial and error learning of
patterns, via set processing of possibly fuzzy or incompletely known
information, to handling sophisticated rule-based tasks. Because there is no
advantage in an animal having a more powerful brain than it can use in a
lifetime, one would expect weaknesses to become apparent once humans
started to use their brains more intensively. The model suggests reasons for
limitations in human long term memory and in handling negative ideas.
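To illustrate the point, and only as a hedged sketch of my own rather than any published algorithm, the same matching mechanism can cover both fuzzy pattern recognition and strict rule following by scoring a statement by the fraction of its items found in the current facts: a threshold of 1.0 behaves like a rule, while a lower threshold tolerates incomplete information.

```python
# Hedged illustration only: scoring how well a statement's items match the facts.
# With threshold = 1.0 the behaviour is strict and rule-like; with a lower
# threshold the system tolerates fuzzy or incompletely known information.

def match_score(conditions, facts):
    """Fraction of condition items that are present in the current facts."""
    hits = sum(1 for attr, value in conditions if facts.get(attr) == value)
    return hits / len(conditions)

facts = {"SHAPE": "ROUND", "COLOUR": "RED"}          # SIZE is unknown
apple_pattern = [("SHAPE", "ROUND"), ("COLOUR", "RED"), ("SIZE", "SMALL")]

score = match_score(apple_pattern, facts)
print(score)                   # -> 0.666...  (2 of 3 items known to match)
print(score >= 1.0)            # strict, rule-based test fails
print(score >= 0.6)            # tolerant, pattern-based test succeeds
```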
Part 4 – Human Evolution – the Importance of Tools and Language
The key question
is, of course, what happened to make humans able to use information in a far
more creative way than any other animal, and can the above model help to explain
it? The use of tools undoubtedly played an important part and in evolutionary terms tools
are only significant if the knowledge of how to make and use them is passed
from generation to generation. Research into early man shows that tool making
and brain size increased slowly over a period of about 5 million years – which is
what one would expect from normal genetic evolution. Very approximately 100,000
years ago the rate of tool making increased – and has been increasing on an
exponential scale ever since. What one might expect is that the brain started
to increase in size to be able to handle the extra information – but if anything the human brain has
started to get smaller, suggesting the brain is now bigger than we need in order to survive.
The above model
suggests that there is a major tipping point. Learning by language is quicker
than trial and error learning and also uses less memory – allowing our
unmodified “animal brain” to hold more information about new and better tools –
removing the need for further increases in size. What is more, once the
language pathway has been used there are other tipping points that accelerate
the process. Now of course most information is held communally and no
individual needs to know more than a tiny fraction of, for example, the
contents of the World Wide Web.
Concluding Remarks
The model outlined above suggests a path by
which human intelligence evolved by starting with the counter-intuitive assumption that
the human brain is just like any other animal brain – except a bit bigger and perhaps
more efficient. In evolutionary terms it is no more unusual than the giraffe’s
long neck or the elephant’s flexible nose. The crucial difference is that when
animals die all the information that had been stored in their brains is lost while
humans have invented a special tool, called language, which allows significant
amounts of knowledge to be exchanged between generations. In effect the model suggests
that human intelligence is almost entirely due to the vast body of cultural
information shared between billions of people over perhaps 100,000 years.
If I am correct and this model is significantly new research I am left with a dilemma. If I were
aged 29 (the age when I started the CODIL research) and knew what I do now, I
would be looking for a better job, submitting research proposals, and perhaps
writing a book. Over fifty years later, and in deep retirement, my options are
far more restricted. So which of these options should I take:
1. Work on the assumption that to
the establishment all counter-intuitive ideas are wrong – so perhaps I should put the voluminous
project notes in the waste paper collection, forget about research and take
a cruise round the world to relax.
2. Assume that it was no more than
an interesting idea – but that at least some of the paperwork about attempting
to build an inherently human-friendly computer could be worth archiving for
those interested in the history of science.
3. Continue with the blog, filling
in the gaps, answering questions, commenting on how my model compares with
other research, etc. in the hope of finding younger people who would like to
explore the ideas further while I am still able to give help and advice. If
there is enough interest appropriate arrangements could be made for archiving
any relevant research notes.
So dear reader, having got this far, why not comment below to let
me know what I should do – or you could ask a question about
how my model relates to other brain-related research. If you include accessible
links to relevant online material I will post a reply on the blog. After all, the best test of my research is how it stands up to critical questioning.