Monday, 15 October 2018

Natural Language: The Evolution of Grammar

A reference in the blog Babel's Dawn alerted me to an interesting paper, "An anthropic principle in lieu of a 'Universal Grammar'", by Hubert Haider of the Department of Linguistics & Centre of Neuroscience at the University of Salzburg.

Haider starts by asking three questions:

Among the many unanswered questions in grammar theory, the following figure prominently.
  1. First, what is it that enables children to successfully cope with the structural complexities of their mother tongue while professional grammarians tend to fail when modelling them? 
  2. Second, what determines the narrow system corridor for human grammars? 
  3. Third, are the grammars of human languages the offspring of a single proto-grammar instantiating a "Universal Grammar" (monogenic) or are the shared traits of human grammars the result of convergent changes in the grammars of human languages of diverse ancestry (polygenic)? 
The last of these questions is mainly concerned with debunking the ideas of Chomsky, and I will pass over it in this blog post. More importantly, he argues strongly that natural languages, and the grammars associated with them, are the product of normal evolutionary forces and are limited by the resources present in the human brain. He does not detail the nature of the resources provided by the brain - which is very relevant to my research into the evolution of human intelligence. For this reason I quote extracts from his text below, and then give a brief example of how my model demonstrates the way in which the neural network in the brain supports the resources his approach needs to support language.

The Many Human Species revealed by DNA studies

When the early evidence emerged that we carry genes inherited from the Neanderthals, I posted a blog, "More evidence for the braided human evolutionary tree", about the possible significance of hominins splitting into separate groups and then interbreeding at a later stage. A couple of months ago I blogged "Why Sex is an Important Factor in the Evolution of Human Intelligence", developing the idea further in terms of the transfer of culture as well as genes. The current New Scientist (13 October 2018) contains an article by Catherine Brahic entitled "The Ghost Within", which describes how traces of unknown species of early humans lie hidden within our genome.
This article nicely summarises the recent DNA research which has revealed evidence for a number of "ghost" populations of the genus Homo whose genes we have inherited but which have not yet been identified in fossil remains. Parallel research into chimpanzees suggests a "ghost" population of bonobos, while research into the ancestry of the domestic dog suggests that a "ghost" population entered the New World (probably when humans crossed from Asia via Alaska) but was later replaced by dogs introduced by Europeans.

If one thinks about it, the fastest way for a slow-breeding species to evolve could well be to repeatedly split into different smallish groups, where each group adapts to a different environment and, in effect, "tests out" new gene variants. A degree of inbreeding will encourage both good and bad gene variants - but cross-breeding with other, long-separated groups will help to concentrate the better gene variants and eliminate the less satisfactory alternatives. In the case of humans this will not only have helped shape our physical bodies but will also have helped to merge the best features of different cultures.

Sunday, 14 October 2018

A Simple Example of how CODIL works

J M asked "How would these symbiotic - easy to understand computers - look like? how would they work?"

The computer hardware would look like a conventional computer, and in fact any commercial system would probably combine both technologies. The difference is in the way you would interact with it.

The original idea (see The SMBP Story) related to a very large commercial sales contract program in 1967.  The workings can best be explained in terms of a very simple network model.

Imagine a board with a number of light bulbs on it - each light bulb represents a commercial concept such as CUSTOMER or PETROL - and some bulbs could also have numbers associated with them. If a customer contract referred to a discount it might be written:

CUSTOMER = SMITH; PRODUCT = PETROL; QUANTITY >= 1000; DISCOUNT = 10%

This would indicate that the bulbs for CUSTOMER and SMITH were linked, as were the bulbs for PRODUCT and PETROL, etc. In addition there would be links between the linked pairs.

Similarly there would be a pricing table with statements such as:

PRODUCT = PETROL; UNIT PRICE = 15

with the appropriate bulbs linked.

To price a delivery the nodes equivalent to the following would be switched on:

PRODUCT = PETROL; QUANTITY >= 1000

and this would automatically turn on the lights for

DISCOUNT = 10% and UNIT PRICE = 15

The important thing is that everything is done in terms of concepts the user can understand, and the system deduces what needs doing. The approach is application-independent and can easily show the user what it is doing and why. Effectively it is a transparent system, where conventional computers are black boxes.

In practice the experimental interpreters hold all the links in a table and for each input they automatically look up the relevant links. Instead of a very large pricing program which will price any sale, you have a table lookup which - in effect - works out a purpose-built mini-program for each transaction.
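The table-lookup idea above can be sketched in a few lines of Python. This is a minimal illustration, not the original CODIL interpreter: the rule table, the concept names, and the propagation loop are all my own simplifications, chosen to match the petrol example.

```python
# Each rule pairs a condition over the currently "lit" concept nodes
# with the nodes it switches on. The rules below mirror the contract
# and pricing-table statements in the example (names are illustrative).
rules = [
    (lambda facts: facts.get("QUANTITY", 0) >= 1000, {"DISCOUNT": "10%"}),
    (lambda facts: facts.get("PRODUCT") == "PETROL", {"UNIT PRICE": 15}),
]

def price(facts):
    """Repeatedly apply the rule table until no new nodes light up."""
    active = dict(facts)
    changed = True
    while changed:
        changed = False
        for condition, consequences in rules:
            if condition(active):
                for node, value in consequences.items():
                    if active.get(node) != value:
                        active[node] = value
                        changed = True
    return active

# Switching on the nodes for a delivery...
result = price({"CUSTOMER": "SMITH", "PRODUCT": "PETROL", "QUANTITY": 2000})
# ...automatically lights DISCOUNT = 10% and UNIT PRICE = 15
```

Note that there is no pricing "program" as such: each transaction simply activates whichever links in the table apply to it.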

A detailed paper is being drafted for this blog and will appear shortly.

Will robots outsmart us? by the late Stephen Hawking

There is an interesting article, "Will robots outsmart us?", in today's Sunday Times Magazine. While I don't accept all Stephen's predictions I was most interested to read:
When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours.
Later he says:
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. 
Of course we know what happened last time a super-intelligence came into existence. About half a million years ago the Earth was populated by a great variety of animals of comparatively low intelligence. All the higher animals had brains that worked in roughly the same way, and how much they could learn was limited because everything they learnt was lost when they died. Then one species, which we call Homo sapiens, discovered a way to recursively increase its own intelligence. It was already good at making tools, but for several million years the cost of trial-and-error learning had limited what it could do. But then it invented a tool to boost intelligence, which we call language. Language not only made it possible to make better tools, but also made it possible to recursively build a better language generation by generation. So some 5000 generations later the Earth is home to a super-intelligent species ...

And are the goals of this species aligned with the goals of the millions of other species? Of course not. Billions of animals are kept as slaves to be killed and eaten, while the homes of countless more have been, or are being, destroyed.

If we invent a super-intelligent AI system, why should it treat us with more respect than we have shown for our animal relatives?

A new book, "Brief Answers to the Big Questions" by Stephen Hawking, is published by John Murray on Tuesday.

Thursday, 4 October 2018

Why does CODIL differ from other computer languages?

This query came up on a FutureLearn course, where a comment read: "Christopher, to be honest I don't think the world needs any more computer languages or most of the ones it already has for that matter"

The vast majority of computer languages, such as COBOL, Fortran, C, Java, etc., are designed to process information on a conventional stored program computer where the memory consists of numbered boxes which contain numbers. The numbers in the boxes may represent coded data (often in several different formats), numeric addresses, or coded instructions. This approach was originally designed to handle a class of well-defined mathematical tasks which humans find difficult to do quickly and accurately, so it is not surprising that modern computers are incomprehensible black boxes when viewed by the average human being. They were deliberately designed to do efficiently things which people are bad at doing.

CODIL uses a completely different memory structure which is based on a dynamic network which attempts to mimic the working of the brain's neural network. The aim is to produce a transparent information processor (rather than a black box) which is easy for the average human to understand and use for a range of potentially complex information processing tasks. It is particularly aimed at complex tasks where a dynamically flexible human interface is advantageous - and so fills the area where conventional computers are weakest.

In CODIL the array of numbered boxes which make up a conventional computer memory is replaced by a large number of nodes, where each node consists of an on/off switch and a label (which is for the benefit of the human user). The human user defines the names of the nodes and the wires linking the nodes.
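The node structure described above can be sketched as a small Python class. This is a hypothetical illustration of the idea - a labelled on/off switch with user-defined links - not the actual CODIL implementation; the class and method names are mine.

```python
class Node:
    """One node in the network: an on/off switch plus a label."""

    def __init__(self, label):
        self.label = label   # the name, for the benefit of the human user
        self.on = False      # the on/off switch
        self.links = []      # user-defined wires to other nodes

    def switch_on(self):
        """Turn this node on and propagate along its links."""
        if not self.on:      # guard against re-visiting nodes in a cycle
            self.on = True
            for linked in self.links:
                linked.switch_on()

# The human user names the nodes and wires them together.
customer, smith = Node("CUSTOMER"), Node("SMITH")
customer.links.append(smith)

customer.switch_on()
print(smith.on)   # True - the linked node lights up too
```

Because every node carries a human-readable label, the system can always show the user which concepts are currently active and which links made them so - the transparency that the black-box memory of a conventional computer cannot offer.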