Navigating an endless ocean of complexity

published 14 Oct 2013
Photo: dagnyg (flickr)

This is the second installment of my article about the brain and its capacity for dealing with complexity. The first part looked at some of the early roadblocks encountered by Artificial Intelligence researchers. This article looks at the nature of complexity itself and the brain's architecture....


Alan Turing, one of the greats of computer science, who formalised the theoretical model of computability image: Wikipedia

The brain is, to a close approximation, an information processing device, one that can be modelled as a Turing machine, the standard theoretical model of a computer. There’s no reason to suppose it is anything else, and we know enough about it to know that it can’t really be anything else. If the brain can be modelled as a computer, basic evolutionary theory tells us that what drives it must be an algorithm based on selfish genetic selection. The brain is thus a computer running an algorithm which optimises gene-survival: in any given situation, it will make those choices that most improve the probability that the organism’s genes will proliferate.

"the brain is a computer running an algorithm which optimises gene-survival"

This line of reasoning presents the brain as homo economicus. Actually, since we’re talking about probabilities, it would be more accurate to describe it as a self-interested agent with rational expectations rather than as the traditional rational actor, but that’s beside the point.

There are two problems which get in the way of the rational actor model. The first is empirical: a huge wealth of experimental evidence quite unambiguously demonstrates that humans do not make decisions by rationally calculating probabilities against a self-interested ‘utility function’. This evidence also contradicts the various weaker versions of the model, in which humans are supposed to act as if they had rationally calculated probabilities, or in which the rationality is present at an aggregate, collective level.

In this case, the empirical refutation is the minor objection. The really major problem is that any such rational calculation of outcome probabilities is entirely impossible, given the complexity of the environment. There are simply too many variables: far, far too many. There will never be a computing device capable of running an optimisation algorithm that rationally calculates and compares the probability distributions of fitness outcomes for the various competing choices that human brains face.

"there are problems which, although theoretically solvable, will never be solved by a computer within the life of the universe"

Complexity theory is one of the major theoretical contributions of computer science. It tells us that there are problems which, although theoretically solvable, will never be solved by a computer within the life of the universe. These problems are known as intractable. They are a general class of problems in which the number of possible solutions grows much faster than the number of independent choices involved. When such a problem involves a large number of independent choices, each with a large number of options, the number of candidate solutions that would have to be evaluated in order to find the best one becomes so vast that the search could take longer than the lifetime of the universe, regardless of increases in computing power.
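To get a feel for what that means in practice, here is a toy back-of-envelope sketch (the machine speed and problem sizes are illustrative assumptions, nothing more): even a problem made of nothing but independent yes/no choices outruns any conceivable computer once the number of choices climbs past a hundred or so.

```python
# Toy illustration with invented numbers: a problem built from n independent
# yes/no choices has 2**n candidate solutions. At an assumed rate of a billion
# evaluations per second, how long would exhaustive search take?

SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.4e10      # roughly 13.8 billion years
EVALS_PER_SECOND = 1e9              # assumed machine speed

for n in (40, 80, 120, 160):
    candidates = 2 ** n
    years = candidates / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n:3d} choices: {candidates:.1e} candidates, ~{years:.1e} years "
          f"(~{years / AGE_OF_UNIVERSE_YEARS:.1e} lifetimes of the universe so far)")
```

The number of choices grows linearly down that list while the search time grows exponentially; that gap is the whole story of intractability.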

Claude Shannon, a theoretical giant of computer science, who made the first analysis of the complexity of chess image: Wikipedia

The game of chess provides a simple example of this principle in action. There are approximately 10^120 ways (the Shannon number) in which the first 40 moves for each side can be played. That’s a very big number. To put it into context, if you took every atom on Earth and turned it into a whole new Earth, then added up all the atoms, you still wouldn’t be close. Chess can thus be considered an intractable problem. Although the number of possible moves for each player in any given position is reasonably small (the average is estimated to be about 30), the number of possible combinations of moves is so large that it will never be possible to test them all to find the best solution.
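The estimate itself is easy to reproduce from the figures quoted above (roughly 30 legal moves per position, 40 moves by each side, i.e. 80 plies); the sketch below is just the arithmetic behind the Shannon number, nothing more.

```python
import math

BRANCHING_FACTOR = 30   # rough average number of legal moves per position
PLIES = 80              # 40 moves by each side

game_lines = BRANCHING_FACTOR ** PLIES
print(f"about 10^{math.log10(game_lines):.0f} possible game lines")
# Shannon rounded 30 x 30 = 900 up to 10^3 per pair of moves, which over
# 40 pairs of moves gives the famous figure of 10^120.
```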

Chess computers only ever attempt to calculate 7 or 8 moves ahead of the current position and, even then, use lots of clever tricks (such as alpha-beta pruning) to reduce the number of positions that they have to check. They supplement these calculations with general rules for evaluating positions, known as heuristics, which capture some of the more strategic aspects of the game. It is only when it comes to endgames with 7 or fewer pieces on the board that they can fully calculate the position and play a perfect game. Calculating even these simple endgames is only possible thanks to a clever trick known as retrograde analysis, in which positions are worked backwards from all possible checkmates, radically reducing the number of possibilities that have to be checked. Chess computers are far from perfect; they are just somewhat less far from it than the best humans.
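The division of labour between limited search and heuristic judgement can be sketched in a few lines of code. The example below is not how any real chess engine is implemented – it uses a toy take-away game (take 1 to 3 counters, whoever takes the last one wins) purely so that the example is self-contained and runnable – but the structure is the one described above: a depth-limited search that hands over to a heuristic evaluation when its budget runs out.

```python
# Minimal sketch of depth-limited game-tree search with a heuristic fallback
# (negamax form). The "game" is a toy: take 1-3 counters, taking the last wins.

def heuristic(counters):
    # Crude positional judgement for unfinished positions. In this toy game a
    # position is losing for the side to move when counters % 4 == 0; in chess
    # the equivalent guess would weigh material, mobility, king safety and so on.
    return -1.0 if counters % 4 == 0 else 1.0

def negamax(counters, depth):
    """Best achievable score for the side to move, looking `depth` plies ahead."""
    if counters == 0:
        return -1.0                      # opponent took the last counter: we lost
    if depth == 0:
        return heuristic(counters)       # search budget exhausted: guess
    # The opponent's best outcome is our worst, hence the negation.
    return max(-negamax(counters - take, depth - 1)
               for take in (1, 2, 3) if take <= counters)

def best_move(counters, depth=6):
    return max((take for take in (1, 2, 3) if take <= counters),
               key=lambda take: -negamax(counters - take, depth - 1))

print(best_move(10))   # -> 2, leaving a multiple of 4 for the opponent
```

Real engines pile alpha-beta pruning, move ordering, transposition tables and far richer evaluation functions on top of this skeleton, but the basic trade-off – shallow exact search plus heuristic guesswork – is the same.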

The intractability of chess is not absolute. Moore’s law, the observation that computer processing power doubles roughly every 18 months, has been more or less accurate for the last 50 years. If it were to continue to hold for a couple of millennia, it is possible that computers would be able to fully calculate chess; but for a very long time to come, chess can be considered intractable, an unsolvable problem.
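A rough calculation gives a sense of the distance involved. The figures below are illustrative assumptions (a billion game lines checked per second today, and a one-century budget for the job), not measurements or predictions:

```python
import math

TARGET_LINES = 10 ** 120            # Shannon's estimate of possible game lines
LINES_PER_SECOND = 1e9              # assumed speed of a present-day machine
SECONDS_PER_CENTURY = 100 * 3.15e7  # the time budget allowed for the job
DOUBLING_PERIOD_YEARS = 1.5         # Moore's law: doubling every 18 months

lines_per_century_now = LINES_PER_SECOND * SECONDS_PER_CENTURY
speedup_needed = TARGET_LINES / lines_per_century_now
doublings = math.log2(speedup_needed)
print(f"speed-up needed: about 10^{math.log10(speedup_needed):.0f}")
print(f"that is roughly {doublings:.0f} doublings, or about "
      f"{doublings * DOUBLING_PERIOD_YEARS:.0f} years of uninterrupted Moore's law")
```

Fully solving chess would not in fact require checking every possible line, and Moore’s law may well not hold; the point is only that the shortfall is measured in hundreds of doublings, not a handful.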

The complexity of calculating chess moves is a simple example of what is known as the combinatorial explosion. Chess involves a sequence of choices, with each choice changing the situation in which subsequent choices have to be made, so the order in which the choices are made matters. Problems with these characteristics tend to suffer from combinatorial explosion: when searching for the best sequence, the number of possible permutations of choices that have to be considered grows exponentially with each step into the future.

Complex Calculations image: Malias (flickr)

Chess, although intractable in practice, is a decision-making problem that is trivial when compared to the problems that human brains face. The choices that humans face operate at many different levels of abstraction, from long-term choices about overall life strategy, career, relationships and personal ethos; through medium-term choices about where to live, where to work and what hobbies to pursue; to short-term, immediate decisions (what will I do today? what will I do now?); and micro-decisions about which muscles to contract or which hormones to produce. And the decisions that operate on all of these levels cannot be considered in isolation from one another – any given choice may impact upon goals at multiple different levels and timeframes, and the consequences at all of these levels would have to be evaluated in order to arrive at a rational decision.

But it looks so delicious! image: Vera Oliviera (flickr)

For example, consider the simple problem of “should I eat this?” that a human faces when confronted with an item of food. A rational evaluation might start from the immediate questions: “is it actually edible?”, “am I hungry?”, “is it likely to be tasty?”, “how much does it cost?”, “are there other options available and, if so, how do they compare to this item?”. From there, we might consider the short-term future: “when is the next time I will get a chance to eat?”, “how long will it be before I need to eat again?”, and move on to slightly longer-term considerations such as “how will this impact upon my dietary plans?”, “does it use ingredients that I am allergic to or ethically opposed to?”, “will I have to limit my dietary intake over the coming period in order to compensate for eating this?”, “what are the constituent nutrients of this food item and how do they contribute to the daily or weekly nutrient mix required to sustain my body?” and “is eating this item likely to spoil my enjoyment of a planned dining experience in the foreseeable future?”. Relevant concerns can stretch out all the way to the life-span long term: “does this food item contribute to long-term heightened risks of heart disease or diabetes or cancer and, if so, how solid is the evidential link?”. Considerations may also extend beyond the impact on the individual in question: “is this food item ethically produced?”, “does it use ingredients imported from a country whose policies I am opposed to and boycotting as a result?”, and can even extend to time-spans that go beyond the individual’s life: “does this food item depend upon a process that is particularly environmentally destructive and may contribute to long-term habitat destruction, global warming or species extinction?”.

This is just a small sample of the type of calculations that an individual would have to make in order to make a rational decision as to whether to consume a particular food item. These are not fanciful examples either – it is manifestly the case that the answers to all of these questions can and do influence real people’s choices in the real world. It is also the case that a decision like this has to be made simultaneously with a very wide and unpredictable range of other decisions that are unrelated: how to manage the social interactions surrounding the acquisition of the food item; what method to use for payment and so on. Events may also occur at any stage during such deliberations which influence the calculations and require complete recalculation: perhaps the individual sees a newspaper headline which reports new findings on the nutritional value of the food item, or perhaps he encounters an acquaintance who invites him to lunch.

Even if we were to ignore all of the unpredictability of the environment, and focus purely on the food consumption decision in isolation, it is far from the case that the brain can devote its processing power entirely to that matter. Throughout the calculation process, the brain must maintain the autonomic system and must translate the high-level decisions into low-level actions on the muscles and so on – and these translations are neither automatic nor straightforward.

Even so, if we were to ignore such matters and imagine that the food consumption decision is the only thing we need to worry about, the number of variables involved is staggeringly large – and they interact in non-linear, complex ways. For example, spending money on the food item means that the money is not available to spend on other things – which may set in motion a chain reaction of recalculations on other, unrelated matters.
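To give a crude sense of scale (the numbers here are invented purely for illustration): suppose that each factor bearing on the decision could somehow be reduced to just three possible values. The joint space of situations that a fully rational evaluation would have to reason over grows exponentially with the number of factors.

```python
# Invented, illustrative numbers: each relevant factor (hunger, price, allergy
# risk, long-term health effects, ethics of production, ...) crudely reduced
# to 3 possible values. The joint space grows exponentially with factor count.

import math

VALUES_PER_FACTOR = 3

for factors in (50, 100, 260):
    joint_states = VALUES_PER_FACTOR ** factors
    print(f"{factors:3d} three-valued factors: about 10^{math.log10(joint_states):.0f} combinations")
```

By around 260 such factors the count already exceeds the 10^120 of the chess example – and real factors are continuous, uncertain and interacting, so even this crude discretisation flatters the rational calculator.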

"human decision making is so far into intractability that it is hard to express just how impossible the calculation challenge is"

The calculations that such a real-world decision entails are so much more complex and so much more difficult than those facing a chess player that it is difficult even to put them beside one another. On the one hand, we have a decision whose calculations extend over a relatively small number of discrete and well-defined steps. On the other, we have a decision with an open-ended and potentially extremely large set of calculations, each of which is suffused with uncertainty, which may take in consequences many years in the future and may touch upon any aspect of the environment. If chess is intractable, this single, trivially simple example of human decision making is so far into intractability that it is hard to express just how impossible the calculation challenge is.

In short, there will never, ever, ever, be a computing device that could perform a rational calculation over such a problem. It is so far off into intractability that it makes the number of atoms in the known universe look like a trivially small number.


The Brain

Brains! image: Ars Electronica (flickr)

While the brain may be, to a very close approximation, a computing device, it is a very special one that is quite different to the computing devices that humans manufacture. It is principally made up of two types of cell: neurons, specialised cells that have evolved to transmit electrical signals, and glial cells, which provide electrical insulation and help to ensure that electrical current doesn’t leak from neuronal circuits. Even very simple creatures have large numbers of neurons in their brains. A cockroach has about a million, a cat has about a billion and a human brain has some 86 billion. These are very large numbers – but it is not the number of neurons that makes brains special. The fastest modern computers have more transistors than brains have neurons. What makes the brain so different is the arrangement of the neurons: the topology of the neural networks that they form.

When humans design logic systems, in either hardware or software, a basic principle is that functionality is divided into components, each of which does a relatively simple job. These components are combined in such a way that the connections between them are extremely simple. A second important general principle is that logic flows in one direction only and does not cross layers: the output of one component forms the input for the next. Feedback connections (where the output of a component feeds back into its own input) and cross-layer communication are avoided. This allows each component to be considered and analysed in isolation, without having to consider the state of the whole system.
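As a toy sketch of what that layered, one-directional style looks like in code (an invented example, not a description of any particular system): each stage consumes only the output of the stage below it, so each function can be read, tested and replaced on its own.

```python
# Toy sketch of strictly layered, one-directional design: each stage sees only
# the output of the stage below it, so each can be understood in isolation.

def physical_layer(raw_bytes: bytes) -> bytes:
    return raw_bytes                       # e.g. bits recovered from a wire

def transport_layer(payload: bytes) -> str:
    return payload.decode("utf-8")         # e.g. reassembled, decoded stream

def application_layer(text: str) -> str:
    return text.upper()                    # e.g. whatever the application does

def pipeline(raw_bytes: bytes) -> str:
    # The output of each layer feeds the next; nothing flows backwards or skips layers.
    return application_layer(transport_layer(physical_layer(raw_bytes)))

print(pipeline(b"hello"))                  # -> HELLO
```

The brain, as the following paragraphs describe, does the opposite on both counts: enormous numbers of connections per unit, and plenty of feedback.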

Consider, for example, the computerised transmission of a web page from a server to a browser. Taken as a whole, it is a ludicrously complex system. But if you break it down into its constituent layers, each piece is fairly simple, does only a small number of things, and the communications between layers are generally extremely simple. Starting from the application software, you can go all the way down to the physical wires and transistors without finding anything that’s too hard to understand. I’m no genius, but I can more or less understand the whole thing, from the software components at the top all the way down to the quantum behaviour of the transistors at the bottom. This is only possible because each of the components is engineered so that it can be considered in isolation.

A simple model of the Neuron image: Bruce Blaus, Wikipedia

The brain is, on the other hand, completely different. Whereas logic gates in computers have two inputs and one output, neurons typically have thousands of inputs from other neurons. The connections between neurons are called synapses and there are several hundred trillion of them in a typical adult brain. While there is some clustering of neurons into functional units, this is a loose arrangement and individual neurons can play different roles in quite different functional units of the brain. Furthermore, connections between neurons do not follow a simple layered model. Neurons can have connections that sprawl out across the brain and connect to other neurons in multiple different regions. These connections can go in both directions: most are ‘forward’, starting from the sensory inputs and running up through the brain, but there are multiple feedback loops which go from higher parts of the brain back down to lower parts. Feedback loops give rise to non-linear dynamics and can cause race conditions, cycles and chaotic behaviour.

All of that complexity would be present even if we were to consider the neuron as a purely electrical channel, like a piece of metal wire, but neurons are much more complex than that. In conducting metals there is a general pool of electrons which are unattached to any specific atom and are, in effect, available for the transmission of electrical signals as a current. Within neurons it is chemicals – charged ions – rather than free electrons that carry the signals, and there are gaps between neurons which are populated by a bewildering soup of neurotransmitting chemicals (such as serotonin and glutamate). The chemical composition of this inter-neuronal medium can change, effectively modifying the network’s topology as different pathways become stronger or weaker depending on changes in the concentrations of these chemicals. Changing the composition of certain chemicals in this inter-neuronal medium is, incidentally, the main way in which psychiatric drugs such as SSRIs function – a very crude mechanism indeed.
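A caricature in code may help to fix the contrast (this is a cartoon of an artificial unit, not a biologically accurate neuron model): a single unit sums thousands of weighted inputs, and because one of its connections feeds back on itself, its state at each moment depends on its own earlier output – exactly the kind of loop that layered engineering tries to avoid.

```python
# A cartoon "neuron", not a biological model: a unit that sums thousands of
# weighted inputs and fires if the total crosses a threshold. The loop at the
# end shows why feedback matters: the unit's next state depends on its own
# previous output, so it cannot be analysed one step at a time in isolation.

import random

N_INPUTS = 1000
random.seed(0)                             # reproducible toy example
weights = [random.uniform(-1, 1) for _ in range(N_INPUTS)]
feedback_weight = 0.8                      # the unit's own output feeds back in

def activation(inputs, previous_output):
    total = sum(w * x for w, x in zip(weights, inputs)) + feedback_weight * previous_output
    return 1.0 if total > 0 else 0.0

output = 0.0
for step in range(5):
    inputs = [random.uniform(0, 1) for _ in range(N_INPUTS)]
    output = activation(inputs, output)    # state carried forward: a feedback loop
    print(f"step {step}: output = {output}")
```

Scale that up to tens of billions of units, hundreds of trillions of connections and a chemical medium that re-weights them on the fly, and the analytical difficulties described in the next paragraphs follow naturally.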

"the network is the programme and it evolves constantly throughout its life"

Another important difference with computer architecture is that the brain is not a general purpose computer on which programmes can be loaded – the network is the programme and it evolves constantly throughout its life – new connections grow, strengthen, weaken and disappear and whole neurons with all of their connections can die.

Even if we were able to create a complete and totally accurate model of the network of neurons in a brain, we wouldn’t really be able to say much about how it would behave. The complexity is not only far beyond our engineering ability, it is far beyond our analytic capabilities. Even one aspect of this complexity considered on its own – the multiplicity of feedback loops – pushes the whole thing into a space of uncertain behaviour. So even though the fastest computers probably have as much brute calculating power as the human brain, the extraordinary complexity of its evolving architecture places the brain far above and beyond anything that humans can currently engineer. It is truly one of the great wonders of the universe – it is almost miraculous that it works so well.

However, despite all of this, one thing remains clear. No matter how marvelous its sophistication, no matter how subtle, rich and complex its architecture may be, the brain is not complex enough to overcome the intractability of the problems that it must deal with. It could be a billion times bigger and a billion times more complex and it still wouldn’t get close. No matter how big it grew, it would still spend all of eternity trying to calculate whether to eat a single food item. The search space is simply not amenable to rational calculation.

Complex wiring image: Mark Skipper (Bitterjug - Flickr)

Thus we have arrived at an apparent paradox. If the brain is a computer, then evolutionary theory tells us that it must be running an optimisation algorithm, with gene-proliferation as the fitness function. However, simple mathematics tells us that this is impossible. Yet the brain manifestly does navigate the great complexity of its environment in an incredibly sophisticated manner, and many brains do manage to guide their organism to successfully propagating its genes.

This paradox has given birth to much speculation as to the nature of the brain and has caused some people to reject the model of the brain as a computing device. For example, the physicist Roger Penrose has written several widely-read books postulating that quantum mechanisms (macro-scale quantum coherence, to be specific) are responsible for the brain’s ability to come up with answers to problems that are non-computable. Others have injected God into the gap between the power of the brain’s circuitry and the mind-boggling complexity of the problems that it solves. These speculations are, however, profoundly wrong. The answer is really quite simple. The brain is a computing device and it is basically running a survival-focused optimisation algorithm, but it does not work by rationally calculating probable outcomes, except in a very small number of cases – and even in those cases the calculations are superficial and mostly depend on the core function of the brain, which is not rational calculation. What that core function is will be the subject of the next installment....

Comments (4)

Jock Ular

Not sure I agree that there's a paradox. It appears to me that you've set this up so that instead of the brain merely having to search for local optima it has to search for the global optimum. There tends to be an over-privileging of GA's and other "evolution inspired" methods by computer scientists. In reality evolution does a real hack-job in reusing old frameworks, solving immediate problems (in the timescale considered) with fixes which are architecturally poor. See, e.g. spandrels, cheetahs, appendices

Also, in the contrast between brains and computers, one strong point made by the Churchlands is that the brain is massively parallel.

chekov

I said apparent paradox! I just published the final part of this series http://www.chekov.org/blog/great-generalisation-machine and it's definitely of the hack-job variety, reusing old frameworks and so on.

But, as an aside, looking for local maxima as against global maxima doesn't help to reduce the complexity of the search space. If you consider a binary immediate choice problem (e.g. eat this / do not eat this) - you're not in a planning domain where there are different maxima, you've got to figure out which of the binary choices will better propagate the genes. The problem is that there are so many factors involved in evaluating the decision and they're all so full of uncertainty that it's way off into intractable-land whichever way you look at it.

As for GAs and other evolution inspired algorithms - they're just using natural selection as an approach to traversing impossibly large search spaces, I don't think anybody believes that they're emulating how the brain works.

Anonymous

Is the genetic imperative still as strong in modern man?
...given modern society...there's no real 'survival of the fittest' anymore.......ie me no hunt, me no gather...
me no fight the wild animals....me sit and wait for the takeaway to arrive!

chekov

Gene distribution frequencies are constantly in flux - which means that the aggregate species-genome is evolving all the time. You don't need harsh individual-survival genetic pruning for evolution, differential reproduction rates will do it. If a particular gene-sequence produces behaviours which generate a higher reproduction rate than the alternatives, that gene sequence will grow in frequency through the population.

An interesting example of this in practice is with autistic spectrum disorders. They are becoming more common. I think that the most plausible reason is that the human species is becoming more autistic in the aggregate because the reproduction rate of people with autistic spectrum disorders is increasing, probably because of the much larger social niches that are available to such people in technology- and science-related industries.