The Whole Greater than the Sum of the Parts

The idea of Swarm Intelligence rests on an interesting proposition: that by combining large numbers of simple things, we can make something whose complexity outweighs that of the uncombined components. This should not be a new concept: a large number of silicon switches and bits of metal can be combined to make an iPhone. Even the bricks that make your house are arranged in such a way that they stop being bricks and start being a place to live, or perhaps a home. More fundamentally, everything in our world is made of atoms.

But there is a distinction between a whole swarm and a whole house. If you looked at a house, you would not be shocked to realise that it stands up because of the interactions between the bricks - that is, one brick sits on top of another - and that the house’s construction stems from human intention. You may be more surprised to learn that the construction of a termite mound results from the local interactions of unthinking termites. The termites do not collectively decide to build it, and they are oblivious to the fact that it is being built. For this reason we say that the mound-building behaviour emerges from the local interactions of the agents; a house, on the other hand, is just a house. This emergent behaviour is fascinating because the swarm as a whole can have complex aims and actions, yet it relies entirely on agents that are clueless - behaviours that would be hard to anticipate by looking at the agents alone. Take the example of the fish-shoal superorganism: each fish knows only that it doesn’t like sharks and that it should dart off when its neighbour does; the collective shoal knows how to dazzle and confuse the sharks for the good of every fish.
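The startle response described above can be sketched in a few lines of code. This is a hypothetical toy model, not taken from any study: fish sit in a row, and the only rule is that a fish jolts when an immediate neighbour jolted on the previous step. No fish knows about the wave, yet a wave is exactly what spreads through the shoal.

```python
def startle_wave(n_fish, first_startled, steps):
    """Simulate a row of fish; return the set of startled fish at each step.

    The single local rule: a fish startles if a neighbour was startled
    on the previous time step. The global 'wave' is never programmed in.
    """
    startled = {first_startled}
    history = [set(startled)]
    for _ in range(steps):
        new = set(startled)
        for i in startled:
            if i - 1 >= 0:          # left neighbour catches the fright
                new.add(i - 1)
            if i + 1 < n_fish:      # right neighbour catches the fright
                new.add(i + 1)
        startled = new
        history.append(set(startled))
    return history

waves = startle_wave(n_fish=11, first_startled=5, steps=3)
# After 3 steps the fright has spread 3 fish in each direction,
# even though no individual fish has any notion of the wave as a whole.
```

Trivial as it is, the sketch makes the point of the paragraph concrete: the interesting object - the travelling wave - exists only at the level of the collective.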

The idea that a whole can be greater than the sum of its parts is sometimes referred to as holism. [12] Holism holds that in order to understand a complex system you must look at the system itself, not just its constituent parts. This runs counter to reductionism, which is the way scientists normally look at things: breaking a system down into smaller and smaller parts to gain a greater understanding of the whole. [13] Reductionism certainly has its merits, as the overwhelming majority of the science we know today can be attributed to it. But as we dismantle systems that are more and more complex, it gets harder and harder to work out how the building blocks piece together to form the functioning whole. In these cases a holistic approach can be useful - ecologists, for example, study whole ecosystems rather than searching for smaller and smaller chunks of the natural world.

We have decided, in a practical sense, that reductionism is good where it works, and that otherwise we can turn to holism. But it remains to be seen which is closer to the truth. Going back to our example of flocking birds, there is a question: if we had complete knowledge of one bird, could we predict the precise patterns and behaviours the flock would display when wheeling around the sky? Reductionism says yes; holism says no. This relates to a bigger question in science: determinism versus indeterminism. A deterministic system is one in which, if everything about the current state is known, the entire history is deducible and the entire future is predictable. Karl R. Popper referred to such systems as perfect clocks, reflecting our tendency to say that something regular and reliable is running like clockwork. An indeterministic system, on the other hand, is one where total knowledge about the present is not enough to ascertain the past or predict the future. Such systems contain inherently unpredictable elements, and with reference to the “vagaries of the weather”, Popper called them clouds. [8] We can thus pose a new question: if we recorded the movements of a flock of birds, and then returned the flock to exactly the same initial conditions (in an ideal world where this is possible), would the flock move in an identical manner the second time around?

In the time after the Newtonian revolution, the universe came to be widely regarded as one big clock. If all interactions were governed by Newton’s laws of nature, how could anything unpredictable ever happen? I am not going to go into quantum physics here, but it took the quantum revolution of the twentieth century for scientists and philosophers to widely recognise that chance is built into the fabric of the world. When it was discovered that physical interactions occur according to statistical probabilities, we realised that the world must be indeterministic. And so we have an answer to our question: the movements of the flock and its replica would differ. It follows that, given complete knowledge of a single bird, we could not predict the precise movements of a flock, because at some level chance is involved. In the short term we can make a good approximation of the position of the flock, but in the long term the chance effects are so amplified that we cannot tell where each bird will be or in what direction it will move. It is interesting to note that Popper’s prime example of a cloud is a swarm of gnats. The flock is on some level a chaotic system, meaning that small differences in conditions (perhaps due to chance - though sensitivity to chance is not strictly chaos theory) lead to wildly different outcomes in the long run. [14] A reductionist attitude therefore cannot tell us the position of the flock, and we may have to look to holism.
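The amplification of small differences can be seen in miniature with a standard toy chaotic system - the logistic map, which is my illustration here rather than anything from the flock itself. Two runs that start a ten-billionth apart agree closely at first, then part company entirely, just as two near-identical flocks would.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # a minute 'chance' perturbation

# Early on the two runs are still practically indistinguishable...
early_gap = abs(a[5] - b[5])
# ...but somewhere along the way the tiny difference is amplified
# until the trajectories bear no resemblance to one another.
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

The short-term approximation is good and the long-term one is hopeless - exactly the situation described above for predicting the flock.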

This sets up a hurdle when designing artificial swarm systems. A holistic approach is useful for analysing the movements of swarms: if a flock of starlings were thirsty, for example, we could deduce that they would settle by a lake, or we could predict the ‘bubble’ shape that a shoal of fish would form around a predator. But when creating our own nature-inspired system for a particular task, we need the desired behaviour to emerge, and so we need the right set of rules governing the local interactions. The holistic approach gives little help in finding these rules, and a blind trial-and-error approach would take a very long time. This explains why scientists designing systems inspired by natural swarm phenomena rely on careful observation of the local interactions in the original biological system. These interactions have been honed by natural selection over millennia, so they give humans a good head start in designing the artificial interactions.

Chance, or rather randomness, plays another important role in the world of swarms. Let’s go back to the earlier example of ants finding the shortest route by laying pheromone trails which evaporate. Take the case where two ants take two different routes: one long and one short. If the next ants to arrive on the scene only ever followed pheromone trails already laid, they would have to choose between these two routes according to the strength of the pheromone, and a third, shorter route might be missed. Moreover, ants would never stray from the trail they were on, so the routes would never be improved, and very short routes would rarely be found. But in the real world a whole number of factors work to avoid this problem - random factors. Part of a pheromone trail might be washed away by a drop of water, for example, or an ant (with a blocked nose) may fail to smell that a trail is there. Factors of this sort mean that while most ants do follow the trails set out, others will randomly stray off the beaten track and make new paths. Of course these paths might be very long, but then the pheromone will evaporate and they will be forgotten. Or chance might have an ant strike lucky. It is interesting that swarms rely on their random character to find optimal solutions to problems. You’d be surprised if a human, late for work, stopped to roll a die at every crossroads in the hope of getting there faster. But swarms are of a different nature: each agent is simple and expendable, and for the ants the best way of doing things may seem counter-intuitive to us.
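The interplay of trail-following, evaporation and random straying can be sketched as a simulation. This is a deliberately simplified model in the spirit of ant colony optimisation, with parameter values (100 ants, 10% exploration, 50% evaporation) that are my own illustrative choices, not figures from any real colony: most ants pick a route in proportion to its pheromone, a small fraction stray at random, and shorter routes receive more pheromone per trip.

```python
import random

def simulate(lengths, ants=100, rounds=50, evaporation=0.5,
             explore=0.1, seed=0):
    """Return pheromone levels on each candidate route after many rounds.

    lengths: the length of each route; shorter routes are traversed
    faster, so they collect more pheromone deposits per round.
    """
    rng = random.Random(seed)
    pheromone = [1.0] * len(lengths)
    for _ in range(rounds):
        deposits = [0.0] * len(lengths)
        total = sum(pheromone)
        for _ in range(ants):
            if rng.random() < explore:
                # The 'blocked nose': stray off the beaten track at random.
                route = rng.randrange(len(lengths))
            else:
                # Follow the trails in proportion to pheromone strength.
                r = rng.random() * total
                route, acc = 0, pheromone[0]
                while r > acc and route < len(lengths) - 1:
                    route += 1
                    acc += pheromone[route]
            deposits[route] += 1.0 / lengths[route]
        # Evaporation forgets old trails; deposits reinforce popular ones.
        pheromone = [(1 - evaporation) * p + d
                     for p, d in zip(pheromone, deposits)]
    return pheromone

levels = simulate(lengths=[5.0, 2.0, 9.0])
# The shortest route (index 1, length 2.0) ends up with the strongest trail.
```

The positive feedback does the optimising: without the random exploration term, whichever trail happened to be laid first would lock in forever, which is precisely the failure mode the paragraph above describes.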