Applying the Principles

Swarms are not just an interesting phenomenon in nature. We can use the ideas learnt from studying natural systems like ant colonies to develop and improve our own technological systems. The general swarm principle is that complex behaviour can emerge from simple local interactions. Designing a technological system around this principle can simplify it greatly, as we don’t need to deal directly with the complexity of the system as a whole. There are further advantages: because the system is distributed, it can survive the failure of individual components; and because it is self-organising, there is no leader to be relied on.

So far the most important application of Swarm Intelligence has been in optimisation algorithms. Combinatorial optimisation is the task of finding the best element of a finite set for some purpose. It is extremely useful in the real world, as when Google Maps tries to find the shortest route from one postcode to another. The Travelling Salesman Problem is often used as a touchstone for optimisation algorithms. The problem is this: given a set of cities and the distance between every pair of cities, what is the shortest path that visits each city exactly once? It is by no means easy to solve, and the number of possible routes increases faster than exponentially with the number of cities. For n cities there are n! possible orderings, meaning that for 15 cities there are more than a trillion routes. Interestingly, a number of other complex problems, like assembling DNA sequences and designing silicon chips, reduce to modified versions of the Travelling Salesman Problem, making it a very lucrative problem to solve efficiently. Dr Marco Dorigo, one of the founders of the Swarm Intelligence field, made a link between this problem and the uncanny ability of ants to find the shortest route from their nest to food. Ant Colony Optimisation was born. It is important to stress that this is a heuristic technique, meaning that it quickly finds satisfactory solutions even though they may not be optimal; this is usually preferable because, for larger numbers of cities, finding the optimal solution by brute force (computing the length of every possible route and comparing them) becomes intractable.
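To see why brute force becomes intractable, here is a minimal Python sketch (the distance matrix is made up for illustration) that checks every possible ordering of the cities: feasible for four cities, hopeless for forty.

```python
from itertools import permutations
from math import factorial

def tour_length(tour, dist):
    """Length of the closed tour: each consecutive leg plus the return leg."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def brute_force_tsp(dist):
    """Try every ordering of the cities and keep the shortest closed tour."""
    cities = tuple(range(len(dist)))
    return min(permutations(cities), key=lambda t: tour_length(t, dist))

# A made-up symmetric distance matrix for four cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]

best = brute_force_tsp(dist)
print(best, tour_length(best, dist))   # the shortest of all 4! orderings
print(factorial(15))                   # over a trillion orderings for 15 cities
```

For four cities this examines just 24 orderings; each extra city multiplies the work, which is exactly why heuristics like Ant Colony Optimisation are preferred at scale.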

It works like this: virtual ants are dispersed at random cities and each makes a tour, visiting every city once. Nearer cities are favoured when deciding which to visit next, but otherwise the decision is random. When the tour is complete, virtual pheromone is deposited along the route. The amount laid down is inversely proportional to the total length of the tour, so shorter tours get more pheromone. The ants then make another tour, but this time they favour paths with thicker pheromone trails as well as nearer cities. As with real foraging ants, pheromone evaporates to stop the process settling prematurely on poor solutions. Dorigo found that when this process is iterated, near-optimal solutions emerge. Ant Colony Optimisation has a specific advantage over competing heuristic techniques: flexibility. The algorithm can be run continuously, with the ants simply going round and round exploring different paths, so it can respond to changes in real time. If there has been an accident and a road between two cities is closed, then backup plans already exist in the virtual pheromone trails, and the ants start to select different routes from an effective pool of alternatives. [6] [15]
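The loop just described can be sketched in Python. This is a bare-bones illustration rather than Dorigo's original formulation; the parameter values (number of ants, evaporation rate, and the weighting exponents for pheromone versus nearness) are arbitrary choices.

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iter=30, evap=0.5, alpha=1.0, beta=2.0):
    """Heuristic TSP sketch: ants build tours biased by pheromone and
    nearness; shorter tours deposit more pheromone; trails evaporate."""
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]        # initial pheromone on every edge
    best_tour, best_len = None, float("inf")

    def tour_len(t):
        return sum(dist[a][b] for a, b in zip(t, t[1:] + t[:1]))

    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]        # each ant starts at a random city
            while len(tour) < n:
                cur = tour[-1]
                choices = [c for c in range(n) if c not in tour]
                # weight = pheromone^alpha * (1/distance)^beta
                weights = [pher[cur][c] ** alpha * (1.0 / dist[cur][c]) ** beta
                           for c in choices]
                tour.append(random.choices(choices, weights)[0])
            tours.append(tour)
        # evaporation first, then deposits inversely proportional to length
        for i in range(n):
            for j in range(n):
                pher[i][j] *= (1 - evap)
        for t in tours:
            length = tour_len(t)
            for a, b in zip(t, t[1:] + t[:1]):
                pher[a][b] += 1.0 / length
                pher[b][a] += 1.0 / length
            if length < best_len:
                best_tour, best_len = t, length
    return best_tour, best_len

random.seed(1)
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
tour, length = ant_colony_tsp(dist)
print(tour, length)
```

Because the ants keep exploring, the pheromone matrix always holds information about alternative routes, which is where the flexibility described above comes from.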

A number of companies now use variants of Ant Colony Optimisation to run things more efficiently. For example, ‘American Air Liquide’ uses such an algorithm to route trucks delivering gas from plants to customers. In the past the drivers simply collected gas at the plant closest to the customer; now they are instructed to drive to the plant that results in the cheapest delivered price (as prices fluctuate and vary from plant to plant). This did not always make sense to the drivers, who were sometimes asked to drive much further afield, but the company reported big savings. [5]

The flexibility of ant-based methods is also attractive to communications companies. Take telephone networks: conditions change unpredictably, with transient surges of traffic during TV phone-in competitions and local switching stations swamped during pop concerts. When a phone call is routed, it traverses switching stations, or nodes, on its way to the other end. Each node holds a routing table directing calls to another node depending on their ultimate destination. Calls should avoid busier nodes to reduce the strain on the network and speed up the user’s connection. Ant-based approaches to the problem have been developed. One approach, designed by researchers at the University of the West of England and HP’s labs in Bristol, involves ant-like agents spreading through the network. When they travel from one node to another, they alter the routing table score for that pair of nodes; this score is the analogue of pheromone. A fast journey increases the score greatly, but a slow one adds only a little. Evaporation is implemented by having the routing table entries diminish over time. The weightings of evaporation and reinforcement are such that a slow, busy route with more agents receives a lower score than a fast route with fewer agents. Phone calls favour paths between nodes with higher scores. This provides a flexible system in which a previously fast route that becomes congested is quickly dropped in favour of faster alternatives. British Telecom have applied an ant-based routing technique to their telephone network, but the notoriously unpredictable internet may be the ultimate frontier. Dorigo and another researcher, Gianni Di Caro, have developed ‘AntNet’ for this purpose. It is similar to the routing method just described, but with some improvements. Packets of information hop from node to node, and they leave a trace indicating the speed of that packet’s entire journey, not just the journey speed from the first node to the second. Other packets favour paths with stronger traces. In tests AntNet has outperformed existing routing protocols, including the internet’s current protocol ‘Open Shortest Path First’. AntNet is better at adapting to changes in the volume of traffic and has greater resistance to node failures. Routing companies have shown interest in AntNet, but adopting it would require replacing current hardware at huge expense. [6] [15]
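The scoring scheme can be sketched with a toy class. The names and numbers here are illustrative only; the real BT and AntNet systems are far more involved.

```python
class AntRouter:
    """Toy pheromone-style routing scores: fast trips reinforce a
    node pair strongly, slow trips only a little, and scores decay."""
    def __init__(self, evap=0.1):
        self.score = {}          # (node_a, node_b) -> pheromone-like score
        self.evap = evap

    def report_trip(self, a, b, delay_ms):
        # Reinforcement inversely proportional to the journey time.
        self.score[(a, b)] = self.score.get((a, b), 0.0) + 1.0 / delay_ms

    def evaporate(self):
        # Periodic decay so stale routes lose their standing over time.
        for k in self.score:
            self.score[k] *= (1 - self.evap)

    def best_next_hop(self, a, neighbours):
        # Calls favour the neighbour with the highest score.
        return max(neighbours, key=lambda b: self.score.get((a, b), 0.0))

router = AntRouter()
router.report_trip("A", "B", 10)    # fast journey: big reinforcement
router.report_trip("A", "C", 100)   # slow journey: small reinforcement
router.evaporate()
hop = router.best_next_hop("A", ["B", "C"])
```

If route A-B later becomes congested, its reinforcements shrink while evaporation keeps eating the old score, so traffic drifts to the alternatives automatically.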

Following on from the ant algorithms, the swarm principle has also been used to develop a new kind of internet search engine. Search queries fall into three categories: navigational, perhaps to visit Facebook; informational, to find out who invented the light bulb; and transactional, to buy something off eBay. Two Spanish researchers sought to build a search engine that yields a greater proportion of results relevant to the user’s query than the current leading search engines. It is swarm-based, but the project departs from the Swarm Intelligence mainstream in that virtual agents are not employed; we, the humans, are the agents. The backbone of their idea is that when a human searches for something on the web, it is analogous to an ant foraging for food. The chunk of information plays the role of the food, so the process is called information foraging. Wouldn’t it be easier for a second person to find the same chunk of information if the first laid a pheromone trail? This method seems less wasteful than the current one of users browsing page snippets and clicking links until they find what they are looking for (or indeed give up). Underpinning the process is the idea that when a human clicks a link, they are issuing a ‘relevance judgement’: they are deeming that website relevant to their query.

In this swarm model, the path from the query to the clicked page is marked with pheromone. The more pheromone deposited on a path, the higher up the search listings that page will appear. Pheromone evaporates, as in other swarm-based systems, to prevent mediocre query-to-webpage paths being settled on. The researchers also use a stochastic mechanism to prevent initially popular pages becoming too dominant. There is a risk that a webpage listed at the top of the search results would be reinforced disproportionately to its relevance, while a very relevant webpage on the second page of results might go unnoticed. To avoid this, randomness is built into the model: heavily-hit results are more likely to appear at the top of the search listings, but less-visited sites can still appear by chance, be clicked on, and thus be reinforced. This swarm-based method has a key advantage over other techniques that ‘learn’ relevance from user clicks: it works in real time, so it can rapidly respond to changes in what people are after, whereas other techniques must be retrained on newly recorded data rather than adapting on the fly. Evaporation also captures the decay of user interest in a natural way. The researchers tested their search engine with real users, using Yahoo as a control, and the results were promising. Each successive user completed tasks faster using the swarm engine, as it learnt from the experience of previous users. Later on, once it was better trained, users of the swarm engine also tended to complete harder tasks considerably faster than users of Yahoo. The swarm principle is used here to enhance an activity that people carry out every day. The researchers point out a limitation: certain results under certain queries could be boosted by a malicious ‘swarm’ of users. But Google has faced similar troubles, and some form of detection system could be designed to alleviate the problem.
This swarm-based search engine exemplifies the diverse set of applications that the swarm principle offers. [4]
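The reinforcement, evaporation and stochastic display order can be illustrated with a small sketch. The class and query names are hypothetical and this is not the researchers' actual system.

```python
import random

class SwarmRanker:
    """Toy pheromone-based result ranking: clicks reinforce a query-page
    path, scores decay, and display order is sampled stochastically so
    less-visited pages still surface occasionally."""
    def __init__(self, evap=0.05, base=1.0):
        self.pher = {}           # (query, url) -> pheromone
        self.evap = evap
        self.base = base         # floor weight so unclicked pages can appear

    def click(self, query, url):
        # A click is a relevance judgement: reinforce this path.
        self.pher[(query, url)] = self.pher.get((query, url), 0.0) + 1.0

    def evaporate(self):
        for k in self.pher:
            self.pher[k] *= (1 - self.evap)

    def rank(self, query, urls):
        # Weighted sampling without replacement: popular pages usually
        # come first, but any page can be drawn to the top by chance.
        pool, order = list(urls), []
        while pool:
            weights = [self.base + self.pher.get((query, u), 0.0) for u in pool]
            pick = random.choices(pool, weights)[0]
            order.append(pick)
            pool.remove(pick)
        return order

random.seed(7)
ranker = SwarmRanker()
for _ in range(50):
    ranker.click("light bulb", "pageA")   # pageA gets heavily reinforced
urls = ["pageA", "pageB", "pageC"]
order = ranker.rank("light bulb", urls)
```

The `base` floor weight is what keeps the long tail alive: even a page nobody has clicked yet retains a small chance of appearing first, being noticed, and earning its own pheromone.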

The most clear-cut application of the swarm principle in technology is swarm robotics. Dorigo turned his attention from swarm algorithms to a ‘Swarmanoid’ project. The idea is that a swarm of small, cheap robots can cooperate to perform tasks more effectively than one larger, more expensive robot. In the swarm there are three kinds of robot: hand-bots, foot-bots, and eye-bots. The eye-bots fly to explore an environment and locate objects of interest. The hand-bots, capable of climbing, are carried to the objects by the foot-bots and pick them up. The robots use light signals to convey information to their neighbours; a red light might indicate that an object has been found, for instance. The system is robust, because if one robot breaks the others carry on unhindered. It is also flexible and scalable (it can work with five robots or fifty), because it relies entirely on local interactions between the robots and there is no central coordination. In principle the robots are suited to exploring unfamiliar environments, such as a burning building; Dorigo says that in a more advanced state the system could be used to rescue people or possessions. [15] It is important to stress that these are the very early days of swarm robotics, and much work remains to get it further off the ground. But this leads me to think that it holds great promise, and it is the main reason I would argue that the entire Swarm Intelligence field has great future potential.

Currently we do not know how human consciousness arises, but it is an appealing idea that it might emerge from the local interactions between neurons firing in the brain. Dr Vito Trianni, of the Institute of Cognitive Sciences and Technologies in Rome, is a proponent of ‘swarm cognition’, and suggests that this might be where consciousness comes from. [15] In a human brain there are an estimated 90 billion neurons and a quadrillion synaptic connections (that’s 10¹⁵). [16] We could liken the brain to a swarm and the neurons to its agents. A team at the University of Manchester led by Professor Steve Furber is basing the design of a new kind of computer on the structure of the human brain. The computer is called SpiNNaker, a contraction of ‘spiking neural network architecture’. Traditional computers work by completing a sequence of operations one after the other, in step with a system clock. Computers have been made with dual-core processors so that two sequences of operations can run at the same time, and more expensive four- or even six-core processors have become available recently. Generally more cores means higher performance, and this is called parallel processing. But the human brain works differently: neurons fire when they are stimulated, and signals cascade through the brain. The neurons do not fire in step with a ‘system clock’. Amid the many billions of brain cells functioning simultaneously, there are not just one, two, four or six sequences of operations occurring at once. The brain also has a high level of redundancy: getting hit on the head may cause the loss of countless neurons, but probably won’t spell death for a human.

Furber’s team designed a chip with 18 cores for the SpiNNaker project, and each chip can be linked to six others, creating a massively parallel network. The cores send packets of data to each other to mimic the signals that neurons send as electrical impulses. The chips are said to be asynchronous because they are not governed by a global clock signal; instead the cores simply act when interacted with, using handshake signals exchanged when one core wants to talk to another. This is reminiscent of swarms relying on local interactions rather than top-down control. Being asynchronous draws the computer closer to how the brain works, and has other advantages, such as using much less power, because cores can shut down when not being interacted with. The network is also designed with redundancy: if a chip is broken, signals will be routed around it. The end goal is to build a computer incorporating one million of these chips, which should have roughly 1% of the power of the human brain. Its purpose will be to help understand how the brain processes information. The team does not claim to be building some kind of Frankenstein-robot-brain, or to understand the human brain, but they hope that SpiNNaker will be useful as a tool for medics and scientists studying complex brain injuries and diseases; it will be a platform for studying the flow of information in a complex system in many ways akin to the brain. It is still an exciting thought that some form of intelligence might emerge from the machine. [17] [18] [19]
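The event-driven character of such a machine can be illustrated with a toy spiking network. This is not SpiNNaker's actual design, just a sketch of spike packets propagating through a queue with no global clock.

```python
from collections import deque

class Neuron:
    """Toy integrate-and-fire neuron: input accumulates until a threshold
    is crossed, then a spike packet is sent to every downstream target."""
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold
        self.targets = []            # list of (neuron, weight) connections

    def receive(self, weight, events):
        """Handle one spike packet; return True if this neuron fired."""
        self.potential += weight
        if self.potential < self.threshold:
            return False
        self.potential = 0.0         # reset after firing
        for target, w in self.targets:
            events.append((target, w))
        return True

# Three neurons chained a -> b -> c; b needs two inputs before it fires.
a, b, c = Neuron(), Neuron(threshold=2.0), Neuron()
a.targets = [(b, 1.0)]
b.targets = [(c, 1.0)]

# No global clock: packets are simply processed as they arrive.
events = deque([(a, 1.0), (a, 1.0)])     # two external stimuli to a
fire_count = {a: 0, b: 0, c: 0}
while events:
    neuron, weight = events.popleft()
    if neuron.receive(weight, events):
        fire_count[neuron] += 1
```

Nothing here ticks in lockstep: a neuron does work only when a packet reaches it, which is the property that lets idle SpiNNaker cores power down.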

Swarm-based methods prove to have certain advantages when it comes to designing technological systems. Their distributed nature makes them robust and fault-tolerant, as they lack a single weak point. Their self-organised nature provides the flexibility to deal rapidly with fluctuating, unpredictable and perhaps unexpected situations. Swarm Intelligence gives us a way of designing systems with intrinsic autonomy, without the need for extensive pre-programming and central control. It is essentially a form of artificial intelligence; it is convenient to have a system that operates by itself, giving us results without too much human oversight. But there are disadvantages: in nature’s ant mills the ants die because they are unable to realise that they are stuck in a loop, and we can imagine a swarm of robots with no supervisor getting stuck in similar pickles. Whilst apt for optimisation tasks, swarm-based techniques are not always suited to tasks requiring a greater depth of reasoning. Another criticism of the field is that using simple agents with somewhat random behaviour could result in unpredictable behaviour and inefficiencies. [6] This point of view, however, probably reflects a misunderstanding of the swarm principle. Although there is randomness at the agent level, the intention is for it to disappear at the swarm level as the desired behaviour emerges. Furthermore, the random quality provides the swarm’s inherent flexibility. It could be seen as a great asset, potentially allowing swarm systems to deal with unforeseen problems, an ability absent from traditional software.

At the beginning of this essay, we set out to answer two questions. How do swarming animals inspire technological advances? And is there greater scope for Swarm Intelligence in the future? In answer to the first, this essay has described in depth how technology is derived from swarms and analysed its current manifestations. The second has been answered as well as it can be: the entire field of robotics is in its infancy, so we could consider swarm robotics scarcely out of the womb, with much room to grow. The European Space Agency is currently researching swarm robotics as a means of space exploration. Swarm Intelligence, then, holds considerable promise for the future. As artificial intelligence and robotics continue to advance, the swarm is something to watch out for.