Is there an "i" in "robot"?
In 1942 Isaac Asimov introduced the world to his three laws of robotics. These laws are primarily designed to ensure that a robot remains subservient to humans and that it never hurts a human. The laws also demand that a robot protect itself, and in this respect it seems that when robots are in conflict with each other, they do what people often do – they cheat, they deceive and they get very selfish.
Asimov’s laws are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Recent research out of Switzerland has shown that when they are allowed to evolve, robots will indeed protect themselves, as the third law requires, and that they will do this at the expense of other robots if necessary.
Working at the Ecole Polytechnique Fédérale de Lausanne, Dario Floreano, Laurent Keller and PhD student Sara Mitri placed their robots in an arena with a light ring marked near one end and a dark ring marked near the other. The light-coloured ring was a “good resource” and the dark-coloured ring was “poisonous”. The robots were able to sense the different rings: when they found the good resource they would receive points, and if they stayed near the poisonous region they were penalised points. There were 1,000 robots in the experiment, and while each robot had the same hardware and much of the same programming, each one had a unique 264-bit binary code or "genome". In addition, each robot had a blue light that would flash randomly and could be detected by the other robots, which would be attracted to the same area.
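The scoring scheme described above can be sketched as a simple reward function. This is a hypothetical reconstruction – the actual point values and the `score_step` name are assumptions, not details from the study:

```python
# Assumed point values; the article does not give the actual numbers.
GOOD_REWARD = 1
POISON_PENALTY = -1

def score_step(location, good_ring="good", poison_ring="poison"):
    """Points a robot earns for one time step at `location`:
    positive near the good-resource ring, negative near the
    poisonous ring, zero elsewhere in the arena."""
    if location == good_ring:
        return GOOD_REWARD
    if location == poison_ring:
        return POISON_PENALTY
    return 0

# A robot that spends 8 steps at the good ring and 2 at the poison ring:
total = sum(score_step(p) for p in ["good"] * 8 + ["poison"] * 2)
# total == 6
```

A robot's total score over a trial is then just the sum of its per-step rewards, which is what the selection phase ranks on.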
The robots were effectively competing for access to the good resource ring, and after the first tests the highest-scoring 200 robots were selected for the next phase. Each of these 200 robots had its genome mutated (with a 1 in 100 chance that any bit would be changed) and was “mated” with another robot in order to mix the various sections of their programs. This produced a new generation of robots. This new generation, and each subsequent one, was better at finding the good resource, partly because the robots were increasingly drawn to clusters of randomly flashing blue light from other robots that had found the light ring. Of course there is only limited space around this ring, and once the researchers allowed the flashing of the blue light to evolve with the rest of the genome, an interesting change began to emerge – the robots were still able to find the good resource, but they were more selective about when they shone their blue light. By the 50th generation, they would still use the blue light in neutral parts of the arena, but they were much less likely to shine the light once they had found the good resource.
After 500 generations, 60 percent of the robots had evolved to keep their light off when they found the good resource. In this way they reduced the risk of other robots crowding in on the good resource – apparently they were hogging it all for themselves. Some of the robots even appeared to become somewhat paranoid; they evolved to move away from another robot’s blue light, as if they did not want to share anything with the other robot, suspected that the other robot would not share with them, or simply did not trust its signals.
This experiment was a case of “each robot for himself”, but it would be interesting to see how cooperative the robots might evolve to be if collaboration or group success was valued. For example, what would happen if the robots were awarded points when other robots detected their light and joined them? In another possible extension you could try different “species” of robots, some as “predators” and some as “prey” – could “predators” evolve to hunt in packs, or select the “weaker” individual at the back of a herd of “prey”? Would “prey” robots evolve to be less detectable or more protective of each other? I’m sure Floreano, Keller and Mitri will be having the same thoughts, and presumably telling each other . . .