On an Immuno-inspired Distributed, Embodied Action-Evolution cum Selection Algorithm
Traditional Evolutionary Robotics (ER) employs evolutionary techniques to search for a single monolithic controller that enables a robot to learn a desired task. These techniques suffer from bootstrap and deception issues when the task is too complex for a single controller to learn. Behaviour-decomposition techniques have been used to divide a task into multiple subtasks and evolve a separate subcontroller for each subtask. However, these subcontrollers and the associated subcontroller arbitrator(s) are all evolved off-line. A distributed, fully embodied and evolutionary version of such approaches would greatly aid online learning and help reduce the reality gap. In this paper, we propose an immunology-inspired embodied action-evolution cum selection algorithm that caters to distributed ER. The algorithm evolves different subcontrollers for different portions of the search space in a distributed manner, just as antibodies are evolved and primed for different antigens in the antigenic space. Experiments on a collective of real robots embodied with the algorithm showed that a repertoire of antibody-like subcontrollers was created, evolved and shared on-the-fly to cope with different environmental conditions. In addition, instead of the conventional broadcast-based approach to sharing, we present an Intelligent Packet Migration scheme that reduces energy consumption.
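As a rough illustration of the idea described above (our own sketch, not the authors' implementation), the following Python snippet shows one way a robot could maintain an antibody-like repertoire of subcontrollers: the sensed environmental condition plays the role of an antigen, the best-matching subcontroller is selected and refined by mutation-and-selection, and high-affinity subcontrollers are migrated to a neighbour rather than broadcast. All names (Antibody, step, share), thresholds, and the toy evaluation function are hypothetical.

```python
import random

class Antibody:
    """A subcontroller treated as an antibody: parameters plus a signature of the
    environmental condition (antigen) it was evolved for."""
    def __init__(self, params, signature):
        self.params = params          # controller parameters (e.g., NN weights)
        self.signature = signature    # antigen signature this antibody is primed for
        self.affinity = 0.0           # best fitness observed against its antigen

def affinity_to(antibody, antigen):
    """Similarity between the sensed condition and the antibody's signature
    (higher is better); here a negative squared distance."""
    return -sum((a - s) ** 2 for a, s in zip(antigen, antibody.signature))

def mutate(params, sigma=0.1):
    """Gaussian perturbation used as the embodied action-evolution step."""
    return [p + random.gauss(0.0, sigma) for p in params]

def step(repertoire, antigen, evaluate, new_threshold=-1.0):
    """One on-line cycle: select the best-matching antibody, evaluate it in the
    body, and either keep a fitter mutated variant or spawn a new antibody if
    no existing one matches the sensed antigen well enough."""
    best = max(repertoire, key=lambda ab: affinity_to(ab, antigen))
    if affinity_to(best, antigen) < new_threshold:
        best = Antibody(mutate(best.params), list(antigen))
        repertoire.append(best)
    fitness = evaluate(best.params, antigen)              # embodied trial on the robot
    challenger = Antibody(mutate(best.params), list(antigen))
    if evaluate(challenger.params, antigen) > fitness:    # selection keeps the fitter variant
        repertoire[repertoire.index(best)] = challenger
        best, fitness = challenger, evaluate(challenger.params, antigen)
    best.affinity = max(best.affinity, fitness)
    return best

def share(repertoire, neighbour_repertoire, k=1):
    """Migrate the k highest-affinity antibodies to a neighbouring robot instead
    of broadcasting the whole repertoire."""
    for ab in sorted(repertoire, key=lambda a: a.affinity, reverse=True)[:k]:
        neighbour_repertoire.append(Antibody(list(ab.params), list(ab.signature)))

if __name__ == "__main__":
    random.seed(0)
    # Toy stand-in for an embodied evaluation: reward parameters close to the antigen.
    evaluate = lambda params, antigen: -sum((p - a) ** 2 for p, a in zip(params, antigen))
    repertoire = [Antibody([0.0, 0.0], [0.0, 0.0])]
    for antigen in ([0.0, 0.1], [1.0, 1.0], [0.1, 0.0], [1.1, 0.9]) * 20:
        step(repertoire, antigen, evaluate)
    print(len(repertoire), "antibodies in the repertoire")
```

In this sketch each environmental condition ends up covered by its own subcontroller, mirroring how the paper's repertoire of antibody-like subcontrollers covers different portions of the search space; the real algorithm evaluates controllers on physical robots rather than with a closed-form fitness function.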