Human-inspired robotic path planning and heterogeneous robotic mapping
One of the biggest challenges facing robotics is enabling a robot to autonomously navigate unknown, real-world environments; this capability is widely considered a key prerequisite of truly autonomous robots. Autonomous navigation is a complex problem that requires a robot to solve three sub-problems: localisation, goal recognition, and path-planning. Conventional approaches to these problems rely on computational techniques that are inherently rigid and brittle: the underlying models cannot adapt to novel input, nor can they account for all potential external conditions, which can result in erroneous or misleading decision making. In contrast, humans are capable of learning from prior experience and adapting to novel situations. Humans can also share their experiences and knowledge with others to bootstrap their learning. This capacity is widely thought to underlie the success of humanity, allowing high-fidelity transmission of information and skills between individuals and facilitating cumulative knowledge gain. Furthermore, human cognition is influenced by internal emotional states. Although emotions were historically considered a detriment to a person's cognitive processes, recent research regards them as a beneficial mechanism in decision making, facilitating the communication of simple but high-impact information. Human-created control approaches are inherently rigid and cannot account for the complexity of behaviours required for autonomous navigation. The thesis proposed here is that cognitively inspired mechanisms can address the limitations of current robotic navigation techniques by allowing robots to autonomously learn beneficial behaviours through interaction with their environments. The first objective is to enable the sharing of navigation information between heterogeneous robotic platforms.
The second objective is to add flexibility to rigid path-planning approaches by utilising emotions as low-level but high-impact behavioural responses. Inspired by the cognitive sciences, a novel cognitive mapping approach is presented that functions in conjunction with current localisation techniques. The cognitive mapping stage utilises an Anticipatory Classifier System (ACS) to learn a novel Cognitive Action Map (CAM) of decision points: areas in which a robot must determine its next action (direction of travel). These physical actions provide a shared means of understanding the environment, allowing learned navigation information to be communicated. The presented cognitive mapping approach has been trained and evaluated on real-world robotic platforms. The results show the successful sharing of navigation information between two heterogeneous robotic platforms with different sensing capabilities, and demonstrate, for the first time, the novel contribution of autonomously sharing navigation information between a range-based (GMapping) and a vision-based (RatSLAM) localisation approach. Sharing information across localisation techniques allows an individual robotic platform to use the localisation approach best suited to its sensors while still providing useful navigation information to robots with different sensor types. Inspired by theories of natural emotion, this work also presents a novel emotion model designed to improve a robot's navigation performance by learning to adapt a rigid path-planning approach. The model is based on the concept of a bow-tie structure, linking emotional reinforcers and behavioural modifiers through intermediary emotion states. An important function of the emotions in the model is to provide a compact set of high-impact behaviour adaptations, reducing an otherwise tangled web of stimulus-response patterns.
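The decision-point structure described above can be illustrated with a minimal sketch: a CAM represented as a graph whose nodes are decision points and whose edges are the physical actions (directions of travel) available at each one, so that an action sequence forms a sensor-independent route one robot can share with another. All names, the graph layout, and the search routine here are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of a Cognitive Action Map (CAM) as a graph of decision
# points. Class and method names are illustrative only.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    """An area where the robot must choose its next direction of travel."""
    name: str
    # Maps a physical action (e.g. "turn_left") to the next decision point.
    actions: dict = field(default_factory=dict)

class CognitiveActionMap:
    def __init__(self):
        self.points = {}

    def add_point(self, name):
        self.points[name] = DecisionPoint(name)

    def connect(self, src, action, dst):
        """Record that taking `action` at `src` leads to `dst`."""
        self.points[src].actions[action] = dst

    def route(self, start, goal):
        """Breadth-first search over actions: the resulting action sequence
        is what one platform can share with another, regardless of the
        sensors either robot uses to localise itself."""
        queue = deque([(start, [])])
        visited = {start}
        while queue:
            node, path = queue.popleft()
            if node == goal:
                return path
            for action, nxt in self.points[node].actions.items():
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, path + [action]))
        return None

# Example: a corridor leading to a junction with two branches.
cam = CognitiveActionMap()
for n in ("entrance", "junction", "lab", "office"):
    cam.add_point(n)
cam.connect("entrance", "forward", "junction")
cam.connect("junction", "turn_left", "lab")
cam.connect("junction", "turn_right", "office")
print(cam.route("entrance", "office"))  # ['forward', 'turn_right']
```

The point of the sketch is that the shared artefact is a sequence of physical actions, not a metric map, which is why it can cross between range-based and vision-based localisation approaches.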
Crucially, the system learns these emotional responses without a human pre-specifying the robot's behaviour, thereby avoiding human bias. The results of training the emotion model demonstrate that it can learn up to three emotion states for robotic navigation without human bias: fear, apprehension, and happiness. The fear and apprehension responses slow the robot and drive it away from obstacles when the robot experiences pain or is uncertain of its current position. The happiness response increases the robot's speed and reduces the safety margins around obstacles when pain is absent, allowing the robot to drive closer to them. These learned emotion responses improved the robot's navigation performance by reducing collisions and navigation times in both simulated and real-world experiments. The two-emotion model (fear and happiness) improved performance the most, indicating that a robot may require only two emotion states (fear and happiness) for navigation in common, static domains.
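The bow-tie idea, and the reported effect of the learned states on speed and safety margins, can be sketched as follows. The thresholds, function names, and modifier values below are hypothetical placeholders (in the thesis these responses are learned, not hand-coded); the sketch only shows the structure: several reinforcers collapse into one intermediary emotion state, which fans out into a compact set of behavioural modifiers.

```python
# Illustrative sketch of a bow-tie emotion model: reinforcers -> one
# intermediary emotion state -> behavioural modifiers. All thresholds and
# modifier values are hypothetical, not the thesis's learned values.

def emotion_state(pain, uncertainty):
    """Collapse the reinforcers (left side of the bow-tie) into a single
    emotion state (the knot). Inputs are normalised to [0, 1]."""
    if pain > 0.5:
        return "fear"
    if uncertainty > 0.5:
        return "apprehension"
    return "happiness"

def behaviour_modifiers(state):
    """Fan the emotion state out (right side of the bow-tie) into a small
    set of behaviour adaptations: a speed scale applied to the planner's
    velocity command and a safety margin around obstacles, in metres."""
    return {
        "fear":         {"speed_scale": 0.4, "safety_margin": 0.8},
        "apprehension": {"speed_scale": 0.6, "safety_margin": 0.6},
        "happiness":    {"speed_scale": 1.2, "safety_margin": 0.3},
    }[state]

# Pain drives a fear response: slow down and keep clear of obstacles.
print(behaviour_modifiers(emotion_state(pain=0.9, uncertainty=0.1)))
# No pain and low uncertainty: happiness speeds the robot up and lets it
# pass closer to obstacles.
print(behaviour_modifiers(emotion_state(pain=0.0, uncertainty=0.2)))
```

The compactness is the point: however many stimulus patterns feed the left side, the path planner only ever sees a handful of modifiers, which is what lets a small number of emotion states adapt an otherwise rigid planner.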