Autonomously Learning About Meaningful Actions from Exploratory Behaviour
The thesis addresses the problem of creating an autonomous agent that can learn about and use meaningful hand motor actions in a simulated world with realistic physics, much as human infants learn to control their hands. A recent thesis by Mugan presented one approach to this problem using qualitative representations, but that approach suffered from several important limitations. This thesis presents an alternative design that decomposes the learning problem into several distinct learning tasks. It presents a new method, based on the Apriori algorithm, for learning rules about actions. It also presents an infant-inspired planner that uses these rules to solve a range of tasks. Experiments showed that the agent learned meaningful rules and then used them successfully to complete a range of simple planning tasks.
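The abstract names the Apriori algorithm as the basis for the rule-learning method. As a rough illustration of that style of rule learning only, and not the thesis's actual method, the sketch below mines frequently co-occurring events from hypothetical exploration episodes and converts them into confidence-scored rules; all item names, thresholds, and function names are invented for this example.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Find all itemsets whose support (fraction of transactions containing
    them) is at least min_support, using the Apriori level-wise search."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Level 1: frequent single items.
    items = {item for t in transactions for item in t}
    frequent = {frozenset([i]): support(frozenset([i])) for i in items}
    frequent = {s: sup for s, sup in frequent.items() if sup >= min_support}
    all_frequent = dict(frequent)

    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets, then prune any
        # candidate that has an infrequent (k-1)-subset (the Apriori property).
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        frequent = {c: support(c) for c in candidates}
        frequent = {c: sup for c, sup in frequent.items() if sup >= min_support}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

def rules(frequent, min_confidence):
    """Derive rules antecedent -> consequent with confidence >= min_confidence."""
    out = []
    for itemset, sup in frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for antecedent in map(frozenset, combinations(itemset, r)):
                conf = sup / frequent[antecedent]
                if conf >= min_confidence:
                    out.append((set(antecedent), set(itemset - antecedent), conf))
    return out

# Hypothetical exploration episodes, each recorded as a set of qualitative events.
episodes = [
    {"hand_moves_right", "hand_contacts_block", "block_moves_right"},
    {"hand_moves_right", "hand_contacts_block", "block_moves_right"},
    {"hand_moves_left", "no_contact"},
    {"hand_moves_right", "no_contact"},
]
for antecedent, consequent, conf in rules(apriori(episodes, 0.4), 0.8):
    print(antecedent, "->", consequent, f"(confidence {conf:.2f})")
```

Run on these toy episodes, the sketch yields rules such as {hand_contacts_block} -> {block_moves_right}, which gives a flavour of how frequent co-occurrences in exploratory data can be turned into action rules; how the thesis itself represents episodes, states, and rules is described in the later chapters.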