Developmental Robotics with Autoencoded Vision
Kyle Richmond-Crosset, Lisa Meeden
Developmental robotics, sometimes referred to as epigenetic robotics, is a subfield of robotics that studies developmental mechanisms and architectures that allow for open-ended, lifelong learning and that often mimic human development. In 2017, Professor Lisa Meeden and Douglas S. Blank published Developing Grounded Goals through Instant Replay Learning, which describes a developmental robotic architecture that focuses on dramatic changes in sensory information during basic exploration. The magnitude of change in the robot's sensor readings defines its interest level in the current situation. When interest rises above a threshold, the robot trains itself to remember the moments leading up to that state so that it can reproduce those states as achievable goals in the future.
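The threshold-and-replay idea above can be sketched in a few lines of Python. This is an illustrative toy, not Meeden and Blank's implementation: the interest measure (Euclidean distance between consecutive sensor vectors), the class and function names, and the buffer size are all assumptions made for the sake of the example.

```python
from collections import deque

import numpy as np


def interest(prev_sensors, curr_sensors):
    """Interest as the magnitude of sensor change (illustrative definition)."""
    return float(np.linalg.norm(np.asarray(curr_sensors) - np.asarray(prev_sensors)))


class ReplayLearner:
    """Toy sketch of instant-replay goal learning: keep a short buffer of
    recent sensor readings and, when interest exceeds a threshold, store
    the buffered lead-up as a goal sequence to be made repeatable later.
    Hypothetical names; not the published architecture's code."""

    def __init__(self, threshold=0.5, history=10):
        self.threshold = threshold
        self.buffer = deque(maxlen=history)  # moments leading up to now
        self.goal_sequences = []             # remembered lead-ups (goals)

    def step(self, prev_sensors, curr_sensors):
        self.buffer.append(curr_sensors)
        if interest(prev_sensors, curr_sensors) > self.threshold:
            # A dramatic sensory change: remember how we got here.
            self.goal_sequences.append(list(self.buffer))
```

In this toy, a slowly changing light sensor produces no goals, while a sudden jump (say, a collision registering on a stall sensor) causes the preceding readings to be stored as a reproducible goal sequence.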
This summer, working with Professor Meeden, I extended the aforementioned work by implementing robotic vision. Meeden and Blank's initial experiments were based on dramatic changes in a small set of basic sensors such as stall, light, and sonar. The goal of this summer's work was to scale the framework to handle more complex sensors, such as vision, where each image consists of thousands of pixel values. I redesigned the interest and vision components so the architecture could incorporate camera readings effectively. I developed several convolutional autoencoders, including variational autoencoders, which condense camera data into high-level abstractions rather than using the raw pixel values directly. These abstractions proved to be effective visual features that allowed the developmental system to recognize key moments in the visual stream. I also found, preliminarily, that the standard autoencoder produces more effective abstractions than the variational autoencoder, which often yields disentangled representations but compresses the camera data so aggressively that the encoded representation becomes difficult to interpret.
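To make the compression step concrete, the following is a minimal convolutional autoencoder sketch in PyTorch. The layer sizes, latent dimension, and assumption of 32x32 grayscale camera frames are illustrative choices, not the architectures actually developed in this project; the point is only how an encoder condenses thousands of pixel values into a small latent abstraction that the interest mechanism can operate on.

```python
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Illustrative convolutional autoencoder: compresses a 1x32x32 image
    into a small latent code and reconstructs the image from it."""

    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, latent_dim),         # high-level abstraction
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (16, 8, 8)),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)            # condensed visual feature vector
        return self.decoder(z), z
```

Training minimizes reconstruction error (e.g., mean squared error between input and output), after which the latent vector `z` can replace raw pixels as the "camera sensor" the developmental architecture monitors for dramatic changes.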