At RIPL, our research focuses on machine learning-based methods that enable robots to operate effectively in unprepared environments, with a particular emphasis on settings in which robots work with and alongside people in our homes and workplaces. We are developing perception algorithms with which robots efficiently learn models of the objects, locations, and people in their environment—models that enable robots to usefully act within and interact with their surroundings. We are particularly interested in algorithms that take as input multi-modal observations of a robot's surroundings (e.g., laser range data, image streams, and a user's natural language speech) and infer properties of the objects, places, people, and events that comprise the robot's environment.

The following highlights some of the lab's research directions. Please see the publications page for an updated list of papers.