The Robot Intelligence through Perception Lab (RIPL) at TTI-Chicago develops intelligent, perceptually aware robots that are able to work effectively with and alongside people in unstructured environments.
RIPL is directed by Professor Matthew R. Walter. Our research focuses on advanced perception algorithms that endow robots with a rich awareness of their surroundings and the ability to interact safely and naturally with humans. We are particularly interested in algorithms that take as input multi-modal observations of a robot’s surroundings (e.g., laser range data, image streams, and speech) and infer properties of the objects, places, people, and events that make up a robot’s environment.
We are looking for talented PhD students who are excited about computer vision, natural language understanding, and machine learning for robotics. If you are interested in joining us, please consider applying to TTI-Chicago.
Congratulations to Jiading Fang for having his paper on learning consistent multi-view scene representations accepted at ECCV!
Congratulations to Takuma Yoneda, whose paper on learning invariant representations for RL was accepted at RSS!
Older news is available here.