CS 333: Safe and Interactive Robotics

Fall 2017-2018, Class: Tue, Thu 3:00-4:20pm, McMurtry 360


Description:

Once confined to the manufacturing floor, robots are rapidly entering the public space: drones, surgical robots, service robots, and self-driving cars are becoming tangible technologies that shape the human experience. Our goal in this class is to learn about and design algorithms that enable robots to reason about their actions; to interact with one another, with humans, and with the environment they live in; and to plan safe strategies that humans can trust and rely on.

This is a project-based graduate course that studies algorithms from formal methods, control theory, and robotics that can improve state-of-the-art human-robot systems. We focus on designing new algorithms for safe and interactive autonomy.


Format:

The course is a combination of lectures and reading sessions. The lectures cover the fundamentals of motion planning, formal methods in robotics, learning from demonstration, intent inference, and shared control, which are required for modeling and designing safe and interactive human-robot systems. During the reading sessions, students present and discuss recent contributions in this area. Throughout the semester, each student works on a related research project and presents it at the end of the semester.


Prerequisites:

There are no official prerequisites, but introductory courses in artificial intelligence and robotics are recommended.


Learning Objectives:

By the end of this course you will have gained knowledge of how various topics apply to the design of safe and interactive autonomous systems: temporal logics, reactive synthesis, planning and control, learning and human modeling, game-theoretic foundations of interactive systems, safe learning, and more.

You will also gain hands-on experience working on a research project, along with the following research skills: analyzing the literature on a particular topic, critiquing papers, and presenting research ideas.


Staff

Dorsa Sadigh

Instructor

Office Hours: Mondays 11am-12pm, Gates 142 (by appointment)

Toki Migimatsu

Course Assistant



Timeline

Date Lecture Handouts / Deadlines Notes
Tue, Sep 26 Introduction to Safe and Interactive Robotics Syllabus, Template for Scribes, Survey
Slides
Thu, Sep 28 Lecture:  Motion Planning
  • Spatial Planning: A Configuration Space Approach. Lozano-Perez. (1983).
  • Analysis of Probabilistic Roadmaps for Path Planning. Kavraki, et al. (1998).
  • Randomized Kinodynamic Planning. LaValle, et al. (2001).
  • Path Planning in Expansive Configuration Spaces. Hsu, et al. (1997).
Laura, Charles
Notes
Tue, Oct 03 Lecture:  Trajectory Optimization Mingyu, Maxime, Keven
Notes
Thu, Oct 05 Lecture:  Formal Methods in Robotics
  • Church’s Problem Revisited. Kupferman and Vardi. (1999).
  • Temporal Logic-based Reactive Mission and Motion Planning. Kress-Gazit, et al. (2009).
  • A Fully Automated Framework for Control of Linear Systems from Temporal Logic Specifications. Kloetzer, et al. (2008).
  • Synthesis for Human-in-the-Loop Control Systems. Li, et al. (2014).
  • Optimization-based Trajectory Generation with Linear Temporal Logic Specifications. Wolff, et al. (2014).
  • Model Predictive Control with Signal Temporal Logic Specifications. Raman, et al. (2014).
Eli, Hesam, Kyle, Chelsea
Tue, Oct 10 Lecture:  Formal Methods in Robotics
  (Continued from Thu, Oct 05; same readings.)
?
Thu, Oct 12 Reading:  Safe Learning and Control
    P1: A General Safety Framework for Learning-Based Control in Uncertain Robotic Systems. Fisac, et al. (Pros: Hesam, Cons: Sumeet).
  • Concrete Problems in AI Safety. Amodei, et al. (2016).
  • Reachability-based Safe Learning with Gaussian Processes. Akametalu, et al. (2014).
  • Safe Exploration for Optimization with Gaussian Processes. Sui, et al. (2015).
  • Safe Control under Uncertainty with Probabilistic Signal Temporal Logic. Sadigh, et al. (2016).
  • Safe Visual Navigation via Deep Learning and Novelty Detection. Richter, et al. (2017).
  • Diagnosis and Repair from Signal Temporal Logic Specifications. Ghosh, et al. (2016).
Sumeet, Hesam
Tue, Oct 17 Reading:  Safe Learning and Control Mingyu, Lin, Maxime, Hans
Thu, Oct 19 Reading:  Adversarial Neural Networks Nipun, Kyle, Shane, Pengda
Tue, Oct 24 Reading:  Models of Cognition Project Proposal Reports Due
  • Probabilistic Models of Cognition: Conceptual Foundations. Chater, et al. (2006).
  • Probabilistic Models of Cognition: Exploring representations and inductive biases. Griffiths, et al. (2010).
  • Helping People Make Better Decisions Using Optimal Gamification. Lieder, et al. (2016).
  • A Reward Shaping Method for Promoting Metacognitive Learning. Lieder, et al. (2016).
  • Prospect Theory. Kahneman and Tversky. (1979).
Sharon, Chelsea, Yuhang, Pengda
Thu, Oct 26 Project Proposal Presentations
Tue, Oct 31 Reading:  Learning from Demonstration
  • Maximum Margin Planning. Ratliff, et al. (2006).
  • Maximum Entropy IRL. Ziebart, et al. (2008).
  • Movement Primitives via Optimization. Dragan, et al. (2015).
  • Active Preference-Based Learning of Reward Functions. Sadigh, et al. (2017).
Karen, Hesam, Shane, Sumeet
Thu, Nov 02 Reading:  Learning from Demonstration
  • Socially Compliant Mobile Robot Navigation via IRL. Kretzschmar, et al. (2016).
  • Predicting Human Reaching Motion in Collaborative Tasks using Inverse Optimal Control and Iterative re-planning. Mainprice, et al. (2015).
  • Trajectories and Keyframes for Kinesthetic Teaching. Akgun, et al. (2012).
  • Designing Robot Learners that Ask Good Questions. Cakmak, et al. (2012).
  • Using perspective taking to learn from ambiguous demonstrations. Breazeal, et al. (2012).
  • Understanding Intentions of Others. Meltzoff, et al. (1995).
  • Infant Imitation After a 1-Week Delay. Meltzoff, et al. (1988).
  • Rational Imitation in Preverbal Infants. Gergely, et al. (2002).
Yuhang, Gleb, Hans, Sharon
Tue, Nov 07 Reading:  Intent Inference
  • Planning Based Prediction for Pedestrians. Ziebart, et al. (2009).
  • Goal Inference as Inverse Planning. Baker, et al. (2007).
  • Intention-Aware Motion Planning. Bandyopadhyay, et al. (2013).
  • Shared Autonomy via Hindsight Optimization. Javdani, et al. (2015).
Vikranth, Pengda, Haruki, Maxime
Thu, Nov 09 Lecture:  Experimental Design Project Milestone Reviews Due
Tue, Nov 14 Guest Lecture:  Mo Chen
Thu, Nov 16 Reading:  Intent Expression and Intent in HRI
  • Generating Legible Motion. Dragan, et al. (2013).
  • Manipulating Mental States through Physical Action. Gray, et al. (2014).
  • Generating Plans that Predict Themselves. Fisac, et al. (2016).
  • Expressing Thought: Improving Robot Readability with Animation Principles. Takayama, et al. (2011).
  • Anticipation in Robot Motion. Gielniak, et al. (2011).
  • Robot Navigation in Dense Human Crowds. Trautman, et al. (2013).
  • Planning for Autonomous Cars that Leverages Effects on Human Actions. Sadigh, et al. (2016).
  • Information Gathering Actions over Human Internal State. Sadigh, et al. (2016).
  • Robot Planning with Mathematical Models of Human State and Action. Dragan. (2017).
  • Formalizing Human-Robot Mutual Adaptation: A Bounded Memory Model. Nikolaidis, et al. (2016).
Gleb, Gene, Lin, Chelsea
Tue, Nov 21 Thanksgiving Break
Thu, Nov 23 Thanksgiving Break
Tue, Nov 28 Reading:  Communication and Coordination
  • Learning Behavior Styles with Inverse Reinforcement Learning. Lee, et al. (2010).
  • Asking for Help Using Inverse Semantics. Tellex, et al. (2014).
  • Knowledge and implicature: Modeling language understanding as social cognition. Goodman, et al. (2013).
  • Coordination Mechanisms in Human-Robot Collaboration. Mutlu, et al. (2013).
  • Conversational Gaze Aversion for Humanlike Robots. Andrist, et al. (2014).
  • Robot Deictics: How Gesture and Context Shape Referential Communication. Sauppé, et al. (2014).
Karen, Vikranth, Gene, Haruki
Thu, Nov 30 Reading:  Collaboration
  • Implicitly Assisting Humans to Choose Good Grasps in Robot to Human Handovers. Bestick, et al. (2016).
  • Cooperative Inverse Reinforcement Learning. Hadfield-Menell, et al. (2016).
  • Human-Robot Cross-Training. Nikolaidis, et al. (2013).
Sharon, Nipun, Mingyu, Kyle
Tue, Dec 05 Project Presentations
Thu, Dec 07 Project Presentations



Grading

Component Contribution to Grade
Final Project 50%
Student Presentations & Paper Reviews 30%
Scribing & Class Participation 20%
Total 100%

Project Grading

Component Contribution to Grade
Project Proposal Reports 5%
Project Proposal Presentations 5%
Project Milestone Reviews 10%
Project Presentation (possibly with demos) 15%
Final Project Report 15%
Total 50%

This class is partially based on the following existing courses:
Algorithmic Human-Robot Interaction (Berkeley)
Cooperative Machines (MIT)
Computer-Aided Verification (Berkeley)
Human-Robot Interaction (Georgia Tech)


    © Dorsa Sadigh 2017