Program

The event will be split into two parts. The first part consists of talks by invited speakers, who will present the state of the art in related fields and define open research questions. The second part is a poster session for submitted papers. A tentative program follows.

Time Talk
8:30 - 8:45 Welcome and Introduction
8:45 - 9:10 Andrey Rudenko, Robert Bosch GmbH Corporate Research

Human Motion Prediction: Problems, Methods and Challenges

With growing numbers of intelligent systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents, and planning with such predictions in mind, are important tasks for intelligent vehicles, service robots and advanced visual surveillance systems. What makes this task challenging is that human motion is influenced by a large variety of factors, including the person’s intention; the presence, attributes and actions of other surrounding agents; and the geometry and semantics of the environment. In this talk, I will present our current results on surveying, analyzing and addressing the human motion prediction problem. The first part of the talk summarizes a comprehensive analysis of the literature, in which we categorize existing methods, propose a new taxonomy, discuss limitations of state-of-the-art approaches and outline the open challenges. In the second part of the talk, I will highlight some of our research on predicting long-term motion of people in public spaces, learning occupancy priors in urban environments and benchmarking predicted trajectories.
Link to the survey on human motion prediction techniques

Slides
9:10 - 9:35 Dinesh Manocha, University of Maryland at College Park

Prediction of Human Motion and Traffic Agents in Dense Environments

We give an overview of our recent work on predicting the trajectories of pedestrian agents in a crowd and of traffic agents (cars, bicycles, buses, pedestrians) in dense settings. Our approach combines model-based methods, which take into account the geometric shape, dynamics, and behavior of each agent, with learning-based methods that use deep neural networks. We are able to track the trajectory of each agent from a single camera video in real time and predict their trajectories over short-term (1-3 seconds) and long-term (4-6 seconds) horizons. In practice, our approach can model dense human crowds as well as heterogeneous urban traffic videos. We will highlight the performance on many challenging scenarios and point out open issues for future work. Joint work with GAMMA group members at the University of Maryland at College Park.

Slides
9:35 - 10:00 Pitch Talks of Papers 1, 2, 3, 4, 5
10:00 - 10:30 Coffee break
10:30 - 10:55 Mohan Trivedi, UC San Diego

On Understanding Human Motion, Activities and Intentions for Safe Autonomous Driving

These are truly exciting times, especially for researchers and scholars active in robotics and intelligent systems. Fruits of their labor are enabling transformative changes in daily lives of the general public. In this presentation we will focus on changes affecting our mobility on roads with highly automated intelligent vehicles. Autonomous driving is no longer confined to science fiction or specialized research laboratories, but is offered for the general public to ride. We discuss issues related to the understanding of human agents interacting with the automated vehicle, either as occupants of such vehicles, or in their vicinity, as pedestrians, cyclists, or inside surrounding vehicles. These issues require deeper examination and careful resolution to assure safety, reliability and robustness of these highly complex systems for operation on public roads. The presentation will highlight recent research dealing with understanding of activities, behavior, intentions of humans specifically in the context of autonomous driving.
Links to related papers
10:55 - 11:20 Alexandre Massoud Alahi, EPFL

Forecasting Human Mobility with Deep Learning: Challenges and Recipes

Over recent years, various deep-learning-based forecasting models have been proposed to predict human motion behaviors - the unspoken social rules of mobility. In this talk, I will review state-of-the-art models based on LSTMs, GANs, and MLPs, share their limitations, and present some of our ongoing work to address them.
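As a point of reference for such learned forecasters, trajectory-prediction papers commonly report a constant-velocity baseline. The sketch below is purely illustrative (it is not any speaker's model; the function name is our own) and implements that baseline in a few lines of NumPy:

```python
import numpy as np

def constant_velocity_forecast(track, horizon):
    """Extrapolate the last observed velocity for `horizon` future steps.

    track: (T, 2) array of observed (x, y) positions at fixed time steps.
    Returns a (horizon, 2) array of predicted positions.
    """
    track = np.asarray(track, dtype=float)
    velocity = track[-1] - track[-2]           # last observed displacement
    steps = np.arange(1, horizon + 1)[:, None]  # column of step indices
    return track[-1] + steps * velocity

# A pedestrian walking in a straight line is predicted exactly.
obs = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
pred = constant_velocity_forecast(obs, horizon=2)
# pred -> [[3.0, 1.5], [4.0, 2.0]]
```

Learned models are typically judged by how much they improve over this baseline on metrics such as average displacement error.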

Slides
11:20 - 11:45 David Hsu, National University of Singapore

Motion Prediction for Autonomous Driving in Dense Traffic

Motion prediction of pedestrians and vehicles is critical for autonomous driving. For accurate prediction, we must account for (i) the agent's intention, (ii) physical motion constraints, and (iii) interaction with a heterogeneous set of other agents, including both pedestrians and various types of vehicles. More importantly, motion prediction must be integrated with decision making to enable autonomous driving. In this talk, I will discuss our work on extending the ORCA model for motion prediction of pedestrians and vehicles and using the extended model within a partially observable Markov decision process (POMDP) for real-time online decision-making in dynamic environments.
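To illustrate the intention component (i) in isolation: in POMDP-style driving planners, an agent's latent goal is often tracked with a Bayes filter that favors goals the agent is currently heading toward. The sketch below is a hypothetical toy, not the ORCA/POMDP system described in the talk; the function name and the `kappa` sharpness parameter are assumptions.

```python
import numpy as np

def update_intention_belief(belief, goals, position, velocity, kappa=2.0):
    """One Bayesian update of a belief over candidate goals (intentions).

    Observed motion toward a goal raises that goal's probability: the
    likelihood grows with the cosine between the agent's velocity and
    the direction to each goal. `kappa` sharpens the update.
    """
    belief = np.asarray(belief, dtype=float)
    likelihoods = []
    for goal in goals:
        to_goal = np.asarray(goal, dtype=float) - position
        cos = np.dot(velocity, to_goal) / (
            np.linalg.norm(velocity) * np.linalg.norm(to_goal) + 1e-9)
        likelihoods.append(np.exp(kappa * cos))
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()

# A pedestrian at the origin walking along +x: the belief shifts
# toward the first candidate crossing, which lies in that direction.
goals = [np.array([10.0, 0.0]), np.array([0.0, 10.0])]
belief = update_intention_belief(
    np.array([0.5, 0.5]), goals, np.zeros(2), np.array([1.0, 0.0]))
```

Repeating this update each time step yields an intention belief that a planner can condition its predictions on.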
11:45 - 12:45 Pitch Talks of Papers 6, 7, 8, 9, 10
Poster session
12:45 - 13:45 Lunch
13:45 - 14:10 Jim Mainprice, University of Stuttgart

Human Motion Prediction for "Human-Aware" Robots

Conceptualizing control architectures that account explicitly for the human is both challenging and of fundamental importance to robotics. However, to this day, despite decades of effort in motion planning and control, mobile manipulators still fail to achieve human-level coordination with the other humans in their environments. In this talk I will present ongoing work to bridge this gap by integrating human motion prediction and model predictive control. This approach makes it possible to handle intricate human-robot behavior robustly. To motivate this approach, I will present some work on handover tasks where the robot proactively plans solutions that involve human movement. I will then show how human space-sharing criteria can be extracted from interactive human motion capture using inverse optimal control. Finally, I will go over recent work in which motion capture data is used to derive short-term dynamical models encoding full-body human movement that can be combined with trajectory optimization once the high-level intent is identified. I will conclude by introducing a robot architecture that can take these predictions into account.

Slides
14:10 - 14:35 Michiel van de Panne, University of British Columbia

Physics-based Human Movement: Shared Models for Animation, Robotics, Vision, and Biomechanics

Recent advances in reinforcement learning have provided a powerful framework for designing the control that enables human simulations and robots to move with skill and grace. We review a recent physics-based imitation model and describe its direct application to the problems of prediction and control, as applied to computer vision, robotics, biomechanics, and computer animation. We discuss some of the challenges for achieving further scalability for these models.

Slides
14:35 - 15:00 Emel Demircan, California State University Long Beach

Understanding Human Perception in Manipulation and Locomotion Skills

Human motor performance is a key area of investigation in biomechanics, robotics, and machine learning. Understanding human neuromuscular control is important for synthesizing prosthetic motions and ensuring safe human-robot interaction. Building controllable biomechanical models through modeling and algorithmic tools from both robotics and biomechanics increases our scientific understanding of musculoskeletal mechanics and control. The resulting models can consequently help quantify the characteristics of a subject’s motion and inform the design of effective treatments, such as predictive simulations and motion training. My objective is to explore how neural control dictates motor performance in humans by developing a portable, soft, cyber-physical system and a computational framework that incorporates real-time robotics-based control, AI-based perception and learning, and OpenSim’s musculoskeletal models. In this talk, I will present the modeling, control, and simulation components of this new framework with two examples on human manipulation and locomotion skills. The presented framework promises to advance the field of rehabilitation robotics by deepening our scientific understanding of human motor performance as dictated by musculoskeletal physics and neural control. Automated and real-time motion improvement and retraining, facilitated by such frameworks, promise to transform the neuromuscular health, longevity, and independence of millions of people in a cost-effective way.
15:00 - 15:30 Coffee break
15:30 - 15:55 Andrea Bajcsy, Berkeley

Confidence-aware Motion Prediction for Real-time Collision Avoidance

One of the most difficult challenges in robot motion planning is to account for the behavior of other moving agents, such as humans. Commonly, practitioners employ predictive models to reason about where other agents are going to move. Though there has been much recent work in building predictive models, no model is ever perfect: an agent can always move unexpectedly, in a way that is not predicted or not assigned sufficient probability. In such cases, the robot may plan trajectories that appear safe but in fact lead to collision. Rather than trust a model’s predictions blindly, we propose that the robot should use the model’s current predictive accuracy to inform the degree of confidence in its future predictions. This model confidence inference allows us to generate probabilistic motion predictions that exploit modeled structure when the structure successfully explains human motion, and degrade gracefully whenever the human moves unexpectedly. In this talk I will discuss how we accomplish this by maintaining a Bayesian belief over a single parameter which governs the variance of our human motion model. We couple this prediction algorithm with a recently proposed robust motion planner and controller to guide the construction of robot trajectories which are, to a good approximation, collision-free with a high, user-specified probability. I will also discuss the overall safety properties of this approach by establishing a connection to reachability analysis, and conclude with recent work on scaling up this framework for multi-robot, multi-human collision avoidance.
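The core idea, a Bayesian belief over a single parameter governing the variance of the motion model, can be sketched with a discrete set of candidate noise scales. The code below is an illustrative toy, not the authors' implementation; the function names and the two-value sigma discretization are assumptions.

```python
import numpy as np

def gaussian_pdf(x, sigma):
    """Density of a zero-mean Gaussian with standard deviation sigma at x."""
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def update_confidence(belief, sigmas, error):
    """One Bayes update of a discrete belief over the model's noise scale.

    belief: prior probabilities over the candidate sigmas.
    sigmas: candidate standard deviations of the prediction error.
    error:  newly observed prediction error (predicted vs. actual motion).
    """
    likelihood = gaussian_pdf(error, np.asarray(sigmas, dtype=float))
    posterior = np.asarray(belief, dtype=float) * likelihood
    return posterior / posterior.sum()

# A large observed error shifts the belief toward the high-variance
# hypothesis, i.e. the planner should trust the model less; a small
# error restores confidence in the low-variance hypothesis.
sigmas = [0.1, 1.0]
prior = np.array([0.5, 0.5])
after_large_error = update_confidence(prior, sigmas, error=2.0)
after_small_error = update_confidence(prior, sigmas, error=0.05)
```

A downstream planner can then inflate predicted occupancy regions in proportion to the posterior weight on the high-variance hypothesis, degrading gracefully when the human moves unexpectedly.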

Slides
15:55 - 16:20 Edward Schmerling, Stanford

Mitigating the "Element of Surprise" in Model-Based Robot Planning

Taking into account the full breadth of possibilities in how humans may respond to a robot's actions is a key component of enabling safe, anticipatory, and proactive robot interaction policies. By reasoning in the present about the relative likelihoods of multiple highly distinct future outcomes, a robot can hope to avoid costly surprises that might arise from optimizing against, e.g., a single most likely prediction. In this talk I will first give an overview of our recent work in learning deep generative trajectory prediction models amenable to planning applications in multi-agent scenarios where surrounding agents may dynamically come into and out of relevance. Then, in the context of enabling safer autonomous driving, I will describe our experimental work in incorporating these models into a planning framework with a minimally interventional safety controller that accounts for a further class of "surprises": when human actions consistently defy the robot's predictive distributions.

Slides
16:20 - 16:45 Closing

Accepted Papers for Pitch Talks

The accepted papers will be presented during the two pitch-talk and poster sessions. Each paper will be presented in a short three-minute talk. The following table lists the order of presentation: the first five papers will be presented during the first pitch-talk session (starting at 9:35), and the second session will start around 11:45. Right after the second pitch-talk session, a poster session for all the papers will be held in the same room.

Number Title Authors
1 SE(3) Multimotion Estimation Through Occlusion Kevin Judd and Jonathan Gammell
2 Spatiotemporal Learning of Directional Uncertainty in Urban Environments Weiming Zhi, Ransalu Senanayake, Lionel Ott and Fabio Ramos
3 Schedule-based Motion Prediction for Human-Centric Autonomous Observation David Kent and Sonia Chernova
4 The Emotionally Intelligent Robot: Improving Socially-aware Human Prediction in Crowded Environments Aniket Bera, Tanmay Randhavane and Dinesh Manocha
5 Human Motion Prediction Framework for Safe Flexible Robotized Warehouses Tomislav Petković, Jakub Hvězda, Tomáš Rybecký, Ivan Marković, Miroslav Kulich, Libor Přeučil and Ivan Petrović
6 Dynamic Hilbert Maps: Real-Time Occupancy Predictions in Changing Environments Vitor Guizilini, Ransalu Senanayake and Fabio Ramos
7 Using Maximum Entropy Deep Inverse Reinforcement Learning to Learn Personalized Navigation Strategies Abhisek Konar, Bobak Hamed Baghi and Gregory Dudek
8 An Integrative Approach of Social Dynamic Long Short-Term Memory and Deep Reinforcement Learning for Socially Aware Robot Navigation Xuan-Tung Truong and Trung Dung Ngo
9 Scene Induced Multi-Modal Trajectory Forecasting via Planning Nachiket Deo and Mohan Trivedi
10 Spatio-temporal Representation of Time-varying Pedestrian Flows Tomas Vintr, Sergi Molina, Ransalu Senanayake, George Broughton, Zhi Yan, Jiri Ulrich, Tomasz Piotr Kucner, Chittaranjan Srinivas Swaminathan, Filip Majer, Maria Stachova, Achim Lilienthal and Tomáš Krajník

Intended audience

The topic of human motion prediction is of interest to researchers from different scientific areas, such as motion planning, learning and control, human-robot interaction, intelligent transportation systems and computer vision. However, it is rarely in the spotlight and is usually treated as a small, single component of a larger research problem. This workshop aims to build a platform for researchers interested in the development of reliable motion prediction approaches and of tools to evaluate their performance and quality. It will feature a diverse set of high-profile invited speakers from both academia and industry, whose expertise covers a wide spectrum of topics, including computer vision, human-robot interaction, and autonomous vehicles.