Autonomous Motion

Department Talks

Learning Complex Robot-Environment Interactions

Talk
  • 26 October 2017 • 11:00—12:15
  • Jens Kober
  • AMD meeting room

The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. Reinforcement learning and imitation learning are two distinct but complementary machine learning approaches commonly used for learning motor skills.
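Purely as an illustrative sketch of how the two approaches can complement each other (not the speaker's method; the toy 1-D task, linear policy, and random-search update are all assumptions), the snippet below warm-starts a policy from demonstrations via behavior cloning, then fine-tunes it with reward feedback:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D skill: the "expert" maps state s to action a = 2*s (an assumption).
def reward(theta, states):
    return -np.mean((theta * states - 2.0 * states) ** 2)

# --- Imitation learning: behavior cloning from noisy demonstrations ---
s_demo = rng.uniform(-1.0, 1.0, 32)
a_demo = 2.0 * s_demo + 0.3 * rng.normal(size=32)        # imperfect expert
theta = np.sum(s_demo * a_demo) / np.sum(s_demo ** 2)    # least-squares fit

# --- Reinforcement learning: random-search fine-tuning of the cloned policy ---
for _ in range(200):
    states = rng.uniform(-1.0, 1.0, 64)                  # fresh rollout states
    candidate = theta + 0.05 * rng.normal()
    if reward(candidate, states) > reward(theta, states):  # keep improvements
        theta = candidate

print(f"cloned-then-fine-tuned gain: {theta:.3f} (expert gain: 2.0)")
```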

Organizers: Dieter Büchler


Structured Deep Visual Dynamics Models for Robot Manipulation

Talk
  • 23 October 2017 • 10:00—11:15
  • Arunkumar Byravan
  • AMD meeting room

The ability to predict how an environment changes based on forces applied to it is fundamental for a robot to achieve specific goals. Traditionally in robotics, this problem is addressed through the use of pre-specified models or physics simulators, taking advantage of prior knowledge of the problem structure. While these models are general and have broad applicability, they depend on accurate estimation of model parameters such as object shape, mass, and friction. On the other hand, learning-based methods such as Predictive State Representations or more recent deep learning approaches have looked at learning these models directly from raw perceptual information in a model-free manner. These methods operate on raw data without any intermediate parameter estimation, but lack the structure and generality of model-based techniques. In this talk, I will present work that tries to bridge the gap between these two paradigms by proposing a specific class of deep visual dynamics models (SE3-Nets) that explicitly encode strong physical and 3D geometric priors (specifically, rigid-body dynamics) in their structure. As opposed to traditional deep models that reason about dynamics/motion at the pixel level, we show that the physical priors implicit in our network architectures enable them to reason about dynamics at the object level: our network learns to identify objects in the scene and to predict rigid-body rotation and translation per object. I will present results on applying our deep architectures to two specific problems: 1) modeling scene dynamics, where the task is to predict future depth observations given the current observation and an applied action, and 2) real-time visuomotor control of a Baxter manipulator based only on raw depth data. We show that 1) our proposed architectures significantly outperform baseline deep models on dynamics modeling, and 2) our architectures perform comparably or better than baseline models for visuomotor control while operating at camera rates (30 Hz) and relying on far less information.
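The key structural idea, predicting one rigid-body motion per object and blending the motions with learned object masks, can be sketched as a single layer. Below is a minimal, hypothetical numpy version of such a forward pass (shapes and names are assumptions; the actual SE3-Nets predict the masks and transforms with convolutional encoders, omitted here):

```python
import numpy as np

def se3_transform_layer(points, masks, rotations, translations):
    """Blend K rigid-body motions over a point cloud.

    points:       (N, 3) input 3D points (e.g. from a depth image)
    masks:        (N, K) soft object-assignment weights, rows sum to 1
    rotations:    (K, 3, 3) predicted rotation matrix per object
    translations: (K, 3) predicted translation per object
    returns:      (N, 3) predicted next-frame points
    """
    # Transform every point under every object's rigid motion: (K, N, 3).
    moved = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    # Blend per point by the soft masks: each point follows "its" object.
    return np.einsum('nk,kni->ni', masks, moved)

# Tiny smoke test: two objects, one translating along +x, one static.
pts = np.random.rand(5, 3)
masks = np.tile([[1.0, 0.0]], (5, 1))            # all points on object 0
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.1, 0.0, 0.0], [0.0, 0.0, 0.0]])
print(se3_transform_layer(pts, masks, R, t) - pts)  # ~ [0.1, 0, 0] per point
```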

Organizers: Franzi Meier


Machine Ethics

Talk
  • 20 October 2017 • 11:00—12:00
  • Michael and Susan Leigh Anderson
  • AMD Seminar Room

We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of their actions as well as a control abstraction for managing unanticipated behavior.
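Purely as an illustration of the last step, discovering a candidate principle from cases on which ethicists agree, here is a toy sketch that fits an interpretable decision tree over hypothetical ethically relevant feature scores (the speakers' actual methodology may differ; the features, scores, and labels below are invented):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Each agreed-upon case scores two hypothetical ethically relevant features,
# harm_prevented and autonomy_respected, in {-2..2}; the label is the action
# ethicists agreed on (1 = act, 0 = don't act).
cases = [
    # harm_prevented, autonomy_respected
    [ 2, -1],   # act: large harm prevented outweighs small autonomy cost
    [ 1,  1],
    [ 0, -2],
    [-1,  0],
    [ 2, -2],
    [ 0,  2],
]
labels = [1, 1, 0, 0, 1, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(cases, labels)
# The learned tree is a human-readable candidate "principle" that can be
# inspected, justified, and debated by ethicists.
print(export_text(tree, feature_names=['harm_prevented', 'autonomy_respected']))
```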

Organizers: Vincent Berenz


Challenges of Writing and Maintaining Programs for Robots

Talk
  • 04 August 2017 • 11:30—12:45
  • Mirko Bordignon
  • AMD meeting room

Writing and maintaining programs for robots poses some interesting challenges. Such programs are hard to generalize about, as their targets are more than computing platforms. It can be deceptive to see them as input-to-output mappings, as interesting environments produce unpredictable inputs, and mixing reactive and deliberative behavior makes intended outputs hard to define. Given the wide and fragmented landscape of components, from hardware to software, and the parties involved in providing and using them, integration is also a non-trivial aspect. The talk will illustrate the work ongoing at Fraunhofer IPA to tackle these challenges, how Open Source is its common trait, and how this translates into the industrial field thanks to the ROS-Industrial initiative.
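To make the input-to-output-mapping pitfall concrete, here is a deliberately naive reactive ROS node using rospy (the topic names, message types, and distance threshold are assumptions for illustration):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

def on_scan(scan):
    cmd = Twist()
    # Naive reactive rule: drive forward unless something is closer than 0.5 m.
    # Real environments make even this simple mapping unpredictable
    # (sensor noise, dropouts, dynamic obstacles).
    if min(scan.ranges) > 0.5:
        cmd.linear.x = 0.2
    pub.publish(cmd)

rospy.init_node('naive_reactive_driver')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()
```

Even this trivial mapping inherits the problems above: the intended output becomes ill-defined the moment deliberative behavior, such as a navigation goal, must be mixed with the reactive rule.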

Organizers: Vincent Berenz


On the Sense of Agency and of Object Permanence in Robots

Talk
  • 27 June 2017 • 13:30—14:30
  • Sarah Bechtle
  • N2.025 (AMD seminar room - 2nd floor)

This work investigates the development of the sense of agency and of object permanence in humanoid robots. Based on findings from developmental psychology and neuroscience, the development of a sense of object permanence is linked to the development of a sense of agency and to processes of internal simulation of sensory activity. Two sets of experiments will be presented. In the first, a humanoid robot learns the forward relationship between its movements and their sensory consequences as perceived from visual input; in particular, a self-monitoring mechanism was implemented that allows the robot to distinguish between self-generated movements and those generated by external events. In the second, once this mapping has been learned, the self-monitoring mechanism is exploited to suppress the predicted visual consequences of intended movements. The speculation is that this process can allow for the development of a sense of object permanence. It will be shown that, using these predictions, the robot maintains an enhanced simulated image in which an object occluded by the movement of the robot's arm remains visible, due to sensory attenuation processes.
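A minimal sketch of the self-monitoring idea, under assumptions not taken from the talk (a linear visuomotor mapping and a fixed error threshold): learn a forward model of the visual consequences of motor commands, classify observations by prediction error, and subtract the prediction for sensory attenuation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: motor command u (2-D) causes visual flow v = A_true @ u + noise.
A_true = np.array([[1.5, 0.0], [0.2, 1.0]])

# --- Learn the forward model from self-generated movement (least squares) ---
U = rng.normal(size=(200, 2))
V = U @ A_true.T + 0.05 * rng.normal(size=(200, 2))
A_hat, *_ = np.linalg.lstsq(U, V, rcond=None)
A_hat = A_hat.T                                  # A_hat @ u predicts the flow v

def self_generated(u, v_observed, threshold=0.3):
    """Self-monitoring: small prediction error => movement was self-caused."""
    return np.linalg.norm(v_observed - A_hat @ u) < threshold

def attenuated(u, v_observed):
    """Sensory attenuation: suppress the predicted consequence of our own act."""
    return v_observed - A_hat @ u

u = np.array([0.5, -0.2])
print(self_generated(u, A_true @ u))                     # True: own movement
print(self_generated(u, A_true @ u + np.array([1, 0])))  # False: external event
```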

Organizers: Stefan Schaal, Lidia Pavel


  • Omur Arslan
  • N2.025 (AMD seminar room - 2nd floor)

In robotics, it is often practically and theoretically convenient to first design motion planners for approximate, simple robot and environment models, and then adapt such reference planners to more accurate, complex settings. In this talk, I will introduce a new approach that extends the applicability of motion planners from simple settings to more complex ones using reference governors. Reference governors are add-on control schemes for closed-loop dynamical systems that enforce constraint satisfaction while maintaining stability, and they offer a systematic way of separating the issues of stability and constraint enforcement. I will demonstrate example applications of reference governors to sensor-based navigation in environments cluttered with convex obstacles, and to smooth extensions of low-order (e.g., position- or velocity-controlled) feedback motion planners to high-order (e.g., force/torque-controlled) robot models, while retaining stability and collision-avoidance properties.
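A minimal scalar sketch of the reference-governor idea (the PD gains, simulation-based safety check, and line search over step sizes are assumptions, not the talk's construction): the governor advances the applied reference toward the goal only as far as the closed-loop response provably respects a position constraint:

```python
import numpy as np

dt, kp, kd = 0.02, 4.0, 3.0
x_max = 1.0                      # state constraint: position must stay below x_max

def step(state, g):
    """Stable low-level loop: PD-controlled double integrator tracking g."""
    x, v = state
    a = kp * (g - x) - kd * v
    return np.array([x + dt * v, v + dt * a])

def safe(state, g, horizon=400):
    """Check by forward simulation that holding reference g keeps x <= x_max."""
    s = state.copy()
    for _ in range(horizon):
        s = step(s, g)
        if s[0] > x_max:
            return False
    return True

def governor(state, g, goal):
    """Move the applied reference toward the goal as far as remains safe."""
    for kappa in (1.0, 0.5, 0.25, 0.1, 0.0):
        candidate = g + kappa * (goal - g)
        if safe(state, candidate):
            return candidate
    return g

state, g, goal = np.array([0.0, 0.0]), 0.0, 2.0   # raw goal would violate x_max
for _ in range(1000):
    g = governor(state, g, goal)
    state = step(state, g)
print(f"final position {state[0]:.3f} (constraint: x <= {x_max})")
```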

Organizers: Stefan Schaal, Lidia Pavel


  • Dr. Raj Madhavan
  • N2.025 (AMD seminar room - 2nd floor)

Many of the existing Robotics & Automation (R&A) technologies are at a sufficient level of maturity and are widely accepted by the academic (and to a lesser extent by the industrial) community after having undergone the scientific rigor and peer review that accompany such works. I believe that most of the past and current research and development efforts in robotics and automation have been squarely aimed at increasing the Standard of Living (SoL) in developed economies where housing, running water, transportation, schools, and access to healthcare, to name a few, are taken for granted. Humanitarian R&A, on the other hand, can be taken to mean technologies that can make a fundamental difference in people's lives by alleviating their suffering in times of need, such as during natural or man-made disasters, or in pockets of the population where the most basic needs of humanity are not met, thus improving their Quality of Life (QoL) and not just SoL. My current work focuses on the applied use of robotics and automation technologies for the benefit of under-served and under-developed communities by working closely with them to develop solutions that showcase the effectiveness of R&A solutions in domains that strike a chord with the beneficiaries. This is made possible by bringing together researchers, practitioners from industry, academia, local governments, and various entities such as the IEEE Robotics and Automation Society's Special Interest Group on Humanitarian Technology (RAS-SIGHT), NGOs, and NPOs across the globe. I will share some of my efforts and thoughts on challenges that need to be taken into consideration, including the sustainability of developed solutions. I will also outline my recent efforts in the technology and public policy domains, with emphasis on socio-economic, cultural, privacy, and security issues in developing and developed economies.

Organizers: Ludovic Righetti


  • Sylvain Calinon
  • N2.025

Human-centric robotic applications often require robots to learn new skills by interacting with end-users. From a machine learning perspective, the challenge is to acquire skills from only a few interactions, with strong generalization demands. This requires: 1) the development of intuitive active learning interfaces to acquire meaningful demonstrations; 2) the development of models that can exploit the structure and geometry of the acquired data in an efficient way; 3) the development of adaptive control techniques that can exploit the learned task variations and coordination patterns. The developed models often need to serve several purposes (recognition, prediction, online synthesis) and be compatible with different learning strategies (imitation, emulation, exploration). For the reproduction of skills, these models need to be enriched with force and impedance information to enable human-robot collaboration and to generate safe and natural movements. I will present an approach combining model predictive control and statistical learning of movement primitives in multiple coordinate systems. The proposed approach will be illustrated in various applications, with robots either close to us (a robot for dressing assistance), part of us (a prosthetic hand with EMG and tactile sensing), or far from us (teleoperation of a bimanual robot in deep water).
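One ingredient above, exploiting the structure of demonstrations expressed in multiple coordinate systems, can be sketched with a product of frame-wise Gaussians (a toy version in the spirit of task-parameterized models; all numbers are invented): each frame's local model is mapped into the world frame for the new situation, and the precision-weighted product gives the commanded distribution:

```python
import numpy as np

def gaussian_product(mus, sigmas):
    """Product of Gaussians: precision-weighted fusion of frame-wise predictions."""
    precisions = [np.linalg.inv(S) for S in sigmas]
    sigma = np.linalg.inv(sum(precisions))
    mu = sigma @ sum(P @ m for P, m in zip(precisions, mus))
    return mu, sigma

# Local model per frame: mean and covariance of the target point, expressed in
# that frame (learned from demonstrations in a full system; assumed here).
mu_local = [np.array([0.0, 0.1]), np.array([-0.1, 0.0])]
sig_local = [np.diag([0.01, 0.5]), np.diag([0.5, 0.01])]  # each sure in one axis

# New situation: pose (rotation R, origin o) of each frame in world coordinates.
frames = [(np.eye(2), np.array([1.0, 0.0])),
          (np.eye(2), np.array([0.0, 1.0]))]

mus_w = [R @ m + o for (R, o), m in zip(frames, mu_local)]
sigs_w = [R @ S @ R.T for (R, o), S in zip(frames, sig_local)]

mu, sigma = gaussian_product(mus_w, sigs_w)
print("fused target:", mu)   # trades off what each frame is confident about
```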

Organizers: Ludovic Righetti


Multi-contact locomotion control for legged robots

Talk
  • 25 April 2017 • 11:00—12:30
  • Dr. Andrea Del Prete
  • N2.025 (AMD seminar room - 2nd floor)

This talk will survey recent work to achieve multi-contact locomotion control of humanoid and legged robots. I will start by presenting some results on robust optimization-based control. We exploited robust optimization techniques, either stochastic or worst-case, to improve the robustness of Task-Space Inverse Dynamics (TSID), a well-known control framework for legged robots. We modeled uncertainties in the joint torques, and we immunized the constraints of the system against any realization of these uncertainties. We also applied the same methodology to ensure the balance of the robot despite bounded errors in its inertial parameters. Extensive simulations in a realistic environment show that the proposed robust controllers greatly outperform the classic one. Then I will present preliminary results on a new capturability criterion for legged robots in multi-contact. "N-step capturability" is the ability of a system to come to a stop by taking N or fewer steps. Simplified models to compute N-step capturability already exist and are widely used, but they are limited to locomotion on flat terrain. We propose a new efficient algorithm to compute 0-step capturability for a robot in arbitrary contact scenarios. Finally, I will present our recent efforts to transfer the above-mentioned techniques to the real humanoid robot HRP-2, on which we recently implemented joint torque control.
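As a rough illustration of a 0-step feasibility test (not the speaker's algorithm), the sketch below checks static equilibrium with the CoM at rest: do contact forces inside linearized friction cones exist that balance gravity about the CoM? The contact layout, mass, and friction coefficient are assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def zero_step_feasible(contacts, com, mass=30.0, mu=0.5, g=9.81):
    """Static-equilibrium feasibility (a crude stand-in for 0-step
    capturability with the CoM at rest) as a linear program."""
    gens = []
    for p in contacts:
        # 4-edge linearization of the friction cone at this contact (normal = +z).
        for e in ([mu, 0, 1], [-mu, 0, 1], [0, mu, 1], [0, -mu, 1]):
            e = np.asarray(e, dtype=float)
            gens.append(np.hstack([e, np.cross(p - com, e)]))  # 6-D contact wrench
    A_eq = np.array(gens).T                      # 6 x (4 * n_contacts)
    b_eq = np.array([0, 0, mass * g, 0, 0, 0])   # support weight, zero CoM moment
    res = linprog(c=np.zeros(A_eq.shape[1]), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method='highs')
    return res.success

foot = np.array([[0.1, 0.05, 0], [0.1, -0.05, 0],
                 [-0.1, 0.05, 0], [-0.1, -0.05, 0]])
print(zero_step_feasible(foot, com=np.array([0.0, 0.0, 0.8])))  # True: CoM over foot
print(zero_step_feasible(foot, com=np.array([0.5, 0.0, 0.8])))  # False: far outside
```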

Organizers: Ludovic Righetti


  • Todor Stoyanov and Robert Krug
  • AMD Seminar Room (Paul-Ehrlich-Str. 15, 1st floor)

In this talk we will give an overview of research efforts in autonomous manipulation at the AASS Research Center, Örebro University, Sweden. We intend to give a holistic view of the historically separated subjects of robot motion planning and control. In particular, viewing motion behavior generation as an optimal control problem allows for a unified formulation that is uncluttered by a priori domain assumptions and simplified solution strategies. We will also discuss the problems of workspace modeling and perception, and how to integrate them into the overarching problem of autonomous manipulation.
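A toy version of the motion-generation-as-optimal-control view (single shooting on a point mass; not the AASS formulation): the decision variables are the control sequence, and one cost couples reaching the goal with control effort, so "planning" and "control" share a single problem statement:

```python
import numpy as np
from scipy.optimize import minimize

dt, T = 0.1, 30
goal = np.array([1.0, 0.0])          # reach position 1 with zero velocity

def rollout(u):
    """Point-mass dynamics x'' = u, integrated forward (single shooting)."""
    x = np.zeros(2)                  # state: [position, velocity]
    for u_t in u:
        x = x + dt * np.array([x[1], u_t])
    return x

def cost(u):
    x_T = rollout(u)
    # One objective couples "planning" (reach the goal) and "control" (low effort).
    return 1e-3 * dt * np.sum(u ** 2) + 100.0 * np.sum((x_T - goal) ** 2)

res = minimize(cost, np.zeros(T), method='L-BFGS-B')
print("terminal state:", rollout(res.x), "| effort:", np.sum(res.x ** 2))
```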

Organizers: Ludovic Righetti