Robotics manipulation, learning-based control, and human-robot interaction
Developed a visuomotor policy combining deep neural networks and dynamic movement primitives for planar manipulation tasks. The model was trained end-to-end from human demonstrations in cluttered scenes, mapping RGB images to the parameters of a dynamical system that generates the robot trajectory. Tested on complex real-world planar tasks (stem unveiling and grasping in clutter).
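As a minimal illustration of the trajectory-generation side, the sketch below rolls out a single-DOF discrete dynamic movement primitive whose forcing-term weights `w` would, in this setup, be the network's output. All function names, gains, and the RBF-width heuristic are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def dmp_rollout(y0, goal, w, alpha=25.0, beta=6.25, alpha_x=4.0, dt=0.01, T=1.0):
    """Roll out a single-DOF discrete dynamic movement primitive.

    Transformation system: ydd = alpha*(beta*(goal - y) - yd) + f(x),
    with a phase variable x decaying from 1 to 0 and a forcing term f
    built from radial basis functions weighted by w (here, w stands in
    for the parameters predicted by the learned policy).
    """
    n_steps = int(T / dt)
    centers = np.exp(-alpha_x * np.linspace(0.0, T, len(w)))  # RBF centers in phase space
    widths = len(w) ** 1.5 / centers                          # common width heuristic
    y, yd, x = float(y0), 0.0, 1.0
    traj = []
    for _ in range(n_steps):
        psi = np.exp(-widths * (x - centers) ** 2)            # RBF activations
        f = x * (goal - y0) * (psi @ w) / (psi.sum() + 1e-8)  # forcing term
        ydd = alpha * (beta * (goal - y) - yd) + f            # transformation system
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x * dt                                # canonical system
        traj.append(y)
    return np.array(traj)
```

With zero weights the forcing term vanishes and the rollout reduces to a critically damped spring toward the goal, which is what makes the DMP converge regardless of the learned shape parameters.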
Developed an MPC-like optimization framework for enforcing kinematic constraints in real time on nonlinear dynamical systems learned from human demonstrations. Demonstrated superior performance compared to repulsive potential fields and classic MPC approaches. Validated on handover and pick-and-place tasks under position, velocity, acceleration, and obstacle-avoidance constraints.
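To give a flavor of constraining a learned dynamical system online, here is a much-simplified one-step filter: it clips the nominal DS velocity to box limits and removes any component driving the state into a spherical obstacle. This is a hypothetical stand-in for the receding-horizon scheme, not the framework itself; all names and parameters are assumptions.

```python
import numpy as np

def constrain_velocity(x, v_nom, v_max, obstacle, r_safe):
    """One-step constraint filter on a learned dynamical system (DS).

    v_nom is the velocity commanded by the learned DS at state x.
    (1) Clip to symmetric velocity limits (box constraint).
    (2) Inside the safety radius r_safe of a spherical obstacle, strip
        the velocity component pointing toward the obstacle center.
    """
    v = np.clip(v_nom, -v_max, v_max)        # velocity limits
    d = x - obstacle                         # vector away from obstacle center
    dist = np.linalg.norm(d)
    if 1e-9 < dist < r_safe:
        n = d / dist                         # outward unit normal
        radial = n @ v
        if radial < 0.0:                     # moving toward the obstacle
            v = v - radial * n               # keep only the tangential part
    return v
```

Unlike a true MPC, this filter looks only one step ahead, but it shows the core idea: the learned motion is followed exactly whenever it is feasible and minimally corrected otherwise.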
Developed a novel framework that encodes human demonstrations with Gaussian Mixture Models (GMMs) while satisfying conditions that ensure global asymptotic stability at the target. An object load-transfer strategy based on haptic cues is also integrated to ensure stable grasp release. The strategy was validated through both V-REP simulations and real-world experiments involving various objects and human receivers.
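The stability idea can be sketched with a mixture of linear systems, x_dot = sum_k h_k(x) A_k (x - x*), which is globally asymptotically stable at the target when every A_k + A_k^T is negative definite (a SEDS-like sufficient condition). The snippet below is a toy illustration under that assumption, not the full GMM/GMR pipeline of the project.

```python
import numpy as np

def stable_ds_step(x, target, A_list, weights_fn, dt=0.01):
    """One Euler step of x_dot = sum_k h_k(x) A_k (x - target).

    h_k(x) are nonnegative mixture responsibilities summing to 1 (in the
    full method they come from the GMM); stability holds as long as each
    A_k + A_k^T is negative definite.
    """
    h = weights_fn(x)                               # mixture responsibilities
    xdot = sum(hk * Ak @ (x - target) for hk, Ak in zip(h, A_list))
    return x + dt * xdot

# Toy check: two components with negative-definite symmetric parts,
# uniform responsibilities; the state should converge to the target.
A_list = [np.array([[-1.0, 0.5], [-0.5, -1.0]]), -2.0 * np.eye(2)]
target = np.zeros(2)
x = np.array([1.0, -1.0])
for _ in range(2000):
    x = stable_ds_step(x, target, A_list, lambda _: [0.5, 0.5])
```

Because the convex combination of matrices with negative-definite symmetric parts keeps that property, V(x) = ||x - x*||^2 decreases along trajectories for any state-dependent weighting, which is exactly why the conditions can be imposed per component.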