How can we enable robots to learn representations of the world that allow for robust performance and control? My work focuses on robot learning: flexible task representations learned from variable, imperfect human demonstrations, and high-dimensional sensory learning using distribution-based methods and information-theoretic measures.
Education
M.S. in Mechanical Engineering from Northwestern University, 2016
B.S. in Mechanical Engineering from California Institute of Technology, 2013
Awards and Honors
NSF Science Nation Feature, April 2017.
“Engineering highly adaptable robots requires new tools for new rules”
3rd Place IEEE Control Systems Society Video Competition, June 2017.
“Autonomous Robot Drawing: From Distributions to Actions Using Feedback”
Teaching
TA for Machine Dynamics (ME 314), Fall 2014
Publications
Ergodic exploration using binary sensing for non-parametric shape estimation
I. Abraham, A. Prabhakar, M. Hartmann, and T. D. Murphey
IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 827–834, 2017. PDF, Video
Autonomous visual rendering using physical motion
A. Prabhakar, A. Mavrommati, J. Schultz, and T. D. Murphey
Workshop on the Algorithmic Foundations of Robotics (WAFR), 2016. PDF, Video 1, Video 2, Video 3
Ergodic exploration with stochastic sensor dynamics
G. De La Torre, K. Flaßkamp, A. Prabhakar, and T. D. Murphey
American Control Conf. (ACC), pp. 2971–2976, 2016. PDF
Symplectic integration for optimal ergodic control
A. Prabhakar, K. Flaßkamp, and T. D. Murphey
IEEE Int. Conf. on Decision and Control (CDC), pp. 2594–2600, 2015. PDF