Kyriakos G. Vamvoudakis was born in Athens, Greece. He received the Diploma (a 5-year degree, equivalent to a Master of Science) in Electronic and Computer Engineering from the Technical University of Crete, Greece, in 2006 with highest honors. After moving to the United States, he studied at The University of Texas at Arlington with Frank L. Lewis as his advisor, and he received his M.S. and Ph.D. in Electrical Engineering in 2008 and 2011, respectively. From May 2011 to January 2012, he worked as an Adjunct Professor and Faculty Research Associate at The University of Texas at Arlington and at the Automation and Robotics Research Institute. From 2012 to 2016, he was a Project Research Scientist at the Center for Control, Dynamical Systems, and Computation at the University of California, Santa Barbara. He then served as an Assistant Professor in the Kevin T. Crofton Department of Aerospace and Ocean Engineering at Virginia Tech until 2018.
Dr. Vamvoudakis currently serves as an Assistant Professor in The Daniel Guggenheim School of Aerospace Engineering at Georgia Tech, with a secondary appointment in the School of Electrical and Computer Engineering. His research interests include reinforcement learning, control theory, cyber-physical security, bounded rationality, and safe/assured autonomy.

Dr. Vamvoudakis is the recipient of a 2019 ARO YIP award, a 2018 NSF CAREER award, and several international awards, including the 2016 International Neural Network Society (INNS) Young Investigator Award, the Best Paper Award for Autonomous/Unmanned Vehicles at the 27th Army Science Conference in 2010, the Best Presentation Award at the World Congress of Computational Intelligence in 2010, and the Best Researcher Award from the Automation and Robotics Research Institute in 2011. He is a member of the Tau Beta Pi, Eta Kappa Nu, and Golden Key honor societies and is listed in Who's Who in the World, Who's Who in Science and Engineering, and Who's Who in America.

He has served on various international program committees and has organized special sessions, workshops, and tutorials for several international conferences. He is currently a member of the Technical Committee on Intelligent Control of the IEEE Control Systems Society, a member of the Technical Committee on Adaptive Dynamic Programming and Reinforcement Learning of the IEEE Computational Intelligence Society, and a member of the IEEE Control Systems Society Conference Editorial Board. He serves as an Associate Editor of Automatica; IEEE Computational Intelligence Magazine; IEEE Transactions on Systems, Man, and Cybernetics: Systems; Neurocomputing; Journal of Optimization Theory and Applications; IEEE Control Systems Letters; and Frontiers in Control Engineering (Adaptive, Robust and Fault Tolerant Control). He is also a registered Electrical/Computer Engineer (PE), a member of the Technical Chamber of Greece, and a Senior Member of both IEEE and AIAA.
Contact Information for Dr. Vamvoudakis:
The Daniel Guggenheim School of Aerospace Engineering
Montgomery Knight Building, Office 415-B
Georgia Institute of Technology
270 Ferst Drive, NW
Atlanta, GA 30332-0150, USA
E-mail: kyriakos at gatech.edu
Telephone: +1 (404) 385-3342
Fax: +1 (404) 894-2760
- Control Theory
- Reinforcement Learning
- Game Theory
- Cyber-physical Systems
- ARO YIP Award, 2019
- NSF CAREER Award, 2018
- International Neural Network Society (INNS) Young Investigator Award, 2016
- Best Paper Award for Autonomous/Unmanned Vehicles, 27th Army Science Conference, Orlando, December 2010
A. Kanellopoulos, K. G. Vamvoudakis, “A Moving Target Defense Control Framework for Cyber-Physical Systems,” IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 1029-1043, 2020.
A. Kanellopoulos, K. G. Vamvoudakis, “Non-Equilibrium Dynamic Games and Cyber-Physical Security: A Cognitive Hierarchy Approach,” Systems and Control Letters, vol. 125, pp. 59-66, 2019.
B. Kiumarsi, K. G. Vamvoudakis, H. Modares, F. L. Lewis, “Optimal and Autonomous Control Using Reinforcement Learning: A Survey,” IEEE Transactions on Neural Networks and Learning Systems (Special Issue on Deep Reinforcement Learning and Adaptive Dynamic Programming), vol. 29, no. 6, pp. 2042-2062, 2018.
K. G. Vamvoudakis, “Q-learning for Continuous-Time Linear Systems: A Model Free Infinite Horizon Optimal Control Approach,” Systems and Control Letters, vol. 100, pp. 14-20, 2017.
K. G. Vamvoudakis, “Non-Zero Sum Nash Q-learning for Unknown Deterministic Continuous-Time Linear Systems,” Automatica, vol. 61, pp. 274-281, 2015.
K. G. Vamvoudakis, J. P. Hespanha, B. Sinopoli, Y. Mo, “Detection in Adversarial Environments,” IEEE Transactions on Automatic Control (Special Issue on Control of Cyber-Physical Systems), vol. 59, no. 12, pp. 3209-3223, 2014.
F. L. Lewis, D. Vrabie, K. G. Vamvoudakis, “Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers,” IEEE Control Systems Magazine, vol. 32, no. 6, pp. 76-105, 2012.
D. Vrabie, K. G. Vamvoudakis, F. L. Lewis, Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles, Control Engineering Series, IET Press, 2012.
K. G. Vamvoudakis, F. L. Lewis, G. R. Hudas, “Multi-Agent Differential Graphical Games: Online Adaptive Learning Solution for Synchronization with Optimality,” Automatica, vol. 48, no. 8, pp. 1598-1611, 2012.
K. G. Vamvoudakis, F. L. Lewis, “Online Actor-Critic Algorithm to Solve the Continuous-Time Infinite Horizon Optimal Control Problem,” Automatica, vol. 46, no. 5, pp. 878-888, 2010.
Last revised June 30, 2020