Areas of expertise
- Aeronautical Systems
- Autonomous Systems
- Instrumentation, Sensors and Measurement Science
- Mechatronics & Advanced Controls
- Systems Engineering
Adolfo Perrusquía is an expert in reinforcement learning, especially for the control of dynamical systems (e.g., robotics, autonomous vehicles). In particular, his expertise lies in combining classical nonlinear control theory with recent data-driven learning methods.
He holds M.Sc. and PhD degrees in automatic control from CINVESTAV-IPN (ranked second among research institutions in Latin America) and a B.Eng. degree in Mechatronic Engineering from the IPN (ranked fourth among universities in Mexico). He has published extensively on artificial intelligence techniques applied to dynamical systems. He was awarded third place for the best PhD thesis in Artificial Intelligence 2021 by the Mexican Society of Artificial Intelligence.
He has been appointed Chair of the Task Force on Reinforcement Learning for Robots in the IEEE Computational Intelligence Society. Adolfo Perrusquía joined Cranfield in 2021 as a Research Fellow, and was awarded a UK-IC Postdoctoral Fellowship by the Royal Academy of Engineering in 2021. He is a member of the Human Machine Intelligence Research Group led by Prof. Weisi Guo.
Adolfo Perrusquía is a Research Fellow in Reinforcement Learning for Engineering in the School of Aerospace, Transport and Manufacturing (SATM) and a UK-IC Postdoctoral Research Fellow.
His expertise spans the theory and applications of both control and artificial intelligence. He is particularly interested in system identification, nonlinear control (including adaptive and robust control), robotics, deep learning and, especially, reinforcement learning applications. Since January 2021, he has taught modules on the M.Sc. in Applied Artificial Intelligence.
Department for Transport
Articles In Journals
- Perrusquia A & Guo W (2023) A closed-loop output error approach for physics-informed trajectory inference using online data, IEEE Transactions on Cybernetics, 53 (3) 1379-1391.
- Perrusquia A & Guo W (2023) Physics informed trajectory inference of a class of nonlinear systems using a closed-loop output error technique, IEEE Transactions on Systems, Man, and Cybernetics: Systems, Available online 10 August 2023.
- Guo W, Wei Z, Gonzalez O, Perrusquía A & Tsourdos A (2023) Control layer security: a new security paradigm for cooperative autonomous systems, IEEE Vehicular Technology Magazine, Available online 21 July 2023.
- Perrusquia A & Guo W (2023) Hippocampus experience inference for safety critical control of unknown multi-agent linear systems, ISA Transactions, 137 (June) 646-655.
- Perrusquia A & Guo W (2023) Optimal control of nonlinear systems using experience inference human-behavior learning, IEEE CAA Journal of Automatica Sinica, 10 (1) 1-13.
- Perrusquia A & Guo W (2023) Reward inference of discrete-time expert’s controllers: a complementary learning approach, Information Sciences, 631 (June) 396-411.
- Perrusquia A & Guo W (2023) Closed-loop output error approaches for drone’s physics informed trajectory inference, IEEE Transactions on Automatic Control, Available online 22 February 2023.
- Perrusquia A, Garrido R & Yu W (2022) Stable robot manipulator parameter identification: a closed-loop input error approach, Automatica, 141 (July) Article No. 110294.
- Perrusquía A (2022) Robust state/output feedback linearization of direct drive robot manipulators: a controllability and observability analysis, European Journal of Control, 64 (March) Article No. 100612.
- Perrusquia A (2022) A complementary learning approach for expertise transference of human-optimized controllers, Neural Networks, 145 (January) 33-41.
- Ramirez J, Yu W & Perrusquia A (2022) Model-free reinforcement learning from expert demonstrations: a survey, Artificial Intelligence Review, 55 (4) 3212-3241.
- Perrusquia A (2022) Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning, Information Sciences, 595 (May) 364-377.
- Perrusquía A (2022) Human-behavior learning: a new complementary learning perspective for optimal decision making controllers, Neurocomputing, 489 (June) 157-166.
- Perrusquía A, Flores-Campos JA & Yu W (2022) Optimal sliding mode control for cutting tasks of quick-return mechanisms, ISA Transactions, 122 (March) 88-95.
- Flores-Campos JA, Perrusquía A, Gómez LH, González N & Armenta-Molina A (2021) Constant speed control of slider-crank mechanisms: a joint-task space hybrid control approach, IEEE Access, 9 65676-65687.
Conference Papers
- Kacker T, Perrusquia A & Guo W (2023) Multi-spectral fusion using generative adversarial networks for UAV detection of wild fires. In: 2023 5th International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Bali, 20-23 February 2023.
- Mendoza J, Perrusquía A & Flores-Campos JA (2022) Mechanical advantage assurance control of quick-return mechanisms in task space. In: 2022 19th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, 9-11 November 2022.
- Perrusquia A & Guo W (2022) Performance objective extraction of optimal controllers: a hippocampal learning approach. In: 2022 IEEE 18th International Conference on Automation Science and Engineering, Mexico City, 20-24 August 2022.
- Perrusquia A & Guo W (2022) Cost inference of discrete-time linear quadratic control policies using human-behaviour learning. In: CODiT 2022: 8th International Conference on Control, Decision and Information Technologies, Istanbul, Turkey, 17-20 May 2022.
- Perrusquía A & Yu W (2022) Human-behavior learning for infinite-horizon optimal tracking problems of robot manipulators. In: 2021 60th IEEE Conference on Decision and Control (CDC), Austin, Texas, 14-17 December 2021.
- Perrusquía A, Garrido R & Yu W (2021) An input error method for parameter identification of a class of Euler-Lagrange systems. In: 18th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, 10-12 November 2021.