Areas of expertise
- Aeronautical Systems
- Autonomous Systems
- Instrumentation, Sensors and Measurement Science
- Mechatronics & Advanced Controls
- Systems Engineering
Background
Adolfo Perrusquía is an expert in reinforcement learning, especially for the control of dynamical systems (e.g. robotics, autonomous vehicles). In particular, his expertise lies in combining classical nonlinear control theory with recent data-driven learning methods.
He holds MSc and PhD degrees in automatic control from CINVESTAV-IPN (ranked second among research institutions in Latin America) and a BEng degree in Mechatronic Engineering from the IPN (ranked fourth among universities in Mexico). He has published extensively on artificial intelligence techniques applied to dynamical systems. He was awarded third place for the best PhD thesis in Artificial Intelligence 2021 by the Mexican Society of Artificial Intelligence.
Adolfo Perrusquía joined Cranfield in 2021, the same year in which he was awarded a UK-IC Postdoctoral Research Fellowship by the Royal Academy of Engineering. He has been appointed Chair of the Task Force on Reinforcement Learning for Robots of the IEEE Computational Intelligence Society and is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems. He is a member of the Human Machine Intelligence Research Group led by Professor Weisi Guo.
Research opportunities
- Reinforcement Learning
- Inverse Reinforcement Learning
- System Identification
- Machine Learning
- Deep Learning
- Neural Networks
- Linear and Nonlinear Control
- Robotics
Current activities
Adolfo Perrusquía is a Lecturer in Reinforcement Learning for Engineering and a former UK-IC Postdoctoral Research Fellow.
His expertise spans the theory and applications of both control and artificial intelligence. He is particularly interested in system identification, nonlinear control (including adaptive and robust control), robotics, deep learning and, especially, reinforcement learning applications. Since January 2021 he has taught modules on the MSc in Applied Artificial Intelligence.
Clients
- Thales SA
- Department for Transport
- National Police Chiefs' Council
- Saab UK Ltd (BlueBear)
Publications
Articles in journals
- Perrusquía A & Guo W. (2025). Drone’s objective inference using policy error inverse reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 36(1)
- Sonntag V, Perrusquía A, Tsourdos A & Guo W. (2025). A COLREGs compliance reinforcement learning approach for USV manoeuvring in track-following and collision avoidance problems. Ocean Engineering, 316
- Perrusquía A & Guo W. (2025). Uncovering reward goals in distributed drone swarms using physics-informed multiagent inverse reinforcement learning. IEEE Transactions on Cybernetics, 55(1)
- Guo W, Wei Z, González-Villarreal OJ, Perrusquía A & Tsourdos A. (2024). Control layer security: a new security paradigm for cooperative autonomous systems. IEEE Vehicular Technology Magazine, 19(1)
- Deep A, Perrusquía A, Aljaburi L, Al-Rubaye S & Guo W. (2024). A novel distributed authentication of blockchain technology integration in IoT services. IEEE Access, 12
- Perrusquía A & Guo W. (2024). Trajectory inference of unknown linear systems based on partial states measurements. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 54(4)
- Perrusquía A, Guo W, Fraser B & Wei Z. (2024). Uncovering drone intentions using control physics informed machine learning. Communications Engineering, 3(1)
- El Debeiki M, Al-Rubaye S, Perrusquía A, Conrad C & Flores Campos JA. (2024). An advanced path planning and UAV relay system: enhancing connectivity in rural environments. Future Internet, 16(3)
- Perrusquía A & Guo W. (2024). Reservoir computing for drone trajectory intent prediction: a physics informed approach. IEEE Transactions on Cybernetics, 54(9)
- Kumar A, Perrusquía A, Al-Rubaye S & Guo W. (2024). Wildfire and smoke early detection for drone applications: a light-weight deep learning approach. Engineering Applications of Artificial Intelligence, 136
- Perrusquía A, Zou M & Guo W. (2024). Explainable data-driven Q-learning control for a class of discrete-time linear autonomous systems. Information Sciences, 682
- Bildik E, Tsourdos A, Perrusquía A & Inalhan G. (2024). Decoys deployment for missile interception: a multi-agent reinforcement learning approach. Aerospace, 11(8)
- Flores-Campos JA, Torres-San-Miguel CR, Paredes-Rojas JC & Perrusquía A. (2024). Prescribed time interception of moving objects’ trajectories using robot manipulators. Robotics, 13(10)
- Ali AM, Perrusquía A, Guo W & Tsourdos A. (2024). Flight plan optimisation of unmanned aerial vehicles with minimised radar observability using action shaping proximal policy optimisation. Drones, 8(10)
- Perrusquía A, Wei Z & Guo W. (2024). Trajectory intent prediction of autonomous systems using dynamic mode decomposition. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 54(12)
- Mugabe J, Wisniewski M, Perrusquía A & Guo W. (2024). Enhancing situational awareness of helicopter pilots in unmanned aerial vehicle-congested environments using an airborne visual artificial intelligence approach. Sensors, 24(23)
- Perrusquía A & Guo W. (2023). Closed-loop output error approaches for drone’s physics informed trajectory inference. IEEE Transactions on Automatic Control, 68(12)
- Perrusquía A & Guo W. (2023). Reward inference of discrete-time expert's controllers: A complementary learning approach. Information Sciences, 631(June)
- Perrusquía A & Guo W. (2023). Physics informed trajectory inference of a class of nonlinear systems using a closed-loop output error technique. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 53(12)
- Perrusquía A & Guo W. (2023). Optimal control of nonlinear systems using experience inference human-behavior learning. IEEE/CAA Journal of Automatica Sinica, 10(1)
- Perrusquía A. (2022). Robust state/output feedback linearization of direct drive robot manipulators: a controllability and observability analysis. European Journal of Control, 64(March)
- Perrusquía A. (2022). Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning. Information Sciences, 595(May)
- Perrusquía A. (2022). Human-behavior learning: a new complementary learning perspective for optimal decision making controllers. Neurocomputing, 489(June)
- Perrusquía A, Garrido R & Yu W. (2022). Stable robot manipulator parameter identification: a closed-loop input error approach. Automatica, 141(July)
- Perrusquía A & Guo W. (2022). A closed-loop output error approach for physics-informed trajectory inference using online data. IEEE Transactions on Cybernetics, 53(3)
- Perrusquía A & Guo W. (2022). Hippocampus experience inference for safety critical control of unknown multi-agent linear systems. ISA Transactions, 137(June)
- Perrusquía A & Yu W. (2022). Neural H₂ control using continuous-time reinforcement learning. IEEE Transactions on Cybernetics, 52(6)
- Perrusquía A, Flores-Campos JA & Yu W. (2021). Optimal sliding mode control for cutting tasks of quick-return mechanisms. ISA Transactions, 122(March)
- Flores-Campos JA, Perrusquía A, Hernández-Gómez LH, González N & Armenta-Molina A. (2021). Constant speed control of slider-crank mechanisms: a joint-task space hybrid control approach. IEEE Access, 9
- Ramírez J, Yu W & Perrusquía A. (2021). Model-free reinforcement learning from expert demonstrations: a survey. Artificial Intelligence Review, 55(4)
- Perrusquía A. (2021). A complementary learning approach for expertise transference of human-optimized controllers. Neural Networks, 145(January)
- Perrusquía A & Yu W. (2021). Discrete-time H₂ neural control using reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 32(11)
- Perrusquía A, Yu W & Li X. (2021). Nonlinear control using human behavior learning. Information Sciences, 569
- Perrusquía A & Yu W. (2021). Identification and optimal control of nonlinear systems using recurrent neural networks and reinforcement learning: An overview. Neurocomputing, 438
- Perrusquía A & Yu W. (2021). Continuous-time reinforcement learning for robust control under worst-case uncertainty. International Journal of Systems Science, 52(4)
- Perrusquía A, Yu W & Li X. (2021). Multi-agent reinforcement learning for redundant robot control in task-space. International Journal of Machine Learning and Cybernetics, 12(1)
- Perrusquía A, Flores-Campos JA, Torres-Sanmiguel CR & Gonzalez N. (2020). Task space position control of slider-crank mechanisms using simple tuning techniques without linearization methods. IEEE Access, 8
- Perrusquía A, Flores-Campos JA & Torres-San-Miguel CR. (2020). A novel tuning method of PD with gravity compensation controller for robot manipulators. IEEE Access, 8
- Perrusquía A & Yu W. (2020). Robot position/force control in unknown environment using hybrid reinforcement learning. Cybernetics and Systems, 51(4)
- Perrusquía A & Yu W. (2020). Robust control under worst‐case uncertainty for unknown nonlinear systems using modified reinforcement learning. International Journal of Robust and Nonlinear Control, 30(7)
- Yu W & Perrusquía A. (2020). Simplified stable admittance control using end-effector orientations. International Journal of Social Robotics, 12(5)
- Perrusquía A & Yu W. (2020). Human-in-the-loop control using Euler angles. Journal of Intelligent & Robotic Systems, 97(2)
- Flores Campos JA & Perrusquía A. (2019). Slider position control for slider-crank mechanisms with Jacobian compensator. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 233(10)
- Perrusquía A, Yu W & Soria A. (2019). Position/force control of robot manipulators using reinforcement learning. Industrial Robot: the international journal of robotics research and application, 46(2)
- Perrusquía A, Yu W, Soria A & Lozano R. (2017). Stable admittance control without inverse kinematics. IFAC-PapersOnLine, 50(1)
- Xu J, Panagopoulos D, Perrusquía A, Guo W & Tsourdos A. (2025). Generalising rescue operations in disaster scenarios using drones: a lifelong reinforcement learning approach. Drones, 9(6)