Dr Adolfo Perrusquia Guzman
Areas of expertise
- Aeronautical Systems
- Autonomous Systems
- Instrumentation, Sensors and Measurement Science
- Mechatronics & Advanced Controls
- Systems Engineering
Background
Adolfo Perrusquía is an expert in reinforcement learning, especially for the control of dynamical systems (e.g. robotics and autonomous vehicles). His particular expertise lies in combining classical nonlinear control theory with recent data-driven learning methods.
He holds MSc and PhD degrees in automatic control from CINVESTAV-IPN (the second-ranked research institution in Latin America) and a BEng in Mechatronic Engineering from the IPN (the fourth-ranked university in Mexico). He has published extensively on artificial intelligence techniques applied to dynamical systems. The Mexican Society of Artificial Intelligence awarded him third place for the best PhD thesis in Artificial Intelligence in 2021.
Adolfo Perrusquía joined Cranfield in 2021. He has been appointed Chair of the Task Force on Reinforcement Learning for Robots in the IEEE Computational Intelligence Society and is an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems. In 2021, he was awarded a UK-IC Postdoctoral Fellowship by the Royal Academy of Engineering. He is a member of the Human Machine Intelligence Research Group led by Professor Weisi Guo.
Research opportunities
- Reinforcement Learning
- Inverse Reinforcement Learning
- System Identification
- Machine Learning
- Deep Learning
- Neural Networks
- Linear and Nonlinear Control
- Robotics
Current activities
Adolfo Perrusquía is a Lecturer in Reinforcement Learning for Engineering and a former UK-IC Postdoctoral Research Fellow.
His expertise spans the theory and applications of both control and artificial intelligence, with particular interests in system identification, nonlinear control (including adaptive and robust control), robotics, deep learning and, especially, reinforcement learning applications. Since January 2021, he has been teaching modules of the MSc in Applied Artificial Intelligence.
Clients
- Thales SA
- Department for Transport
- National Police Chiefs' Council
- Saab UK Ltd (BlueBear)
Publications
Articles in Journals
- Perrusquía A & Guo W. (2025). Drone’s Objective Inference Using Policy Error Inverse Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 36(1)
- Sonntag V, Perrusquía A, Tsourdos A & Guo W. (2025). A COLREGs compliance reinforcement learning approach for USV manoeuvring in track-following and collision avoidance problems. Ocean Engineering, 316
- Perrusquía A & Guo W. (2025). Uncovering Reward Goals in Distributed Drone Swarms Using Physics-Informed Multiagent Inverse Reinforcement Learning. IEEE Transactions on Cybernetics, 55(1)
- Guo W, Wei Z, Gonzalez O, Perrusquía A & Tsourdos A. (2024). Control Layer Security: A New Security Paradigm for Cooperative Autonomous Systems. IEEE Vehicular Technology Magazine, 19(1)
- Deep A, Perrusquía A, Aljaburi L, Al-Rubaye S & Guo W. (2024). A Novel Distributed Authentication of Blockchain Technology Integration in IoT Services. IEEE Access, 12
- Perrusquía A & Guo W. (2024). Trajectory Inference of Unknown Linear Systems Based on Partial States Measurements. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 54(4)
- Perrusquía A, Guo W, Fraser B & Wei Z. (2024). Uncovering drone intentions using control physics informed machine learning. Communications Engineering, 3(1)
- El Debeiki M, Al-Rubaye S, Perrusquía A, Conrad C & Flores-Campos JA. (2024). An Advanced Path Planning and UAV Relay System: Enhancing Connectivity in Rural Environments. Future Internet, 16(3)
- Perrusquía A & Guo W. (2024). Reservoir Computing for Drone Trajectory Intent Prediction: A Physics Informed Approach. IEEE Transactions on Cybernetics, 54(9)
- Kumar A, Perrusquía A, Al-Rubaye S & Guo W. (2024). Wildfire and smoke early detection for drone applications: A light-weight deep learning approach. Engineering Applications of Artificial Intelligence, 136
- Perrusquía A, Zou M & Guo W. (2024). Explainable data-driven Q-learning control for a class of discrete-time linear autonomous systems. Information Sciences, 682
- Bildik E, Tsourdos A, Perrusquía A & Inalhan G. (2024). Decoys Deployment for Missile Interception: A Multi-Agent Reinforcement Learning Approach. Aerospace, 11(8)
- Flores-Campos JA, Torres-San-Miguel CR, Paredes-Rojas JC & Perrusquía A. (2024). Prescribed Time Interception of Moving Objects’ Trajectories Using Robot Manipulators. Robotics, 13(10)
- Ali AM, Perrusquía A, Guo W & Tsourdos A. (2024). Flight Plan Optimisation of Unmanned Aerial Vehicles with Minimised Radar Observability Using Action Shaping Proximal Policy Optimisation. Drones, 8(10)
- Perrusquía A, Wei Z & Guo W. (2024). Trajectory Intent Prediction of Autonomous Systems Using Dynamic Mode Decomposition. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 54(12)
- Mugabe J, Wisniewski M, Perrusquía A & Guo W. (2024). Enhancing Situational Awareness of Helicopter Pilots in Unmanned Aerial Vehicle-Congested Environments Using an Airborne Visual Artificial Intelligence Approach. Sensors, 24(23)
- Perrusquia A & Guo W. (2023). A Closed-Loop Output Error Approach for Physics-Informed Trajectory Inference Using Online Data. IEEE Transactions on Cybernetics, 53(3)
- Perrusquía A & Guo W. (2023). Hippocampus experience inference for safety critical control of unknown multi-agent linear systems. ISA Transactions, 137(June)
- Perrusquía A & Guo W. (2023). Closed-Loop Output Error Approaches for Drone's Physics Informed Trajectory Inference. IEEE Transactions on Automatic Control, 68(12)
- Perrusquía A & Guo W. (2023). Reward inference of discrete-time expert's controllers: A complementary learning approach. Information Sciences, 631(June)
- Perrusquía A & Guo W. (2023). Physics Informed Trajectory Inference of a Class of Nonlinear Systems Using a Closed-Loop Output Error Technique. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 53(12)
- Perrusquia A & Guo W. (2023). Optimal Control of Nonlinear Systems Using Experience Inference Human-Behavior Learning. IEEE/CAA Journal of Automatica Sinica, 10(1)
- Perrusquía A, Flores-Campos JA & Yu W. (2022). Optimal sliding mode control for cutting tasks of quick-return mechanisms. ISA Transactions, 122(March)
- Ramírez J, Yu W & Perrusquía A. (2022). Model-free reinforcement learning from expert demonstrations: a survey. Artificial Intelligence Review, 55(4)
- Perrusquía A. (2022). A complementary learning approach for expertise transference of human-optimized controllers. Neural Networks, 145(January)
- Perrusquía A. (2022). Robust state/output feedback linearization of direct drive robot manipulators: A controllability and observability analysis. European Journal of Control, 64(March)
- Perrusquía A. (2022). Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning. Information Sciences, 595(May)
- Perrusquía A. (2022). Human-behavior learning: A new complementary learning perspective for optimal decision making controllers. Neurocomputing, 489(June)
- Perrusquía A, Garrido R & Yu W. (2022). Stable robot manipulator parameter identification: A closed-loop input error approach. Automatica, 141(July)
- Perrusquia A & Yu W. (2022). Neural H₂ Control Using Continuous-Time Reinforcement Learning. IEEE Transactions on Cybernetics, 52(6)
- Flores-Campos JA, Perrusquia A, Hernandez-Gomez LH, Gonzalez N & Armenta-Molina A. (2021). Constant Speed Control of Slider-Crank Mechanisms: A Joint-Task Space Hybrid Control Approach. IEEE Access, 9
- Perrusquia A & Yu W. (2021). Discrete-Time H2 Neural Control Using Reinforcement Learning. IEEE Transactions on Neural Networks and Learning Systems, 32(11)
- Perrusquía A, Yu W & Li X. (2021). Nonlinear control using human behavior learning. Information Sciences, 569
- Perrusquía A & Yu W. (2021). Identification and optimal control of nonlinear systems using recurrent neural networks and reinforcement learning: An overview. Neurocomputing, 438
- Perrusquía A & Yu W. (2021). Continuous-time reinforcement learning for robust control under worst-case uncertainty. International Journal of Systems Science, 52(4)
- Perrusquía A, Yu W & Li X. (2021). Multi-agent reinforcement learning for redundant robot control in task-space. International Journal of Machine Learning and Cybernetics, 12(1)
- Perrusquia A, Flores-Campos JA, Torres-Sanmiguel CR & Gonzalez N. (2020). Task Space Position Control of Slider-Crank Mechanisms Using Simple Tuning Techniques Without Linearization Methods. IEEE Access, 8
- Perrusquia A, Flores-Campos JA & Torres-San-Miguel CR. (2020). A Novel Tuning Method of PD With Gravity Compensation Controller for Robot Manipulators. IEEE Access, 8
- Perrusquía A & Yu W. (2020). Robot Position/Force Control in Unknown Environment Using Hybrid Reinforcement Learning. Cybernetics and Systems, 51(4)
- Perrusquía A & Yu W. (2020). Robust control under worst‐case uncertainty for unknown nonlinear systems using modified reinforcement learning. International Journal of Robust and Nonlinear Control, 30(7)
- Yu W & Perrusquía A. (2020). Simplified Stable Admittance Control Using End-Effector Orientations. International Journal of Social Robotics, 12(5)
- Perrusquía A & Yu W. (2020). Human-in-the-Loop Control Using Euler Angles. Journal of Intelligent & Robotic Systems, 97(2)
- Flores Campos JA & Perrusquía A. (2019). Slider position control for slider-crank mechanisms with Jacobian compensator. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 233(10)
- Perrusquía A, Yu W & Soria A. (2019). Position/force control of robot manipulators using reinforcement learning. Industrial Robot: the international journal of robotics research and application, 46(2)
- Perrusquía A, Yu W, Soria A & Lozano R. (2017). Stable admittance control without inverse kinematics. IFAC-PapersOnLine, 50(1)
Conference Papers
- KN K, Ignatyev D, Tsourdos A & Perrusquia A. (2024). Advancing Fault Diagnosis in Aircraft Landing Gear: An innovative two-tier Machine Learning Approach with intelligent sensor data management
- Gruffeille C, Perrusquía A, Tsourdos A & Guo W. (2024). Disaster Area Coverage Optimisation Using Reinforcement Learning
- Bildik E, Tsourdos A, Perrusquía A & Inalhan G. (2024). Swarm Decoys Deployment for Missile Deceive using Multi-Agent Reinforcement Learning
- Perrusquía A & Guo W. (2024). A Novel Physics-Informed Recurrent Neural Network Approach for State Estimation of Autonomous Platforms
- Zou M, Perrusquía A & Guo W. (2024). Explaining Data-Driven Control in Autonomous Systems: A Reinforcement Learning Case Study
- Wang Y, Perrusquia A & Ignatyev D. (2024). Flying Like Birds: Leveraging Distributed Aerodynamic Data for Enhanced Self-Sensing and Flight State Prediction
- Wang Y, Perrusquia A & Ignatyev D. (2024). Towards Bio-Inspired Control of Aerial Vehicle: Distributed Aerodynamic Parameters for State Prediction
- Kacker T, Perrusquia A & Guo W. (2023). Multi-Spectral Fusion using Generative Adversarial Networks for UAV Detection of Wild Fires
- Singh G, Perrusquía A & Guo W. (2023). A Two-Stages Unsupervised/Supervised Statistical Learning Approach for Drone Behaviour Prediction
- Fraser B, Perrusquía A, Panagiotakopoulos D & Guo W. (2023). Hybrid Deep Neural Networks for Drone High Level Intent Classification using Non-Cooperative Radar Data
- Flores-Campos JA & Perrusquía A. (2023). Robust Control of Linear Systems: A Min-Max Reinforcement Learning Formulation
- Fraser B, Perrusquía A, Panagiotakopoulos D & Guo W. (2023). A Deep Mixture of Experts Network for Drone Trajectory Intent Classification and Prediction using Non-Cooperative Radar Data
- Perrusquía A & Guo W. (2022). Cost Inference of Discrete-time Linear Quadratic Control Policies using Human-Behaviour Learning
- Perrusquia A & Guo W. (2022). Performance Objective Extraction of Optimal Controllers: A Hippocampal Learning Approach
- Mendoza J, Perrusquia A & Flores-Campos JA. (2022). Mechanical Advantage Assurance Control of Quick-return Mechanisms in Task Space
- Perrusquia A, Garrido R & Yu W. (2021). An Input Error Method for Parameter Identification of a Class of Euler-Lagrange Systems
- Perrusquia A & Yu W. (2021). Human-Behavior Learning for Infinite-Horizon Optimal Tracking Problems of Robot Manipulators
- Perrusquia A, Yu W & Li X. (2020). Robust Control in the Worst Case Using Continuous Time Reinforcement Learning
- Perrusquia A, Yu W & Li X. (2020). Redundant Robot Control Using Multi Agent Reinforcement Learning
- Perrusquia A & Yu W. (2020). Neural H2 Control Using Reinforcement Learning for Unknown Nonlinear Systems
- Perrusquia A, Yu W & Soria A. (2019). Large space dimension Reinforcement Learning for Robot Position/Force Discrete Control
- Perrusquia A & Yu W. (2019). Task space human-robot interaction using angular velocity Jacobian
- Perrusquia A, Yu W & Li X. (2019). Impedance Control without Environment Model by Reinforcement Learning
- Perrusquia A, Yu W & Soria A. (2019). Optimal contact force of Robots in Unknown Environments using Reinforcement Learning and Model-free controllers
- Perrusquia A, Flores-Campos JA & Yu W. (2019). Simple Optimal Tracking Control for a Class of Closed-Chain Mechanisms in Task Space
- Perrusquia A, Tovar C, Soria A & Martinez JC. (2016). Robust controller for aircraft roll control system using data flight parameters