The overarching goal of our research is to develop novel learning and control algorithms that enable robots to safely and efficiently collaborate with humans and other robots on complex tasks. We apply these algorithms across several areas, including aerial robotics, soft robotics, and human-robot interaction. Please check the summary of each project below, and feel free to contact us if you have any questions or want to know more details!
We gratefully acknowledge the National Science Foundation, Office of Naval Research, Science Foundation Arizona, Arizona Department of Health Services, Salt River Project, Northrop Grumman Corporation, and several internal funding sources for supporting our past and current research.
Unmanned aerial vehicles (UAVs) are popular in applications such as aerial photography, surveillance, search and rescue, and precision agriculture. However, autonomous operation of small UAVs in dynamic environments poses challenges for the design of both the vehicle hardware and the embedded autonomy algorithms. Our research in this area includes (1) exploring the design of morphing UAVs, (2) developing dynamic models and precision control algorithms for the new hardware, and (3) demonstrating aerial-physical interaction for navigation and manipulation.
> Software-in-the-Loop Simulation of Aerial Robots
Build a simulation pipeline to rapidly test algorithms for aerial robots.
Integrate ROS2-based controllers into a simulation environment with ArduPilot and PX4 flight controllers.
> Contact-Based Soft Aerial Robots
Exploit physical contact between multi-rotor drones and their environment for better control, manipulation, and safety.
Developing compliant multi-rotor drones for passive resilience during contact and detecting contacts via different sensing methods; modeling and simulating contacts and collisions between drones and their physical environment.
> Contact-Based Safe Navigation for Aerial Robots
Exploit physical contact between multi-rotor drones and their environment for better control, motion planning, and safety.
Developing a safe planning and control algorithm for collision-based efficient navigation, and building a simulator for RL-based planning that integrates a contact model and a recovery controller.
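To illustrate the idea of contact-aware recovery, here is a minimal planar sketch: a collision is detected from an acceleration residual spike, the spike direction gives an estimated contact normal, and the controller commands a retreat velocity along that normal. The threshold, gain, and interfaces are illustrative assumptions, not our actual implementation.

```python
# Hypothetical contact detection + recovery sketch for a planar drone.
# All thresholds and speeds below are illustrative values.

import math

ACCEL_THRESHOLD = 8.0  # m/s^2 residual above expected dynamics (assumed)

def detect_contact(measured_accel, expected_accel):
    """Return the estimated contact normal (unit vector) if a spike is seen."""
    rx = measured_accel[0] - expected_accel[0]
    ry = measured_accel[1] - expected_accel[1]
    mag = math.hypot(rx, ry)
    if mag < ACCEL_THRESHOLD:
        return None
    # The acceleration residual points away from the surface that was hit.
    return (rx / mag, ry / mag)

def recovery_velocity(contact_normal, speed=0.5):
    """Command a retreat along the contact normal at a fixed speed (m/s)."""
    return (speed * contact_normal[0], speed * contact_normal[1])

# Example: a hovering drone (zero expected accel) measures a lateral spike.
normal = detect_contact((-12.0, 0.0), (0.0, 0.0))
if normal is not None:
    cmd = recovery_velocity(normal)
```

In a full system this recovery command would hand back to the nominal planner once the residual subsides; here it only shows the detect-then-retreat structure.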
Soft robotics is reshaping the future of technology by developing flexible, adaptable systems that safely interact with humans and operate in complex environments. By utilizing soft, deformable materials, this work focuses on creating robots that address real-world challenges across various industries. Current projects include a precision-engineered soft robotic arm designed for advanced modeling and control, pneumatic fabric-based exosuits that offer personalized support for individuals in rehabilitation, and a soft pipe inspection robot capable of navigating intricate pipelines to ensure safe and efficient maintenance. These technologies aim to improve quality of life, enhance mobility, and enable safer infrastructure management. The driving mission is to push the boundaries of robotics through human-centered, adaptable designs that meet the growing demand for innovative solutions in healthcare, industry, and beyond, offering systems that are not only efficient but also intuitive in their interaction with people.
> Soft Robotic Arm
Modeling and control of a soft robotic arm.
Work on a pneumatically actuated soft robotic arm to test, train, and implement models and to develop control algorithms for tasks including, but not limited to, trajectory tracking.
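A common simplified kinematic model for arms like this is piecewise constant curvature (PCC); the sketch below computes the planar tip position of one constant-curvature segment. This is a generic textbook model offered for intuition, not necessarily the model used in the project.

```python
# Planar piecewise-constant-curvature (PCC) kinematics sketch for one
# soft segment of length `length` bending with constant curvature `kappa`.

import math

def pcc_tip(length, kappa):
    """Tip position (x, y) of one constant-curvature segment in its base frame."""
    if abs(kappa) < 1e-9:           # straight segment: tip lies on the y-axis
        return (0.0, length)
    theta = kappa * length          # total bending angle of the segment
    return ((1.0 - math.cos(theta)) / kappa, math.sin(theta) / kappa)

# A straight 0.3 m segment reaches (0, 0.3); bent into a quarter circle
# (theta = pi/2), the tip moves to (1/kappa, 1/kappa).
```

Chaining several such segments (composing the per-segment transforms) gives the full arm's forward kinematics, which is the usual starting point for trajectory-tracking control.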
> Soft Knee Exosuit
Design, characterize, and test a soft inflatable-actuator-based exosuit on healthy human subjects to evaluate the assistance it provides.
Design and evaluate a soft robotic exosuit powered by new inflatable actuators, and develop its controls. To evaluate the exosuit's effect during flexion and extension, surface electromyography (sEMG) sensors are placed to record muscle activity.
> Pipe Inspection Robot
This pipe inspection robot consists of several bistable inflatable fabric actuators, enabling it to navigate pipes of various sizes (4-6 inches in diameter) using inchworm locomotion.
The robot is designed to handle obstacles within the pipes. The large bistable actuator at the center of the robot generates impact force, allowing it to push away or break through obstructions, while the smaller bistable actuators at the head and tail adapt to changes in pipe diameter.
Ongoing work includes improving the bistable structure (materials, fabrication methods, etc.) to make it more reliable and robust, and controlling the robot to perform a jumping gait inside the pipe.
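The inchworm locomotion described above can be summarized as an anchor-extend-anchor-contract cycle. The toy model below tracks the tail and head positions along the pipe axis over one cycle; the exact actuation schedule and the stride value are illustrative assumptions.

```python
# Simplified model of the robot's inchworm gait along the pipe axis.
# Positions are in meters; the stride value is illustrative only.

def inchworm_cycle(tail, head, stride):
    """One gait cycle; returns the new (tail, head) positions."""
    # 1. Tail actuator anchors against the pipe wall.
    # 2. Central bistable actuator snaps to its extended state: head advances.
    head += stride
    # 3. Head actuator anchors; tail actuator releases.
    # 4. Central actuator snaps back to its contracted state: tail advances.
    tail += stride
    return tail, head

# Three cycles with a 5 cm stride move the robot about 15 cm down the pipe.
tail, head = 0.0, 0.2
for _ in range(3):
    tail, head = inchworm_cycle(tail, head, 0.05)
```

The bistable actuators make each snap between the contracted and extended states fast and repeatable, which is what makes this open-loop gait sequence practical.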
Robots are increasingly employed in close proximity to humans. For humans and robots to collaborate safely and efficiently, a robot needs to understand human intent, predict human actions, and account for human factors in order to optimize its own actions and complete tasks with humans safely, efficiently, and amicably. Here we explore a game-theoretic framework to model the bilateral inference and decision-making process between the human and the robot. We are interested in both proximal and physical tasks that involve joint decision-making and joint action between the human and the robot. One major challenge is modeling human actions in highly dynamic tasks, given the strong variability and uncertainty of human behavior. We will apply the developed algorithms in various human-robot collaboration scenarios, including autonomous vehicles, collaborative manufacturing, wearable robots, and assistive devices. For more details about how we apply these algorithms to autonomous vehicles, please check this page.
> Prospect-Theoretic Reinforcement Learning
Help AI better understand human preferences and decisions so that it can better assist people.
Integrate risk-aware cognitive models based on cumulative prospect theory (CPT) into interactive AI planning in an Overcooked environment.
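A core ingredient of CPT is the probability weighting function, which captures how people over-weight rare outcomes and under-weight near-certain ones. The sketch below uses the standard Tversky-Kahneman form; the parameter value gamma = 0.61 is a commonly cited experimental fit, used here for illustration rather than as the project's fitted value.

```python
# Cumulative prospect theory (CPT) probability weighting sketch:
# w(p) = p^g / (p^g + (1-p)^g)^(1/g), with g < 1 producing the classic
# inverse-S distortion of probabilities.

def cpt_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function."""
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

# A 1% chance is perceived as noticeably larger than 1%, while a 99%
# chance is perceived as less than certain.
```

Replacing raw probabilities with such weighted ones in an agent's expected-value computation is one way to make planning sensitive to human-like risk attitudes.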
> Game Theoretical Modeling of Human-Robot Interactions
Developing a game-theoretic controller for physical human-robot interaction scenarios, such as controlling assistive wearable robots.
We aim to integrate incomplete-information games with optimal control and reinforcement learning to infer human intent during HRI tasks and to model the human's possible learning process while interacting with the robot.
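To show the kind of structure these game-theoretic models build on, here is a toy two-player quadratic game solved by best-response iteration: each agent has a preferred scalar action but also pays a cost for disagreeing with the other, as in shared control of a wearable device. The cost form, coupling weight, and numbers are illustrative, not a fitted human model.

```python
# Toy two-player quadratic game solved by best-response iteration.
# Agent i minimizes (u_i - r_i)^2 + k * (u_i - u_other)^2, i.e. it trades
# off its own preference r_i against disagreement with the other agent.

def best_response(r_own, u_other, k):
    """Minimizer of (u - r_own)^2 + k * (u - u_other)^2 (closed form)."""
    return (r_own + k * u_other) / (1.0 + k)

def solve_nash(r_human, r_robot, k, iters=100):
    """Alternate best responses until the action pair settles (Nash point)."""
    u_h, u_r = r_human, r_robot
    for _ in range(iters):
        u_h = best_response(r_human, u_r, k)
        u_r = best_response(r_robot, u_h, k)
    return u_h, u_r

# With r_human = 0, r_robot = 1, and coupling k = 1, the agents settle at
# (1/3, 2/3): each compromises toward the other.
u_h, u_r = solve_nash(0.0, 1.0, 1.0)
```

In the incomplete-information setting the robot does not know r_human and must infer it online from observed actions, which is where the intent-inference and learning components come in.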