AR3n: A Reinforcement Learning-based Assist-As-Needed Controller for Robotic Rehabilitation

In this project, we present AR3n (pronounced "Aaron"), an assist-as-needed (AAN) controller that uses reinforcement learning to provide adaptive assistance during a robot-assisted handwriting rehabilitation task. AR3n employs the soft actor-critic algorithm to derive a model-free controller for upper limb stroke rehabilitation. Unlike previous AAN controllers, our method requires neither manual tuning of controller parameters nor patient-specific physical models. We propose the use of a virtual patient model to generalize AR3n across multiple subjects. The system modulates robotic impedance based on a subject's tracking error while minimizing the amount of robotic assistance, delivering stable real-time assistance and preventing over-reliance on the robot. The controller is experimentally validated through a set of simulations and human subject experiments, in which we compare it against traditional rule-based controllers and a Learning-from-Demonstration controller. Finally, we demonstrate the efficacy and superiority of AR3n over rule-based controllers through a human subject study.
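The core trade-off described above — modulating impedance from tracking error while penalizing the amount of assistance — can be sketched as a reward-shaping problem. The snippet below is a minimal illustration, not the authors' implementation: the impedance gain, the proportional assistive-force law, and the weights `W_ERROR` / `W_ASSIST` are all illustrative assumptions.

```python
# Hypothetical sketch of the assist-as-needed objective:
# an RL agent (e.g. soft actor-critic) would choose the impedance
# gain; the environment applies an assistive force proportional to
# tracking error, and the reward trades off tracking accuracy
# against the magnitude of robotic assistance.

W_ERROR = 1.0    # weight on the tracking-error penalty (assumed)
W_ASSIST = 0.1   # weight on the assistance penalty (assumed)

def assistive_force(gain: float, tracking_error: float) -> float:
    """Impedance-style assistance: force grows with tracking error."""
    return gain * tracking_error

def reward(tracking_error: float, force: float) -> float:
    """Penalize both poor tracking and excessive robotic assistance."""
    return -(W_ERROR * tracking_error ** 2 + W_ASSIST * force ** 2)
```

Under this shaping, perfect tracking with zero assistance yields the maximum reward of 0, so the agent is pushed to supply only as much impedance as the subject's error warrants — the assist-as-needed behavior the abstract describes.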