Estimation of grip patterns using reinforcement learning for a prosthetic arm
When an arm or another limb is severed or lost, a prosthetic device, also known as a prosthesis, can aid rehabilitation. While an upper-limb prosthesis helps restore motor abilities, the amputee loses the tactile sensation that the hand normally provides for grip control. It is therefore commonly assumed that restoring force feedback will help regulate prosthetic grasping force. This project introduces a measurement system that uses a neural network to enhance real-time detection of the dynamic motions of the five fingers. The prosthetic limb was fabricated using 3D printing technology, yielding a low-cost, efficient solution that can be tailored to the size, form, and colour preferences of each user. A Raspberry Pi single-board computer serves as the central unit that runs the machine learning models. Vision-based analysis was used to collect data in the form of neural-network hand landmarks that discriminate between distinct grip patterns for the prosthesis. The computer-vision hand-landmark approach follows a machine learning (ML) pipeline: a BlazePalm detector locates the hand in a single image, and a landmark model then derives twenty-one 3D landmarks within the detected bounding box. Tactile input, in addition to visual data, is the most effective way of relaying information about grip action. Under these conditions, the hand-landmark comparison algorithms, together with pressure data from the tactile sensors on the prosthetic arm, revealed distinct patterns that can be used to discriminate between grasp actions. Neural network models are trained on this combined data to predict hand grasp and reach motions. We further propose integrating the system with an EEG headset, so that the prosthetic arm predicts hand reach and grasp patterns through reinforcement learning driven by EEG signal feedback together with tactile feedback.
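
To illustrate the landmark-extraction stage, the following is a minimal sketch using the MediaPipe Hands Python API, which wraps the BlazePalm detector and the hand landmark model described above. The camera index and confidence thresholds are assumptions for illustration, not values taken from this work.

    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    # Stream frames from a camera (index 0 assumed) and extract the
    # 21 normalised 3D hand landmarks that MediaPipe produces per hand.
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(static_image_mode=False,
                        max_num_hands=1,
                        min_detection_confidence=0.5,
                        min_tracking_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR frames.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                landmarks = results.multi_hand_landmarks[0].landmark
                # Flatten to a 63-dimensional feature vector:
                # (x, y, z) for each of the 21 landmarks.
                features = [c for p in landmarks for c in (p.x, p.y, p.z)]
    cap.release()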
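
The fusion of landmark and tactile features could be realised with a small feed-forward network; the sketch below uses the Keras API and assumes a 63-dimensional landmark vector, five fingertip pressure readings, and a hypothetical number of grip classes (NUM_GRIPS), none of which are specified in this work.

    import tensorflow as tf

    NUM_GRIPS = 6  # hypothetical number of grip classes

    # Two input branches: vision-based landmarks and tactile pressure.
    landmark_in = tf.keras.Input(shape=(63,), name="hand_landmarks")   # 21 x (x, y, z)
    pressure_in = tf.keras.Input(shape=(5,), name="fingertip_pressure")  # assumed one sensor per finger

    # Concatenate the two modalities and classify the grip pattern.
    x = tf.keras.layers.Concatenate()([landmark_in, pressure_in])
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    x = tf.keras.layers.Dense(32, activation="relu")(x)
    out = tf.keras.layers.Dense(NUM_GRIPS, activation="softmax")(x)

    model = tf.keras.Model(inputs=[landmark_in, pressure_in], outputs=out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

For deployment on a Raspberry Pi, a model of this kind would typically be trained offline and then converted to TensorFlow Lite so that inference runs efficiently on the board.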
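
The proposed reinforcement learning stage is not specified in this work; as one possible shape, the sketch below shows tabular Q-learning in which a discretised state (hypothetically derived from EEG and tactile readings) selects among candidate grip patterns, with the reward assumed to be a scalar computed from the feedback signals. The state and action counts, learning rate, and exploration rate are all illustrative.

    import numpy as np

    N_STATES, N_ACTIONS = 16, 4        # hypothetical discretisation of feedback / grips
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    Q = np.zeros((N_STATES, N_ACTIONS))
    rng = np.random.default_rng()

    def select_grip(state: int) -> int:
        # Epsilon-greedy choice over candidate grip patterns.
        if rng.random() < EPSILON:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(Q[state]))

    def update(state: int, action: int, reward: float, next_state: int) -> None:
        # Standard tabular Q-learning update; the reward is assumed to be
        # a scalar derived from EEG signal feedback and tactile feedback.
        td_target = reward + GAMMA * Q[next_state].max()
        Q[state, action] += ALPHA * (td_target - Q[state, action])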