Cyberflex

Software

Our software team was intentional about achieving its learning goals: understanding signal processing, optimizing machine learning models, predicting gestures from signals in real time, and creating a user-friendly data collection system.

What We Accomplished!

  • Learned to process sEMG signals
  • Trained and optimized machine learning (CNN) models
  • Optimized an sEMG data collection system for multiple sensors
  • Successfully implemented real-time gesture recognition

Signal Processing
One of our learning goals was to understand how sEMG signals are processed so that they can be used properly in our context. The following was our approach:

This project involves two distinct pipelines for processing sEMG data to control the prosthetic arm: one for raw signals and one for pre-processed signals. Although the sensors we used to detect EMG signals have pins that output the filtered and raw signal respectively, we decided it would still be beneficial to learn how raw sEMG signals are processed. We therefore created two pipelines: one that handles raw sEMG signals directly from the sensors, and one that assumes the sEMG data has already been pre-processed.

The first pipeline is designed to handle raw sEMG signals directly from the sensors. It runs an FFT (Fast Fourier Transform) on the raw signals and processes them through several stages, including filtering, rectification, and normalization. It begins by applying a bandpass filter to isolate the relevant frequency range and remove unwanted noise and artifacts. The signal is then rectified so that it only has positive values, followed by a moving RMS envelope to smooth it. Finally, the processed signal is normalized against maximum voluntary contraction (MVC) values to standardize the data for consistent use across different users. This pipeline allows real-time control of the prosthetic arm by interpreting the muscle signals from scratch.
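To make the raw pipeline concrete, here is a minimal sketch in Python (using NumPy and SciPy) of the filtering, rectification, moving-RMS, and MVC-normalization stages. The sampling rate, filter band and order, window length, and MVC value here are illustrative assumptions, not our exact settings.

```python
# Minimal sketch of the raw sEMG pipeline described above.
# The sampling rate, filter band/order, RMS window, and MVC value are
# illustrative assumptions, not the exact values from our final system.
import numpy as np
from scipy.signal import butter, filtfilt

def process_raw_semg(raw, fs=1000.0, band=(20.0, 450.0), rms_window_ms=100, mvc=1.0):
    """Bandpass filter, rectify, smooth with a moving RMS, and normalize to MVC."""
    # 1) Bandpass filter to isolate the relevant sEMG frequency range
    #    and reject low-frequency motion artifacts and high-frequency noise.
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, raw)

    # 2) Full-wave rectification so the signal only has positive values.
    rectified = np.abs(filtered)

    # 3) Moving RMS envelope to smooth the rectified signal.
    win = max(1, int(fs * rms_window_ms / 1000.0))
    kernel = np.ones(win) / win
    envelope = np.sqrt(np.convolve(rectified ** 2, kernel, mode="same"))

    # 4) Normalize against the maximum voluntary contraction (MVC) value
    #    so the data is comparable across users.
    return envelope / mvc
```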

The second pipeline assumes that the sEMG data has already been pre-processed. In this case, the raw signal has already been filtered, rectified, and normalized, so the pipeline focuses on using that processed data to control the prosthetic arm. This streamlined pipeline reduces computational complexity and speeds up the response time, since the signal-processing steps have already been completed. Both pipelines convert sEMG signals into data we can use to train the machine learning models that drive the prosthetic's movements, with the first pipeline offering a more comprehensive solution for real-time signal acquisition and processing.

In the end, to keep things simpler and more computationally efficient, we used the pre-processed signals in our final data collection and model training efforts.
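For context, here is a minimal sketch of the kind of small 1D CNN we trained on fixed-length windows of pre-processed sEMG. It is written in Keras as an illustrative framework choice; the window length, number of sensor channels, layer sizes, and number of gestures are assumptions, not our exact architecture.

```python
# Illustrative sketch of a small 1D CNN gesture classifier for windows of
# pre-processed sEMG. Framework, shapes, and layer sizes are assumptions.
import tensorflow as tf

NUM_SENSORS = 4       # sEMG channels (assumption)
WINDOW_SAMPLES = 200  # samples per classification window (assumption)
NUM_GESTURES = 5      # gestures to recognize (assumption)

def build_gesture_cnn():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW_SAMPLES, NUM_SENSORS)),
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
    ])

model = build_gesture_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)
```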

Plots

To the right is an example of the plots we get after training our model. As the model trains, its “loss” (also known as error) decreases steadily, which shows that the model is learning. The bottom plot shows our validation accuracy, which in this case rose steadily until it reached around 94%.
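As a reference for how curves like these can be produced, here is a small sketch that plots training loss and validation accuracy from a Keras-style `history` object (the framework and key names are assumptions carried over from the sketch above).

```python
# Sketch of plotting training loss and validation accuracy from a Keras
# `history` object (e.g. the commented-out model.fit call above).
import matplotlib.pyplot as plt

def plot_training_curves(history):
    fig, (ax_loss, ax_acc) = plt.subplots(2, 1, figsize=(6, 6))

    # Training loss should decrease steadily as the model learns.
    ax_loss.plot(history.history["loss"], label="training loss")
    ax_loss.set_xlabel("epoch")
    ax_loss.set_ylabel("loss")
    ax_loss.legend()

    # Validation accuracy should rise toward its plateau (~94% in our run).
    ax_acc.plot(history.history["val_accuracy"], label="validation accuracy")
    ax_acc.set_xlabel("epoch")
    ax_acc.set_ylabel("accuracy")
    ax_acc.legend()

    plt.tight_layout()
    plt.show()
```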

Lastly, we created a function that sends our prediction over to the Arduino so that the firmware code can move the motors based on the predicted gesture.
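Here is a minimal sketch of what that function could look like using pyserial; the port name, baud rate, and newline-terminated message format are assumptions about how the firmware reads gestures.

```python
# Sketch of sending the predicted gesture index to the Arduino over serial.
# The port, baud rate, and newline-terminated message format are assumptions;
# the firmware maps each index to the corresponding motor movement.
import serial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port/baud are assumptions

def send_gesture(gesture_index: int) -> None:
    """Send one gesture index per message, terminated by a newline."""
    arduino.write(f"{gesture_index}\n".encode("ascii"))
```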

We then repeat this predict-and-send loop until we wish to end the session.
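Putting the pieces together, the real-time loop looks roughly like the sketch below: read a window of pre-processed sEMG, predict the gesture, send it to the Arduino, and repeat until the session is stopped. `read_window()` is a hypothetical helper standing in for our data collection system.

```python
# Sketch of the real-time recognition loop. `read_window()` is a hypothetical
# helper that returns the latest (WINDOW_SAMPLES, NUM_SENSORS) array of
# pre-processed sEMG from the data collection system.
import numpy as np

def run_session(model, read_window, send_gesture):
    try:
        while True:
            window = read_window()                          # latest sEMG window
            probs = model.predict(window[np.newaxis, ...])  # add batch dimension
            gesture = int(np.argmax(probs, axis=-1)[0])     # most likely gesture
            send_gesture(gesture)                           # tell the Arduino to move
    except KeyboardInterrupt:
        print("Session ended.")
```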

And all of it coming together!

Below is an explanation of everything software-related and how it was done!

You can view our code here on our public GitHub Repository!

Source Code