Throughout this sprint, we created a 3D sketch model of our final vision while working on new hand and forearm CAD and researching optimal sensor placements to train our final models for Sprint 3.
Mechanical
The CAD for the hand has been printed, while the CAD for the wrist and stand is still being worked on. We identified a few problems with the hand that will be fixed in the new CAD.
Electrical
We have learned about optimal muscle placements for the sensors. We have also written Arduino code that can drive five servo motors to individually control the fingers and differentiate between rock, paper, and scissors gestures. We also created code that uses our trained model to predict gestures, but we haven't been able to fully implement it yet due to issues with our data collection.
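For reference, here is a minimal host-side sketch (in Python, not our actual Arduino firmware) of how a predicted gesture could be mapped to five per-finger servo angles and sent to the Arduino over serial. The port name, baud rate, message format, and angle values are placeholders rather than the protocol we have implemented.

```python
import time
import serial  # pyserial

# Hypothetical per-finger servo angles (degrees) for each gesture.
# 0 = finger fully curled, 180 = fully extended; real values depend on the CAD.
GESTURE_ANGLES = {
    "rock":     [0, 0, 0, 0, 0],            # all fingers curled
    "paper":    [180, 180, 180, 180, 180],  # all fingers extended
    "scissors": [0, 180, 180, 0, 0],        # index and middle extended
}

def send_gesture(port: serial.Serial, gesture: str) -> None:
    """Send one comma-separated line of five servo angles to the Arduino."""
    angles = GESTURE_ANGLES[gesture]
    line = ",".join(str(a) for a in angles) + "\n"
    port.write(line.encode("ascii"))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as arduino:
        time.sleep(2)  # give the Uno time to reset after the port opens
        for g in ["rock", "paper", "scissors"]:
            send_gesture(arduino, g)
            time.sleep(1)
```

On the Arduino side, the firmware would parse each comma-separated line and write the five angles to the corresponding servos.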
Software
We have created machine learning models that can distinguish between gestures with a high degree of accuracy. We also began work on a script that can detect these gestures in real time. We hope to implement these models in our next sprint.
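As a rough illustration of the approach (the specific classifier and features below are assumptions, not necessarily the exact model we trained), a window of multi-channel sEMG data can be reduced to simple features and fed to an off-the-shelf classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def window_features(window: np.ndarray) -> np.ndarray:
    """Per-channel features for one sEMG window: mean absolute value and RMS."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

def train_gesture_model(windows, labels):
    """windows: list of (samples, channels) arrays; labels: gesture names."""
    X = np.array([window_features(w) for w in windows])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
    return model
```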
3D Printed Sketch
We used this prototype to test the integration of our machine learning and electrical/software work while the code is running. It is an improvement over our cardboard model, but it has some issues, such as routing the strings through the fingers, that the new CAD will solve. We are using five servos and an Arduino Uno R4.
What we hope to accomplish in the next sprint!
We plan to create a new 3D printed mechanical arm with a forearm block that can act as a stand. We also hope to switch the finger motors to DC motors, based on what the mechanical team decided to use in the CAD model. We will also fix our script to read sEMG data, predict a gesture using our model (which we will also train on more data and a larger number of gestures), and translate the prediction into motor commands to control the hand. Finally, we will update the placement of each electrode to optimize muscle signal readings. A rough sketch of this pipeline appears below.
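As a sketch of the real-time pipeline we are aiming for, the loop would look roughly like the following. Every name and value here is a placeholder: the window size, channel count, gesture-to-angle mapping, model file, and the acquisition function all stand in for code we still need to write.

```python
import numpy as np
import serial  # pyserial
import joblib

WINDOW_SAMPLES = 200   # sEMG samples per prediction window (placeholder)
GESTURE_COMMANDS = {   # per-finger servo angles, as in the earlier sketch
    "rock": "0,0,0,0,0",
    "paper": "180,180,180,180,180",
    "scissors": "0,180,180,0,0",
}

def read_semg_window() -> np.ndarray:
    """Placeholder: return a (WINDOW_SAMPLES, channels) block from the electrodes."""
    raise NotImplementedError("replace with the real acquisition code")

def extract_features(window: np.ndarray) -> np.ndarray:
    # Same mean-absolute-value + RMS features used when training the model.
    return np.concatenate([np.mean(np.abs(window), axis=0),
                           np.sqrt(np.mean(window ** 2, axis=0))])

def run(model_path: str, port_name: str) -> None:
    model = joblib.load(model_path)  # trained gesture classifier
    with serial.Serial(port_name, 115200) as hand:
        while True:
            gesture = model.predict([extract_features(read_semg_window())])[0]
            hand.write((GESTURE_COMMANDS[gesture] + "\n").encode("ascii"))
```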