A traveling music box that follows a track of music, reading it at various speeds and producing the corresponding notes mechanically.
Our first sprint was primarily focused on research and on applying what we had learned in previous projects to our current MVP. Since we wanted to order parts as early as possible, it was essential to figure out what the musical mechanism would be. The mechanical team tried using pipes, solenoids, and a kalimba to produce sound; however, we found these methods to be too quiet, imprecise, or difficult to replicate. By the end of the sprint we did have a sound-making prototype working with the kalimba, a four-bar linkage, and a solenoid. For the electrical and software side of the prototyping, we recycled the Mini Project 3 line-follower robot code and chassis to get the robot driving. We also experimented with sensing different colors; however, the IR sensor was highly inaccurate. Thus we ended the sprint with a set of realizations and proposed solutions for the next sprint.
A traveling music box that follows a track of music, reading it at various speeds and producing the corresponding notes mechanically.
Our MVP for this sprint stayed the same; however, we had two major changes in mind after our first sprint review: using OpenCV on a Raspberry Pi for color detection, and an actual music box instead of a kalimba. We chose these changes because the Pi would have the processing power to run OpenCV, and the music box would offer a wider range of notes in a more compact space as well as a lower actuating force. We started out optimistic about this direction; however, with the sprint review changed to a check-in and other projects piling up, we got less integration done than iteration. The Raspberry Pi turned out to be much more trouble than it was worth: setting up the dependencies was not working, we were unfamiliar with communicating between it and an Arduino, and it would take a toll on the budget. As for the music box, the mechanical team worked on getting more familiar with the mechanism and looked for tutorials on how to play it with servos. At this time we were still considering a track of colorful music notes laid out in a circle on the ground, which the robot could line-follow around while detecting the colors of the notes to play. Thus we ended the sprint with a set of realizations and proposed solutions for the next sprint.
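For reference, below is a minimal sketch of the kind of HSV-based color detection we were exploring with OpenCV. The color names and threshold values are illustrative placeholders, not tuned numbers from our track.

import cv2
import numpy as np

# Hypothetical HSV ranges for the colored notes on the track.
COLOR_RANGES = {
    "red":   ((0, 120, 70), (10, 255, 255)),
    "green": ((40, 70, 70), (80, 255, 255)),
    "blue":  ((100, 150, 50), (140, 255, 255)),
}

def detect_note_color(frame):
    # Classify by whichever color mask covers the most pixels.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    best_color, best_area = None, 0
    for name, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        area = cv2.countNonZero(mask)
        if area > best_area:
            best_color, best_area = name, area
    return best_color

cap = cv2.VideoCapture(0)  # first attached camera
ok, frame = cap.read()
if ok:
    print(detect_note_color(frame))
cap.release()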
Our group would like to design a log-shaped music box that plays a tune live based on a person's facial expression.
Our MVP for this project changed significantly from our previous two sprints, as we had an intense ideation session at the beginning of the sprint. We made two major changes due to time, ability, and budget constraints. First, the music box would be stationary rather than traveling to read notes. Second, since we were no longer reading notes off of a track and were already tethered to a laptop, we decided to use facial recognition and play a song based on facial expressions, as we still wanted a playful and interactive aspect to our project. With the project overhaul we were able to eliminate many of our practical design concerns, including sizing the robot to drive, interfacing the sensing electronics with the servo-driving Arduino, and sensing music notes off of a surface. But we still had a significant amount of work cut out for us in this sprint: none of the code for these pivots, covering how the music would be played, facial recognition, and servo movement, had been written. By the end of the sprint, however, after a difficult run-in with MIDI files, it was all working. As for the mechanical portion, we could still use the case from Sprint 2; however, there was still a lot of work to do to get the servos to line up correctly with the music box notes. This was due to a combination of factors, including the need to reset the wafers after they were struck by the spring wire, which was held down by a loose grill that could easily shift if something got misaligned. Even with some of these issues, we were still able to get some of the notes to play. Finally, we got our team bonding in by creating small clay animals to go on the outside of the case.
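A rough sketch of the resulting pipeline is below. The write-up does not name the exact libraries we used, so the deepface emotion call and the one-byte-per-note serial protocol to the servo Arduino are assumptions for illustration, and the note sequences are placeholders rather than our actual songs.

import time
import cv2
import serial
from deepface import DeepFace

# Placeholder hard-coded note sequences (indices into the music box comb),
# one per emotion; the real songs were different.
SONGS = {
    "happy":    [0, 2, 4, 5, 7],
    "sad":      [7, 5, 4, 2, 0],
    "angry":    [0, 0, 7, 7],
    "fear":     [1, 3, 1, 3],
    "surprise": [0, 4, 7, 4, 0],
    "disgust":  [2, 1, 2, 1],
    "neutral":  [0, 4, 0],
}

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # servo Arduino (port is an assumption)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    result = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    emotion = result[0]["dominant_emotion"]  # result shape varies by deepface version
    for note in SONGS.get(emotion, SONGS["neutral"]):
        arduino.write(bytes([note]))  # Arduino plucks the matching tine
        time.sleep(0.4)               # crude tempo control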
Our group would like to design a log-shaped music box that plays a tune live based on a person's facial expression.
Our MVP for this final mini-sprint remained the same, as we mainly sought to polish what we had shown at our third sprint review. We now had seven different hard-coded songs working on the servos, one for each of the seven emotions recognized by the algorithm (neutral, happy, sad, angry, fear, surprise, disgust). We then iterated on different grill designs and made one that worked well enough that the lower half of the range, the C major scale from C4 to C5, worked when played. We were also finally able to finish our aesthetic case, giving it a beautiful, natural log-like design, along with the clay animals we had created for team bonding.
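For reference, here is a sketch of how that working range might map onto servo channels, assuming one servo per tine and the same hypothetical one-byte serial protocol as above; the channel assignments are placeholders that depend on the physical layout.

import time
import serial

# Placeholder mapping from the working C4-C5 notes to servo channels;
# the real assignments depend on how the servos line up with the tines.
NOTE_TO_SERVO = {
    "C4": 0, "D4": 1, "E4": 2, "F4": 3,
    "G4": 4, "A4": 5, "B4": 6, "C5": 7,
}

def play(notes, link, gap=0.4):
    # Send one channel byte per note; the Arduino plucks and resets that tine.
    for name in notes:
        link.write(bytes([NOTE_TO_SERVO[name]]))
        time.sleep(gap)

link = serial.Serial("/dev/ttyACM0", 9600)  # servo Arduino (port is an assumption)
play(["C4", "E4", "G4", "C5"], link)        # simple arpeggio test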