LogaRhythm

Play the music of your feelings.


About the Project

Our minimum viable product is a log-shaped music box that plays a short song based on a user's facial expression.

We strive to bring joy through fun music and aesthetic design.

System Overview

The LogaRhythm is a delightful integration of programming, circuitry, and fabrication. Together, these elements let facial recognition drive servos that mechanically play notes on a music box.
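As a rough illustration of that pipeline, the sketch below grabs a webcam frame, classifies the expression, and tells the Arduino which tune to play over serial. The fer and pyserial packages, the port name, and the emotion-to-song table are our assumptions for illustration, not necessarily the exact code running on the box.

# Minimal sketch of the sensing half of the system (assumptions noted above).
import cv2
import serial
from fer import FER

EMOTION_TO_SONG = {       # hypothetical emotion -> tune mapping
    "happy": "twinkle",
    "sad": "moonlight",
    "neutral": "lullaby",
}

detector = FER()                               # pretrained expression classifier
arduino = serial.Serial("/dev/ttyACM0", 9600)  # port name varies by machine
cap = cv2.VideoCapture(0)                      # default webcam

ret, frame = cap.read()
if ret:
    emotion, _score = detector.top_emotion(frame)   # e.g. ("happy", 0.93)
    song = EMOTION_TO_SONG.get(emotion, "lullaby")  # fall back if no face is found
    arduino.write((song + "\n").encode())           # newline-terminated tune name
cap.release()
arduino.close()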

Project Objectives

Our team members each had unique and ambitious learning goals for this project. One thing we all wanted was to work on something we would be passionate about and could grow fond of, so the project had to fit all of these criteria. On the mechanical side, we wanted to make music by mechanical means rather than through a speaker. On the electrical and software side, we wanted to pair that mechanically generated music with facial recognition software interfacing between Python and Arduino. Beyond these technical aspects, we wanted to improve our design process, including iteration, integration, and documentation. To fulfill our goal of building something we'd be fond of and passionate about, we decided to make it aesthetically pleasing and nature-inspired.

Mechanical Integration

We strove to marry form and function through a meticulously crafted case and carefully fabricated internal components.


Electrical Subsystem

We aimed to safely bridge the pathway between the software and mechanical modules.


The Process

Past Iterations... and Budgeting

Sprint One


Minimum Viable Product:

A traveling music box that follows and reads written music at various speeds, producing the corresponding notes mechanically.


Process

Our first sprint focused primarily on research and on adapting what we had learned in previous projects to our MVP. Since we wanted to order parts as early as possible, it was essential to settle on a musical mechanism. The mechanical team tried pipes, solenoids, and a kalimba, but found each method either too quiet, too imprecise, or too difficult to replicate. By the end of the sprint we did have a working sound-making prototype built from the kalimba, a four-bar linkage, and a solenoid. On the electrical and software side, we recycled our Mini Project 3 line-follower robot code and chassis to get the robot driving, and we experimented with sensing different colors, though the IR sensor proved highly inaccurate. We ended the sprint with the following realizations and proposed solutions:

  • Regular sheet music is too small to read, especially with the IR sensors → use colored sticky notes as music notes on the ground
  • The mechanical method of generating sounds takes up a large amount of room → create an efficient servo array (see the sketch after this list)
  • The moving robot and music-playing aspects of the project were not fully integrated → focus more on integration next sprint
  • Every note played needs a mechanism → scale down the number of notes played to make a smaller, more robust robot
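One simple way to drive a servo-per-note array like the one proposed above is a one-byte-per-note serial protocol. The sketch below is a minimal illustration assuming pyserial, a five-note scale, and hypothetical Arduino firmware that strikes the tine matching each index.

# Minimal sketch of a one-byte-per-note serial protocol (assumptions noted above).
import time
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600)  # port name varies by machine
time.sleep(2)  # the Arduino resets when the port opens; give it a moment

SCALE = {"C": 0, "D": 1, "E": 2, "F": 3, "G": 4}  # note name -> servo index

def play(notes, beat=0.4):
    # Send one index per note; the (assumed) firmware strikes the matching tine.
    for name in notes:
        arduino.write(bytes([SCALE[name]]))
        time.sleep(beat)  # crude tempo control from the Python side

play(["E", "D", "C", "D", "E", "E", "E"])  # opening of "Mary Had a Little Lamb"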

Sprint Three


Minimum Viable Product:

A log-shaped music box that plays a tune live based on a person's facial expression.


Process

Our MVP changed significantly from our first two sprints after an intense ideation session at the beginning of this one. Time, ability, and budget constraints drove two major changes. First, the music box became stationary rather than traveling to read notes. Second, since we were no longer reading notes off a track and were already tethered to a laptop, we decided to use facial recognition and play a song based on facial expressions, keeping a playful, interactive aspect in the project. This overhaul eliminated many of our practical design concerns, including making the robot small enough to drive, interfacing the sensing communications with the servo-driving Arduino, and reading music notes off a surface.

Even so, we had our work cut out for us: none of the code for these pivots, covering how the music would be played, facial recognition, and servo movement, had been written yet. By the end of the sprint, after a difficult run-in with MIDI files, it was all working (a sketch of the kind of MIDI handling involved follows the list below). On the mechanical side we could still use the case from Sprint 2, but a lot of work remained to align the servos with the music box notes. The wafers had to be reset whenever they were struck by the spring wire, which was held down by a loose grill that could easily be knocked out of alignment. Even with these issues, we were able to get some of the notes to play. Finally, we fit in some team bonding by sculpting small clay animals to decorate the outside of the case.

  • Music box components still not inside the case → figure out how to fit them inside and decorate
  • The emotion-recognition interface is slightly clunky for the user → refine delays to make it more streamlined
  • Grill piece bowed at the middle → build a more rigid grill to hold down the spring wire and support the gear-spinning motor
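A minimal sketch of the MIDI-to-note-steps conversion mentioned above, assuming the mido package, a hypothetical song.mid file, and an eight-tine playable range; it pulls playable note events out of a file and turns them into (note, delay) steps that servo code could walk through.

# Minimal sketch of extracting playable notes from a MIDI file (assumptions noted above).
import mido

TINES = set(range(60, 68))  # MIDI note numbers the box can physically play (C4-G4)

def midi_to_steps(path):
    # Turn a MIDI file into (note, seconds-to-wait) pairs for the servo code.
    steps, wait = [], 0.0
    for msg in mido.MidiFile(path):  # iterating yields messages with delta times in seconds
        wait += msg.time
        if msg.type == "note_on" and msg.velocity > 0 and msg.note in TINES:
            steps.append((msg.note, wait))
            wait = 0.0
    return steps

print(midi_to_steps("song.mid")[:5])  # first few playable notes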