Abstract:
Music has the ability to influence both mental and physical health. Music therapy is the application of music to rehabilitate brain activity and to maintain both mental and physical health. Music therapy comes in two forms: active and receptive. In receptive therapy, the patient listens to suitable music tracks. Music therapy is normally used by people who suffer from disabilities or mental ailments, but the healing benefits of music can be experienced by anyone, at any age. This research proposes an Android music player application that automatically generates a playlist according to the user's emotional state, and which can be used in telemedicine as well as in day-to-day life.
Three categories of emotional state, namely happy, sad, and angry, were considered in this study. Live images of the user are captured from an Android device. The face detection API available in the Android platform is used to detect human faces and eye positions. Once a face is detected, the face area is cropped. The image is greyscaled and resized to a standard size to reduce noise and compress the image. The image is then sent to the MATLAB-based emotion recognition sub-system over a client-server socket connection. A Gaussian filter is applied to reduce noise further and maintain a high accuracy. Edges of the image are detected using Canny edge detection to capture the details of the facial features; the resulting images appear as sets of connected curves that indicate the surface boundaries.
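A minimal MATLAB sketch of this preprocessing stage is given below. The file name, target size, kernel size, and sigma are illustrative assumptions rather than the values used in the actual system:

```matlab
% Preprocessing sketch: greyscale, resize, Gaussian smoothing, Canny edges.
img  = imread('face_crop.jpg');          % cropped face region from the Android client (assumed file name)
grey = rgb2gray(img);                    % greyscale to reduce noise and data size
grey = imresize(grey, [128 128]);        % standard size (assumed) for the recognition stage
h    = fspecial('gaussian', [5 5], 2);   % 5x5 Gaussian kernel, sigma = 2 (assumed)
smooth = imfilter(grey, h, 'replicate'); % Gaussian filter suppresses remaining noise
edges  = edge(smooth, 'canny');          % connected curves marking surface boundaries
imshow(edges);
```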
Emotion recognition is carried out using training datasets of happy, sad, and angry images that are input to the emotion recognition sub-system implemented in MATLAB, using eigenface-based pattern recognition. To create the eigenfaces, an average face for each of the three categories is computed by averaging the database images in that category pixel by pixel. The average face is subtracted from each database image to obtain the differences between the images in the dataset and the average face, and each difference image is reshaped into a column vector. The covariance matrix is then calculated to find the eigenvectors and their associated eigenvalues, and the weights of the eigenfaces are computed by projecting each image onto the eigenface basis. To find the matching emotional label, the Euclidean distance between the weight vector of the input image and the weights of each category is calculated; the category with the lowest distance gives the class of the input image. The identified label (happy, sad, or angry) is sent back to the Android application. Songs pre-categorised as happy, sad, and angry are stored in the Android application; when the emotional label of the perceived face image is received, songs relevant to that label are loaded into the Android music player.
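The eigenface computation for a single category can be sketched in MATLAB as follows. The matrix names, the column-wise normalisation, and the reduced covariance trick (computing A'*A rather than A*A') are standard eigenface practice and are assumptions about the implementation rather than its exact code:

```matlab
% Eigenface sketch for a single emotion category.
% X is an N-by-M matrix whose M columns are the category's training
% images, each reshaped into an N-element column vector.
avgFace = mean(X, 2);                      % pixel-wise average face
A = X - repmat(avgFace, 1, size(X, 2));    % difference images
C = A' * A;                                % reduced M-by-M covariance matrix
[V, ~] = eig(C);                           % eigenvectors of the reduced problem
eigenfaces = A * V;                        % map back to image space (N-by-M)
for k = 1:size(eigenfaces, 2)              % normalise each eigenface
    eigenfaces(:, k) = eigenfaces(:, k) / norm(eigenfaces(:, k));
end
W = eigenfaces' * A;                       % weights of the training images

% Classification: project the preprocessed input image (inputVec, N-by-1)
% and measure Euclidean distances to the training weights. The same is
% repeated for all three categories; the smallest distance wins.
w = eigenfaces' * (inputVec - avgFace);
dists = sqrt(sum((W - repmat(w, 1, size(W, 2))).^2, 1));
minDist = min(dists);
```

Computing A'*A keeps the eigen-decomposition at the size of the training set (M-by-M) instead of the number of pixels (N-by-N), which is what makes the covariance step tractable for full-resolution face images.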
For validation, 200 face images were collected at the University of Kelaniya. A further 100 happy, 100 sad, and 100 angry images were collected for testing. Of the 100 happy test faces, 70 were detected as happy; of the 100 sad faces, 61 were detected as sad; and of the 100 angry faces, 67 were successfully detected. The overall accuracy of the developed system over the 300 test cases was 66%.
This concept can be extended for use in telemedicine, and the system can be made more robust to noise, different poses, and structural components. The system can also be extended to include other emotions that are recognizable via facial expressions.