Computer Vision App Assists Mental Health Clinicians

Researchers at Carnegie Mellon University have been developing MultiSense technologies that automatically sense human non-verbal behaviors such as facial expressions, eye gaze, and head gestures, as well as vocal cues such as voice quality and tension. The goal is to assist clinicians working with mental health patients in diagnosing and treating disorders such as depression, anxiety, PTSD, schizophrenia, and autism.

Louis-Philippe Morency, PhD, and colleagues explain that one of the primary algorithms, from a computer-vision perspective, is facial landmark detection. It automatically identifies the positions of 68 "landmarks", key points on the face that have proven reliable to track over time, such as the eyebrows, the contour of the mouth, the eye corners, and the jawline. These landmarks are the cornerstone of later stages of analysis, because knowing their current configuration makes it much easier to recognize the facial expression. The landmark data is coupled with estimates of head tilt and eye gaze.
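
The article does not describe MultiSense's own implementation, but the 68-point scheme Morency mentions is the same one used by common open-source tools. The sketch below illustrates the idea with the dlib library and its standard pretrained 68-landmark model; the input file name is a hypothetical placeholder.

```python
# A minimal sketch of 68-point facial landmark detection using the
# open-source dlib library (not the MultiSense codebase itself).
import dlib
import cv2

# dlib's standard pretrained 68-landmark model, downloadable from dlib.net.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("video_frame.jpg")  # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    # Extract the 68 (x, y) key points: eyebrows, eye corners,
    # mouth contour, jawline, and so on.
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # Downstream stages (expression recognition, head-pose and gaze
    # estimation) would consume these points, e.g. tracking how the
    # mouth corners move over time.
```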

"The goal of the software is not to diagnose depression; that's always the job of the doctor," Morency explains. "We're building these algorithms as decision-support tools for clinicians so that they can do their own assessments. But from an academic perspective, we do want to know how well these behavioral markers correlate with the clinicians' assessments. We've done this work and seen around a 78% correlation. So it's not 100%, but the result is significant. We're definitely heading in the right direction!"
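
The reported figure can be read as a standard correlation between automatically extracted marker scores and clinician ratings. Below is a minimal, hypothetical sketch of how such a comparison could be computed with SciPy; the numbers are invented placeholders, not data from the CMU study.

```python
# Hypothetical sketch: correlating behavioral-marker scores with
# clinician assessments via Pearson's r. All values are placeholders.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-patient scores.
marker_scores = np.array([0.62, 0.71, 0.33, 0.88, 0.49, 0.57, 0.80, 0.41])
clinician_scores = np.array([14, 17, 8, 22, 11, 13, 19, 10])  # e.g. severity ratings

r, p_value = pearsonr(marker_scores, clinician_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```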

Source: MedGadget