In a novel use of Google Glass, researchers at Stanford are testing the device as a way to help autistic children recognize and classify emotions.
Catalin Voss and Nick Haber are leading the Autism Glass Project, which aims to build at-home treatments for autism. The system relies on machine learning for feature extraction: it detects so-called “action units” on faces, classifies emotions, and helps users read facial expressions.
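The project’s models are not public, but the action-unit approach can be illustrated with a minimal sketch. The mapping below uses well-known prototype combinations from the Facial Action Coding System (for example, AU6 cheek raiser plus AU12 lip corner puller signals a smile); the `classify_emotion` function and its threshold are hypothetical, purely for illustration.

```python
# Illustrative sketch only: the Autism Glass Project's actual models
# are not public. Maps detected facial Action Units (FACS codes) to a
# likely emotion via prototype AU combinations.

PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def classify_emotion(detected_aus):
    """Return the emotion whose prototype AUs best overlap the detection."""
    def score(emotion):
        proto = PROTOTYPES[emotion]
        return len(proto & detected_aus) / len(proto)
    best = max(PROTOTYPES, key=score)
    # Require a majority of the prototype's AUs; otherwise call it neutral.
    return best if score(best) > 0.5 else "neutral"

print(classify_emotion({6, 12}))        # prints "happiness"
print(classify_emotion({1, 2, 5, 26}))  # prints "surprise"
```

A production system would instead feed the AU activations into a trained classifier, but the lookup above captures the basic idea of going from facial features to an emotion label.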
The project’s first phase launched last year and was conducted in the lab. Google and the Packard Foundation both chipped in, providing the researchers with 35 Glass units and $379,408 in grant funds, respectively.
The current second phase, involving 100 children, is designed to let children “interact with their surrounding” using a special game called “Capture the Smile.”
Developed by MIT’s Media Lab, the game tasks children with wearing Glass and searching for individuals showing a specific emotion on their faces. By monitoring performance in this game and combining it with video analysis and questionnaires, a so-called “quantitative phenotype” of autism can be built for each study participant, providing a mathematical description of the physical manifestations of their autism. From there, the team can show how the device improves emotion recognition over the long term. The researchers also hope to use the findings to better understand the role visual engagement plays in emotion detection.
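The phenotype-building step can be pictured as aggregating several per-session measurements into one numeric profile per participant. The field names and values below are entirely hypothetical; the study’s actual metrics have not been published.

```python
# Purely illustrative: the study's real phenotype metrics are not public.
# Combines hypothetical per-session measurements into a per-participant
# "quantitative phenotype" vector that can be tracked over time.

from dataclasses import dataclass

@dataclass
class SessionData:
    game_accuracy: float         # hypothetical: fraction of emotions found in the game
    gaze_on_face_seconds: float  # hypothetical: face engagement from video analysis
    questionnaire_score: float   # hypothetical: caregiver-reported score

def phenotype_vector(sessions):
    """Average each measurement across sessions into one profile tuple."""
    n = len(sessions)
    return (
        sum(s.game_accuracy for s in sessions) / n,
        sum(s.gaze_on_face_seconds for s in sessions) / n,
        sum(s.questionnaire_score for s in sessions) / n,
    )

history = [SessionData(0.5, 2.0, 55), SessionData(0.75, 3.0, 62)]
print(phenotype_vector(history))  # prints (0.625, 2.5, 58.5)
```

Tracking such a vector across sessions is one simple way a change in emotion-recognition ability could be quantified over the course of the study.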
If the technology is validated for widespread clinical use, the team hopes to ease the bottleneck in autism treatment and, at some point in the future, even make these digital tools reimbursable.
The second phase of the project is expected to last for several months.