New Smartphone-based System To Monitor Body Vitals

  • Researchers have developed a method that uses the camera on a person’s smartphone or computer to capture their pulse and breathing rate from a real-time video of their face.
  • The system preserves privacy by operating on the device rather than in the cloud; machine learning (ML) captures subtle changes in the light reflecting off a person’s face.
  • The researchers trained the system on a dataset of facial videos paired with each person’s pulse and respiration data measured by standard instruments.
  • The system calculates vital signs using spatial and temporal information from the videos.

Researchers at the University of Washington (UW) and Microsoft Research have developed a system that uses a person’s smartphone or computer camera to read pulse and respiration from real-time video of their face, according to the university’s website.

The development comes at a time when telehealth has become a critical way for doctors to provide healthcare while minimizing in-person contact during the COVID-19 pandemic.

Machine learning for monitoring ‘vitals’

Researchers at the University of Washington used machine learning to capture subtle changes in how light reflects off a person’s face, changes that correlate with blood flow. The system then converts these changes into both pulse and respiration rates.
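
At its core, this kind of remote measurement rests on a simple signal-processing observation: the average brightness of skin pixels rises and falls slightly with each heartbeat. The Python sketch below illustrates that idea with a classical band-pass-and-FFT pipeline; it is a simplified stand-in, not the UW/Microsoft model, and the frame rate, filter band and function names are assumptions.

```python
# Minimal sketch of the core idea: the mean green value of skin pixels in
# each frame forms a time series whose dominant frequency is the pulse.
# Simplified classical pipeline, NOT the UW/Microsoft model; frame rate
# and filter band are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30.0  # assumed camera frame rate

def estimate_pulse_bpm(green_means: np.ndarray, fps: float = FPS) -> float:
    """Estimate heart rate in beats/min from per-frame mean green values
    of a face region (shape: (n_frames,))."""
    signal = green_means - np.mean(green_means)        # remove lighting drift
    # Band-pass to a plausible heart-rate band: 0.7-4.0 Hz (42-240 bpm).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)
    # Pick the dominant in-band frequency via FFT.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    power = np.abs(np.fft.rfft(filtered)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic check: a 72 bpm (1.2 Hz) pulse buried in noise.
t = np.arange(0, 20, 1.0 / FPS)
fake = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(t.size)
print(f"Estimated pulse: {estimate_pulse_bpm(fake):.1f} bpm")
```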

The researchers presented the system in December at the Neural Information Processing Systems (NeurIPS) conference. The team is now proposing an improved system to measure these physiological signals.

This system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin color, according to the researchers, who will present these findings on April 8 at the Association for Computing Machinery (ACM) Conference on Health, Inference, and Learning.

“Every person is different,” said lead study author Xin Liu, a UW doctoral student. “So, this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in.”

Personalized model for individual

The first version of this system was trained with a dataset that contained both videos of people’s faces and ‘ground truth’ information: each person’s pulse and respiration rate measured by standard instruments in the field.

The system then used spatial and temporal information from the videos to calculate both vital signs.
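
As a rough illustration of what “spatial and temporal information” can mean in practice, the PyTorch sketch below runs 3D convolutions over a short clip of face frames and regresses a per-frame pulse waveform, which could be trained against the ground-truth signals with a standard loss such as mean squared error. The architecture, layer sizes and the SpatioTemporalVitals name are illustrative assumptions, not the published network.

```python
# Illustrative PyTorch sketch: 3D convolutions pool spatial detail while
# preserving the time axis, yielding a per-frame pulse estimate. Layer
# sizes and the class name are assumptions, not the published architecture.
import torch
import torch.nn as nn

class SpatioTemporalVitals(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 3 color channels, T frames, H, W)
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AvgPool3d((1, 4, 4)),              # downsample space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # collapse space entirely
        )
        self.head = nn.Conv1d(32, 1, kernel_size=1)  # one value per frame

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip)            # (B, 32, T, 1, 1)
        x = x.squeeze(-1).squeeze(-1)      # (B, 32, T)
        return self.head(x).squeeze(1)     # (B, T) predicted pulse waveform

# A 64-frame clip of a 36x36 face crop -> a 64-sample waveform.
clip = torch.randn(1, 3, 64, 36, 36)
print(SpatioTemporalVitals()(clip).shape)  # torch.Size([1, 64])
```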

While the system worked well on some datasets, it still struggled with others that contained different people, backgrounds and lighting. This is a common problem known as ‘overfitting,’ the team said.

The researchers improved the system by having it produce a personalized machine learning model for each individual.
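
One plausible way to read “a personalized model for each individual” is few-shot adaptation: start from a model trained across many people, then take a handful of gradient steps on a short calibration clip from the target person. The sketch below (reusing the SpatioTemporalVitals sketch above) shows that pattern; the personalize helper and its hyperparameters are hypothetical, and the researchers’ actual procedure may differ.

```python
# Hypothetical personalization helper: clone a population-level model and
# fine-tune it for a few steps on one person's short calibration clip,
# labeled with a contact sensor. Step count and learning rate are arbitrary.
import copy
import torch

def personalize(base_model, calib_clip, calib_waveform, steps=5, lr=1e-4):
    """Return a copy of base_model adapted to a single person.
    calib_clip: (1, 3, T, H, W) video; calib_waveform: (1, T) ground truth."""
    model = copy.deepcopy(base_model)      # leave the shared model untouched
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(calib_clip), calib_waveform)
        loss.backward()
        opt.step()
    return model

# Usage, reusing the SpatioTemporalVitals sketch above:
personal = personalize(SpatioTemporalVitals(),
                       torch.randn(1, 3, 64, 36, 36),
                       torch.randn(1, 64))
```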

A Work in Progress!

Specifically, the personalized model learns to look for important areas in a video frame that likely contain physiological signals correlated with changing blood flow, across different contexts such as skin tones, lighting conditions and environments.

From there, it focuses on those areas to measure the pulse and respiration rate. While the new system outperforms its predecessor on more challenging datasets, especially for people with darker skin tones, there is still more work to do, the team said.
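
A common mechanism for “looking for important areas in a video frame” is a learned spatial attention mask that softly weights each pixel before pooling. The sketch below shows one minimal version; the SpatialAttention module and its sizes are illustrative assumptions rather than the published design.

```python
# Minimal spatial-attention sketch: a learned per-pixel score produces a
# soft mask over the frame, and features are averaged under that mask so
# regions carrying pulse-related changes dominate. Module name and sizes
# are illustrative assumptions.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel score

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) feature map from a face frame.
        mask = torch.sigmoid(self.score(feats))             # (B, 1, H, W)
        mask = mask / mask.sum(dim=(2, 3), keepdim=True)    # soft "where to look"
        return (feats * mask).sum(dim=(2, 3))               # (B, C) pooled

pooled = SpatialAttention(16)(torch.randn(2, 16, 36, 36))
print(pooled.shape)  # torch.Size([2, 16])
```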

Source: UW News