This approach is now fairly common: you can see examples of it at, say, https://www.google.com/search?q=telemedicine+heartrate+web+camera . I don't know what the original paper was (possibly Poh, McDuff & Picard, 2010, which matches the description below). The original paper's algorithm was:

- spatially average each video frame, giving one time series per RGB channel
- perform independent component analysis on the 3 channel traces
- select the component with the strongest Fourier peak near the heart rate, about 60 beats per minute (1 Hz)
- graph it

It plots a chart of the heartbeat of the person whose skin is most visible in the video (a sketch of the pipeline follows at the end of this note). Nowadays people would use transformer models for this task.

I'd like to amplify the origin of the components, live on a cell phone, so users can see that everybody's heartbeat is already visible to the naked eye, without cell phones, written on our skin, and likely used by our subconscious to inform our sense of care for each other.

I'd like to implement it in a general way, so the project could be expanded and normal users could process other data. Most software used to be written in a general way. It's important for people to know about things like this algorithm, because computers are doing them already.

But the task is just for therapy. To put my mind back together. It was supposed to be a small casual project.
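Here is a minimal sketch of that pipeline, assuming OpenCV for video I/O and scikit-learn's FastICA for the blind source separation. The file name "face.mp4" and the 45-180 bpm search band are my assumptions, not from the original:

```python
import numpy as np
import cv2
from sklearn.decomposition import FastICA
import matplotlib.pyplot as plt

def rgb_means(path):
    """Spatially average every frame, giving one R, G, B value per frame."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes frames as BGR; reverse to RGB for readability.
        means.append(frame.reshape(-1, 3).mean(axis=0)[::-1])
    cap.release()
    return np.array(means), fps

def pulse_component(signals, fps, lo=0.75, hi=3.0):
    """Run ICA on the three channel traces and pick the component whose
    spectrum peaks hardest in a plausible heart-rate band (45-180 bpm)."""
    # Center and normalize each channel trace before unmixing.
    x = (signals - signals.mean(axis=0)) / signals.std(axis=0)
    sources = FastICA(n_components=3, random_state=0).fit_transform(x)
    freqs = np.fft.rfftfreq(len(sources), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    power = np.abs(np.fft.rfft(sources, axis=0)) ** 2
    # Component with the largest in-band spectral peak wins.
    best = np.argmax(power[band].max(axis=0))
    peak_hz = freqs[band][np.argmax(power[band, best])]
    return sources[:, best], peak_hz

if __name__ == "__main__":
    traces, fps = rgb_means("face.mp4")  # hypothetical input file
    pulse, peak = pulse_component(traces, fps)
    print(f"Estimated heart rate: {peak * 60:.0f} bpm")
    plt.plot(np.arange(len(pulse)) / fps, pulse)
    plt.xlabel("time (s)")
    plt.ylabel("ICA component (a.u.)")
    plt.show()
```

The band-limited spectral peak is a stand-in for "strongest Fourier component near the heart rate": restricting the search to plausible heart rates keeps slow lighting drift and mains flicker from winning the argmax.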