low-end van eck phreaking via transformer model

to make success more likely to start with, the idea is to put one of two prefabbed images on the screen: either top half white and bottom half black, or top half black and bottom half white. hopefully this roughly maximizes the low-frequency information associated with the screen.

1. randomly display one of the two prefabbed images
2. record the radio signal, maybe via an audio-cable antenna if there's no radio. note the sample rate and precise starting time.
3. at about the same time, record video from the desktop using something like ffmpeg -f x11grab; also note the precise starting time and sample rate.
4. maybe restart the feeds after some set interval like 30 seconds, if that makes timestamping and data slicing easier
5. for each timepoint, use as much of the recording as possible as the input and classify which image is displayed (rough sketches of the capture loop and the classifier are at the end of these notes)

the lowest frequency component might be around 120 Hz, so there's a good chance this can be made to work, and it would simply need a design fix if it doesn't. once demonstrated to work, the process could be improved to classify smaller portions of the image. honestly it might be more fun to sum periods!

i still have yet to train and use a transformer model for anything. i guess with van eck phreaking there are basically two parts that ideally feed back into each other until the parameters are sufficiently delineated: identifying the pixel clock, and amplifying the signal. amplifying the signal is significantly aided if its timepoints can be precisely predicted; meanwhile, identifying the pixel clock and predicting the timepoints is significantly aided if the signal is clear.
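a rough sketch of the capture loop for steps 1 through 4, as a python script. Pillow for building the prefab images, feh for fullscreen display, arecord for the audio-jack antenna, a 1920x1080 display on :0.0, and 30-second chunks are all assumptions here, not decisions:

```python
# rough capture-loop sketch. assumptions not in the notes: Pillow for the prefab
# images, feh for fullscreen display, arecord for the audio-jack "antenna",
# and a 1920x1080 display at :0.0. swap in whatever is actually on hand.
import random, subprocess, time, json
from PIL import Image

W, H = 1920, 1080
CHUNK_SECONDS = 30

def make_prefabs():
    """two images: white-over-black and black-over-white."""
    paths = []
    for idx, (top, bottom) in enumerate([(255, 0), (0, 255)]):
        img = Image.new("L", (W, H), bottom)
        img.paste(top, (0, 0, W, H // 2))   # fill the top half with the other shade
        path = f"prefab_{idx}.png"
        img.save(path)
        paths.append(path)
    return paths

def capture_chunk(chunk_id, image_path):
    """display one prefab and record RF (audio jack) plus desktop video for one chunk."""
    viewer = subprocess.Popen(["feh", "--fullscreen", image_path])
    start = time.time()  # precise starting time for later alignment
    audio = subprocess.Popen(["arecord", "-f", "S16_LE", "-r", "48000",
                              "-d", str(CHUNK_SECONDS), f"rf_{chunk_id}.wav"])
    video = subprocess.Popen(["ffmpeg", "-y", "-f", "x11grab", "-framerate", "30",
                              "-video_size", f"{W}x{H}", "-i", ":0.0",
                              "-t", str(CHUNK_SECONDS), f"screen_{chunk_id}.mkv"])
    audio.wait()
    video.wait()
    viewer.terminate()
    return {"chunk": chunk_id, "image": image_path, "start": start,
            "audio_rate": 48000, "video_rate": 30}

if __name__ == "__main__":
    prefabs = make_prefabs()
    log = []
    for chunk_id in range(20):             # ~10 minutes of labeled data
        label = random.randint(0, 1)       # step 1: randomly pick a prefab
        log.append({**capture_chunk(chunk_id, prefabs[label]), "label": label})
    with open("labels.json", "w") as f:
        json.dump(log, f, indent=2)
```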
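and a minimal sketch of a classifier for step 5, assuming pytorch; it chops a short window of raw RF samples into fixed-length patches as tokens and predicts which prefab was on screen. the patch length, model size, and 0.1-second window are placeholders:

```python
# minimal transformer-classifier sketch in PyTorch (framework choice is an
# assumption). input is a window of raw RF samples; output is 2-way logits.
import torch
import torch.nn as nn

class RFImageClassifier(nn.Module):
    def __init__(self, patch_len=48, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)               # one token per patch of samples
        self.pos = nn.Parameter(torch.zeros(1, 1024, d_model))   # learned positions, up to 1024 tokens
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, n_samples) raw RF window
        b, n = x.shape
        n_tokens = n // self.patch_len
        x = x[:, :n_tokens * self.patch_len].reshape(b, n_tokens, self.patch_len)
        h = self.embed(x) + self.pos[:, :n_tokens]
        h = self.encoder(h)
        return self.head(h.mean(dim=1))    # pool over tokens, then 2-way logits

# toy training loop on fake data, just to show the shape of the thing
if __name__ == "__main__":
    model = RFImageClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(100):
        x = torch.randn(8, 4800)           # 0.1 s windows at 48 kHz
        y = torch.randint(0, 2, (8,))      # which prefab was displayed
        loss = loss_fn(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
```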
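for the sum-periods idea (and for what "amplify the signal once the timepoints are predicted" might look like), a numpy sketch that folds the recording over an assumed 60 Hz refresh, so anything locked to the frame timing adds coherently while everything else averages toward zero:

```python
# period-summing sketch in plain numpy. the 60 Hz refresh and 48 kHz sample rate
# are assumptions; with the real pixel clock the fold would use that period instead.
import numpy as np

def fold_periods(signal, sample_rate=48_000.0, refresh_hz=60.0):
    """average the signal over consecutive display-frame periods."""
    period = sample_rate / refresh_hz              # samples per frame, e.g. 800
    n_periods = int(len(signal) // period)
    # a non-integer period would need resampling; this sketch assumes it divides evenly
    samples_per = int(round(period))
    folded = signal[:n_periods * samples_per].reshape(n_periods, samples_per)
    return folded.mean(axis=0)                     # coherent sum divided by n_periods

if __name__ == "__main__":
    # synthetic check: a weak 120 Hz square-ish component buried in noise
    sr, secs = 48_000, 30
    t = np.arange(sr * secs) / sr
    signal = 0.01 * np.sign(np.sin(2 * np.pi * 120 * t)) + np.random.randn(len(t))
    one_frame = fold_periods(signal, sr, 60.0)
    print(one_frame.shape)                         # shape (800,), one averaged frame period
```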