While I was looking at this:

- I found CLNeRF (https://github.com/IntelLabs/CLNeRF), which says it uses a technique called generative replay to maintain NeRFs of scenes that change live, via continual learning. Good for holo-calls. A brief search for "generative replay" turned up a paper going back to 2017.
- There is a lot more work on video understanding, tracking, segmentation, etc. I think I settled on those three approaches because they had clear online demos and simple parallelism, but I'm not sure. One of the most recent papers I ran into related to segmentation for motion classification. [They don't all use SAM. There are some heavily researched niches, such as aerial photography and medicine, that look far ahead of general-purpose efforts but likely transfer in some manner.]
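For reference, the core idea of generative replay: instead of storing old data, keep a generative model of past tasks and sample pseudo-data from it while training on a new task, so the learner doesn't forget. A minimal toy sketch of that idea (not CLNeRF's actual method; all data, class layout, and the per-class-Gaussian "generator" here are made up for illustration, and the classifier is a simple nearest-class-mean stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: task 1 holds classes 0/1, task 2 holds classes 2/3.
def make_task(means, label_offset=0):
    X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in means])
    y = np.repeat(np.arange(len(means)), 100) + label_offset
    return X, y

X1, y1 = make_task([(0.0, 0.0), (3.0, 0.0)])                  # task 1
X2, y2 = make_task([(0.0, 3.0), (3.0, 3.0)], label_offset=2)  # task 2

# "Generative model" of task 1: one Gaussian per class, fitted before
# task 1's data is discarded (a stand-in for a learned generator).
gen = {c: (X1[y1 == c].mean(0), X1[y1 == c].std(0)) for c in np.unique(y1)}

# Replay: sample pseudo-data from the generator instead of storing X1.
Xr = np.vstack([rng.normal(mu, sd, size=(100, 2)) for mu, sd in gen.values()])
yr = np.repeat(list(gen.keys()), 100)

# Train a nearest-class-mean classifier on task 2 plus replayed task 1.
X = np.vstack([X2, Xr])
y = np.concatenate([y2, yr])
means = {c: X[y == c].mean(0) for c in np.unique(y)}

def predict(x):
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Accuracy on the (discarded) task-1 data stays high because the
# replayed samples stood in for it during the task-2 update.
acc_old = np.mean([predict(x) == c for x, c in zip(X1, y1)])
print(f"task-1 accuracy after task 2: {acc_old:.2f}")
```

In the 2017-era formulation the generator is itself a neural network (e.g. a GAN) retrained alongside the solver at each task; the Gaussian here just keeps the sketch self-contained.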