relevance is super nice for me to see in front of me and puzzle about, because i have severe cognitive issues with it. some days ago i made a summary of the steps of connecting relevance, which drew a helpful analogy between cognition and prompts.

another idea is an equivalence between trained models and prompted models: when you run a relevance strategy by prompting pretrained models, this can of course be transformed into a larger architecture where each prompt is a component of a model, and the same could be transformed back. holding this equivalence in mind helps reveal different relevance structures and properties like parallelism, order, and feedback.

for retrieval, a basic idea is to run over (or process in parallel) likely candidates looking for relevant summaries, then run over them again with the new information in context to pick up anything that wasn't clearly relevant before, until nothing new is added. additionally, each prompt or node could include reports on how to move forward: whether or not to look again, for example. (i may have misplaced some of these ideas; they might belong near the top of this paragraph or at the end of the previous one.)

for storing memories, a system could go over an existing store and judge whether each part is a good place for new summarized information, or alternatively judge that a new part should be made, and then use normal retrieval.

in general i think it would be very nice to make a general relevance library, agnostic to strategy and to whether it uses prompted pretrained models, model training, or some other approach. :)
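the run-over-candidates-until-nothing-new-is-added retrieval idea above could be sketched roughly like this. this is only a minimal illustration: `judge` stands in for whatever actually decides relevance (e.g. a prompted pretrained model), and its `(is_relevant, look_again)` report shape is an invented placeholder for the per-node "how to move forward" reports, not a real api.

```python
def retrieve_to_fixpoint(candidates, query, judge):
    """Iteratively pick relevant candidates until a pass adds nothing new.

    `judge(query, context, candidate)` is a hypothetical callable (e.g. a
    prompted model) returning (is_relevant, look_again) for one candidate.
    Candidates picked on one pass become context for later passes, so
    items that only look relevant given the new information get caught.
    """
    selected = []                 # summaries judged relevant so far (the context)
    remaining = list(candidates)
    changed = True
    while changed:
        changed = False
        still_remaining = []
        for cand in remaining:
            is_relevant, _look_again = judge(query, selected, cand)
            if is_relevant:
                selected.append(cand)   # new info is in context for the rest
                changed = True
            else:
                still_remaining.append(cand)  # may become relevant later
        remaining = still_remaining
    return selected
```

with a toy token-overlap judge, a candidate that shares nothing with the query but does share a token with an already-selected summary gets picked up on a later pass, which is the behaviour described above.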
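the memory-storing idea (judge whether each existing part of the store is a good home for new summarized information, otherwise make a new part) might look something like this. `score` is a hypothetical judge, and the list-of-lists store and the 0.5 threshold are invented for the sketch.

```python
def store_memory(store, new_info, score, threshold=0.5):
    """Place `new_info` in the best-fitting part of `store`, or open a new part.

    `store` is a list of lists (the parts of the memory store); `score(part,
    new_info)` is a hypothetical judge (e.g. a prompted model) returning a
    fit score in [0, 1]. Returns the index of the part used.
    """
    if store:
        best = max(range(len(store)), key=lambda i: score(store[i], new_info))
        if score(store[best], new_info) >= threshold:
            store[best].append(new_info)  # good enough home found
            return best
    store.append([new_info])              # no good home: make a new part
    return len(store) - 1
```

after storing, normal retrieval (as above) runs over the parts unchanged, which is part of the appeal of this split.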
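one way the general relevance library could stay agnostic to strategy and backend is to define a small judging interface and write strategies only against it. this is an invented illustration of the shape such a library might take, not an existing package; the names are made up.

```python
from typing import Protocol, Sequence


class RelevanceJudge(Protocol):
    """Anything that can judge relevance: a prompted pretrained model, a
    trained scoring model, or a plain heuristic. (Invented interface.)"""

    def relevant(self, query: str, context: Sequence[str], item: str) -> bool: ...


class KeywordJudge:
    """A trivial backend: relevant iff the item shares a token with the query."""

    def relevant(self, query: str, context: Sequence[str], item: str) -> bool:
        return bool(set(query.split()) & set(item.split()))


def select(judge: RelevanceJudge, query: str, items: Sequence[str]) -> list:
    """A strategy (here: one linear pass) written only against the interface."""
    picked: list = []
    for item in items:
        if judge.relevant(query, picked, item):
            picked.append(item)
    return picked
```

swapping `KeywordJudge` for a prompting backend, or swapping the linear pass for a parallel or fixpoint strategy, would then be independent choices, which matches the parallelism/order/feedback framing above.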