I did a rough first-draft translation of general complex inference into machine learning (arrived at introspectively):
3 models (or prompts):
1. a generative or simulation model for a context
2. a classifier that identifies the context
3. a trainee that generates missing system representations
Models 1 and 2 perform forward inference on the output of 3, and the result is compared with known data to form a loss; and bam, you have a system that can depict plausible underlying causes in whatever mode you train it on. It’s cool because it’s similar to human thought. Model 3 probably ends up learning something like a decision tree, since multiple candidate outputs get compared. (Guessing.) A minimal sketch of the loop is below.
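Here’s roughly what I mean as a toy PyTorch sketch. Everything in it is made up for illustration: the frozen linear `simulator` stands in for a pretrained model 1, the random frozen `classifier` stands in for model 2, and the “true process” is just a known linear map so the loop has something to recover.

```python
# Toy sketch of the three-model loop: only the trainee (model 3) gets
# gradients; models 1 and 2 are frozen and just provide the forward path.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Ground truth: observations y are produced from hidden causes z.
true_W = torch.randn(4, 8)

def true_process(z):
    return z @ true_W

# Model 1: frozen generative/simulation model (stand-in for a pretrained one).
simulator = nn.Linear(4, 8)
with torch.no_grad():
    simulator.weight.copy_(true_W.T)
    simulator.bias.zero_()
for p in simulator.parameters():
    p.requires_grad_(False)

# Model 2: frozen classifier scoring whether a proposed cause fits the context.
classifier = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
for p in classifier.parameters():
    p.requires_grad_(False)

# Model 3: the trainee, which proposes the missing representation from data.
trainee = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(trainee.parameters(), lr=1e-2)

for step in range(500):
    z = torch.randn(64, 4)            # hidden causes we pretend not to see
    y = true_process(z)               # known data (observations)

    z_hat = trainee(y)                # 3: propose the missing representation
    y_hat = simulator(z_hat)          # 1: forward inference from the proposal
    plausible = torch.sigmoid(classifier(z_hat))  # 2: in-context score

    # Loss: forward-simulated output must match the known data,
    # plus a small (toy) plausibility penalty from the classifier.
    loss = nn.functional.mse_loss(y_hat, y) + 0.01 * (1 - plausible).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```

The whole path here is differentiable, so plain backprop trains the trainee to invert the simulator; with prompted language models instead of MLPs you’d lose that, which is where the RL framing below comes in.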
For example, you could use this to predict the source code for a binary without ever looking inside it (easier with an RL environment, or just a user hand-holding it). See the sketch after this list:
1. prompt a language model to execute the candidate source code
2. prompt a language model to check that it is correct code and consistent with any other known data
3. soft-tune a prompt to produce the source
(oops)
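A hedged sketch of step 3, using a frozen GPT-2 and a REINFORCE-style update on a soft prompt (leaning on the RL-env framing above, since sampled text isn’t differentiable). The `reward` function is a hypothetical placeholder: in the real loop it’s where you’d run prompts 1 and 2 and score agreement with the binary’s known behavior.

```python
# Soft-tune a prompt on a frozen LM to emit source, updated by policy gradient.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
for p in lm.parameters():
    p.requires_grad_(False)

# Trainable soft prompt: 8 virtual tokens prepended to the LM input.
embed = lm.get_input_embeddings()
soft_prompt = torch.nn.Parameter(
    embed.weight[:8].detach().clone()
    + 0.01 * torch.randn(8, embed.embedding_dim)
)
opt = torch.optim.Adam([soft_prompt], lr=1e-3)

def sample_source(max_new=24):
    """Sample a candidate source snippet token by token; keep log-probs."""
    inputs = soft_prompt.unsqueeze(0)
    tokens, logps = [], []
    for _ in range(max_new):
        out = lm(inputs_embeds=inputs)
        dist = torch.distributions.Categorical(logits=out.logits[0, -1])
        t = dist.sample()
        logps.append(dist.log_prob(t))
        tokens.append(t.item())
        inputs = torch.cat([inputs, embed(t).view(1, 1, -1)], dim=1)
    return tokens, torch.stack(logps)

def reward(text):
    # PLACEHOLDER: here you would prompt model 1 to "execute" the text and
    # model 2 to check it, then score agreement with the binary's behavior.
    return float("def" in text)  # toy reward: does it look like code at all

for step in range(20):  # tiny demo budget
    tokens, logps = sample_source()
    r = reward(tok.decode(tokens))
    loss = -(r * logps.sum())  # REINFORCE: raise log-prob of rewarded samples
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Only the 8 soft-prompt vectors get updated; the LM stays frozen, so models 1, 2, and 3 can even be the same base model behind three different prompts.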