
So if we consider _example pairs_, that seems like the baseline. We could leave off good/bad labels and just say we have some number of _example pairs_ that we want to work with. That's basically the baseline of few-shotting. [...]
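To make that baseline concrete, here is a minimal sketch (names like `ExamplePair` and `render_fewshot_prompt` are hypothetical, not from any particular library): example pairs are just input/output records that get rendered straight into a few-shot prompt, using the same toy numbers as below.

```python
# Minimal sketch of the few-shot baseline: a list of example pairs
# rendered into a prompt, followed by the query we want completed.
from dataclasses import dataclass

@dataclass
class ExamplePair:
    input: tuple   # e.g. (1, 2)
    output: int    # e.g. 3

def render_fewshot_prompt(pairs: list[ExamplePair], query: tuple) -> str:
    """Turn example pairs plus a new query into a plain few-shot prompt."""
    lines = [f"input={p.input} output={p.output}" for p in pairs]
    lines.append(f"input={query} output=")
    return "\n".join(lines)

if __name__ == "__main__":
    pairs = [ExamplePair((1, 2), 3), ExamplePair((5, 4), 9)]
    print(render_fewshot_prompt(pairs, (6, 7)))
    # input=(1, 2) output=3
    # input=(5, 4) output=9
    # input=(6, 7) output=
```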
So what is missing if the few-shot system's goal is to prompt-tune another model to perform the prediction task reliably? What information is present? A prompt given to another model, and the prediction the other model made. So now we have another set of information: prompts for models, and the resulting outputs. So possibly:

(a) pairs: data to perform few-shotting on
(b) pairs or triples: prompts for models and the resulting outputs

We can combine these, maybe using two of A and one of B, to prompt tune? Because if we have (1,2) -> 3; (5,4) -> 9, and we want SillyLanguageModel to do this correctly without any context, we could have data like "report the deep essence of the data as a number" -> 37, or "subtract one from another" -> 1. OK, we'll need to include the pairs too:

pair info
0: input=(1,2) output=3
1: input=(5,4) output=9

prompt info
0: model=SillyLanguageModel pair=0 prompt="report the deep essence of the data as a number" output=37
1: model=SillyLanguageModel pair=1 prompt="subtract one from another" output=1

This form only processes one element at a time, but that's okay for now, because we still don't have the third use, the one that generates the prompts. Something ... like ... uhhh ohhhh! It's the data already!

So we have a result where we (a) prompted the model, (b) provided input, and (c) got some output. We want to generate such a result for different input, so we condition on the _input and output_ and _output the prompt_. (Every time I do this it gets harder >_>)

superpairs
0: input=(1,2),37 output="report the deep essence of the data as a number"
1: input=(5,4),1 output="subtract one from another"

And then when we few-shot it with

2: input=(6,7),13

it generates output="add one to another", which prompt-tunes correctly for the original pair data.
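A rough sketch of the whole scheme on the same toy data; all class and function names here (`PairInfo`, `PromptInfo`, `SuperPair`, `build_superpairs`) are made up for illustration. It shows the flip described above: prompt records get re-keyed as (input, observed output) -> prompt, and then a new query of (input, desired output) asks for a candidate prompt.

```python
# Sketch only: the three record types from the notes, plus the inversion that
# turns (pair, prompt result) records into "superpairs" whose target is the prompt.
from dataclasses import dataclass

@dataclass
class PairInfo:                 # (a) data to few-shot on
    input: tuple
    output: int

@dataclass
class PromptInfo:               # (b) a prompt given to a model and what came back
    model: str
    pair: int                   # index into the pair table
    prompt: str
    output: int

@dataclass
class SuperPair:                # condition on (input, observed output), emit the prompt
    input: tuple
    observed_output: int
    prompt: str

def build_superpairs(pairs: list[PairInfo], prompts: list[PromptInfo]) -> list[SuperPair]:
    """Flip each prompt record: the prompt becomes the target, keyed by input + output."""
    return [SuperPair(pairs[p.pair].input, p.output, p.prompt) for p in prompts]

def render_superpair_prompt(supers: list[SuperPair], query_input: tuple, desired_output: int) -> str:
    """Few-shot over superpairs: ask for the prompt that would map input -> desired output."""
    lines = [f'input={s.input},{s.observed_output} output="{s.prompt}"' for s in supers]
    lines.append(f"input={query_input},{desired_output} output=")
    return "\n".join(lines)

if __name__ == "__main__":
    pairs = [PairInfo((1, 2), 3), PairInfo((5, 4), 9)]
    prompts = [
        PromptInfo("SillyLanguageModel", 0, "report the deep essence of the data as a number", 37),
        PromptInfo("SillyLanguageModel", 1, "subtract one from another", 1),
    ]
    supers = build_superpairs(pairs, prompts)
    # Query: what prompt would take (6, 7) to 13?  We'd hope for "add one to another".
    print(render_superpair_prompt(supers, (6, 7), 13))
```

The only real move is that the prompt shifts from the conditioning side to the target side; the rendering format doesn't matter beyond being consistent across records.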