I posted an idea publicly at https://github.com/hwchase17/langchain/issues/1111#issue-1589016921 . Quoted below:
Hi,
I would like to propose a relatively simple idea to increase the power of the library. I am not an ML researcher, so I would appreciate constructive feedback on where the idea falls short.
If langchain’s existing caching mechanism were augmented with optional output labels, a chain could be made that would generate improved prompts by prompting with past examples of model behavior. This would be especially useful for transferring existing example code to new models or domains.
A first step might be to add an API interface and cache field for users to report back on the quality of an output or provide a better output.
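As a rough illustration of what that cache field might look like: the sketch below is purely hypothetical (the names `LabeledCache`, `LabeledEntry`, `rating`, and `better_output` are my own, and langchain's actual cache interface is different), but it shows the shape of the data a feedback API would need to collect.

```python
# Hypothetical sketch only -- not langchain's real cache API.
# A cache entry keeps the model output plus optional user feedback:
# a quality rating and, if the user provides one, a corrected output.
from dataclasses import dataclass
from typing import Optional


@dataclass
class LabeledEntry:
    prompt: str
    output: str
    rating: Optional[int] = None         # user-reported quality, e.g. 1-5
    better_output: Optional[str] = None  # user-supplied correction


class LabeledCache:
    """A prompt-keyed cache that also stores user-reported labels."""

    def __init__(self) -> None:
        self._entries: dict = {}

    def store(self, prompt: str, output: str) -> None:
        self._entries[prompt] = LabeledEntry(prompt, output)

    def lookup(self, prompt: str) -> Optional[str]:
        entry = self._entries.get(prompt)
        return entry.output if entry else None

    def label(self, prompt: str, rating: int,
              better_output: Optional[str] = None) -> None:
        # Attach feedback to an existing cached generation.
        entry = self._entries[prompt]
        entry.rating = rating
        entry.better_output = better_output

    def labeled_entries(self) -> list:
        return [e for e in self._entries.values() if e.rating is not None]
```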
Then, a chain could be designed and contributed that makes use of this data to provide improved prompts.
Finally, this chain could be integrated into the general prompt system, so that prompts which could perform better are improved automatically.
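The second and third steps above could be sketched roughly as follows. This is a minimal standalone illustration of the idea, not a proposed langchain implementation: it assumes labeled cache entries are available as plain records (the field names `rating` and `better_output` are hypothetical, carrying over the assumptions above) and builds an improved few-shot prompt from the highest-rated past examples, preferring user corrections over the model's original output.

```python
# Hypothetical sketch: assemble a few-shot prompt from labeled cache data.
# Each cached record is a dict with keys: prompt, output, rating, better_output.
def build_improved_prompt(cached, task_input, max_examples=3):
    """Prepend the best-rated past examples as demonstrations,
    using a user-supplied correction when one exists."""
    rated = [e for e in cached if e.get("rating") is not None]
    rated.sort(key=lambda e: e["rating"], reverse=True)
    demos = []
    for e in rated[:max_examples]:
        output = e["better_output"] or e["output"]
        demos.append(f"Input: {e['prompt']}\nOutput: {output}")
    return "\n\n".join(demos + [f"Input: {task_input}\nOutput:"])
```

A chain along these lines would then send the assembled prompt to the model; selecting examples by rating is just one simple policy, and the proposal leaves the selection strategy open.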
This would involve generalizing existing components to a degree comparable to what the project's subsystems already demonstrate.
What do you think?