[ot][project-idea][spam] langchain adapter
---------- Forwarded message ----------
Bark

Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like…
https://t.co/ZpvzNmW94Z
Read more at Twitter <https://twitter.com/_akhaliq/status/1650159967301672960>
----------
langchain - @LangChainAI
RT @logspace_ai: Join us for a @LangChainAI webinar next Wednesday to explore a universe of no-code agents and chains!
Read more at Twitter <https://twitter.com/LangChainAI/status/1650193029226168320>
---------- End forwarded message ----------

Langchain contains prefabbed prompts that work with openai's paid api. Prompts and models interact such that different prompts work better with different models. Langchain has an associated hub where such prompts can be collected, but it would be nice if langchain worked with other models out of the box - notably local models.

The task of generating good prompts is called prompt tuning. This can be automated if there is data to guide a prompt tuner. Langchain has a caching mode that collects such data.

It is easier to automatically generate soft prompts (embeddings that are vectors of floats) than hard or token prompts (basically normal text), but soft prompts generally only work on local models, since API interfaces usually don't accept raw embeddings as inputs.

ideas:

- [ ] glance through the langchain hub to see if anybody has been curating prompts for local models. if so, look for or start work on integrating those into default easy usage.
- [ ] work on developing a data generator that runs tasks through the various default prompts. this could be as simple as enabling caching and running examples.
- [ ] set up code to tune soft prompts for various popular local models.
  this could use a soft prompting, peft, or adapter library, or be done manually, since soft prompting is simply training where only a prepended embedding vector is trained. optionally the original default prompts could be included to make a more plug-and-play system, although note that the increased data size will require more resources to make use of the result.
- [ ] contribute code for allowing soft prompts or adapters with local models. there is an existing soft prompting library that could be referenced for interface ideas.
- [ ] look up hard prompting and find a way to reasonably accomplish this. it has the advantage of being directly user-introspectable and of working with remote apis, but since the results are less powerful and harder both to construct and to use than soft prompts, it may be a little less interesting.

i might start with data generation and try to present it reasonably well.

regarding prompting and adapting, the recent llama-adapter project did find a way to make a powerful-looking hybrid of the two that would work for other models as well.

there is an alternative to all of the above approaches: a human simply writes prompts for the models, the way the langchain developers did for openai's models.
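the data generator idea could be sketched even without langchain itself: run a set of example tasks through each default prompt against a model and record (template, input, output) triples for a later prompt tuner. everything below - the `echo_model` stub, the prompt templates, the record shape - is hypothetical illustration, not langchain's actual prompt set or cache format:

```python
# Hypothetical sketch of a prompt/completion data generator. Runs example
# tasks through each prefab prompt template and records (template, input,
# output) triples as JSON lines - roughly the data a prompt tuner would need.
import io
import json

DEFAULT_PROMPTS = {  # stand-ins for a library's prefab prompt templates
    "summarize": "Summarize the following text:\n{text}",
    "qa": "Answer the question:\n{text}",
}

def echo_model(prompt: str) -> str:
    """Stub LLM call: replace with a real local-model invocation."""
    return f"[model output for {len(prompt)} chars]"

def generate_records(tasks, model, prompts=DEFAULT_PROMPTS):
    """Run every task through every default prompt, yielding tuning data."""
    for name, template in prompts.items():
        for text in tasks:
            prompt = template.format(text=text)
            yield {"template": name, "input": text, "output": model(prompt)}

def dump_jsonl(records, fp):
    for rec in records:
        fp.write(json.dumps(rec) + "\n")

buf = io.StringIO()
dump_jsonl(generate_records(["the sky is blue"], echo_model), buf)
```

with real caching enabled, the same triples would accumulate as a side effect of simply running examples.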
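as a toy illustration of the claim that soft prompting is simply training where only a passed embedding vector is trained: below, a frozen linear "model" over a pooled embedding never changes, and gradient descent updates only the prepended soft-prompt vector. the shapes, learning rate, and pooled architecture are invented for the sketch; a real setup would prepend virtual-token embeddings inside a transformer, e.g. via a peft-style library:

```python
# Toy soft-prompt tuning: the "model" (a frozen random linear map applied to
# a pooled embedding) is never updated; only the soft prompt vector is.
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 8, 5
W = rng.normal(size=(n_classes, d))   # frozen model weights
x = rng.normal(size=(3, d))           # frozen embeddings of the "real" tokens
target = 2                            # desired output class

soft_prompt = np.zeros(d)             # the only trainable parameter
lr = 0.2

def forward(p):
    pooled = (p + x.sum(axis=0)) / (1 + len(x))   # mean over prompt + tokens
    logits = W @ pooled
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

losses = []
for _ in range(300):
    probs = forward(soft_prompt)
    losses.append(-np.log(probs[target]))         # cross-entropy loss
    grad_logits = probs.copy()
    grad_logits[target] -= 1.0                    # d(loss)/d(logits)
    grad_prompt = (W.T @ grad_logits) / (1 + len(x))
    soft_prompt -= lr * grad_prompt               # update only the prompt
```

the same loop with a remote api in place of `W` is impossible, since there is no way to pass the float vector in - which is why this approach is limited to local models.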
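hard prompting, by contrast, only ever exchanges text, so even a crude search over candidate prompt strings works against a remote api. the candidate fragments and the scoring stub below are made up; a real scorer would measure model accuracy on held-out examples:

```python
# Hypothetical hard-prompt search: enumerate candidate text prompts from
# fragments and keep the highest-scoring one. Only strings cross the model
# boundary, so this works with remote APIs as well as local models.
import itertools

PREFIXES = ["", "You are an expert. "]
INSTRUCTIONS = ["Summarize:", "Write a short summary of the text below:"]
SUFFIXES = ["", "\nBe concise."]

def score(prompt: str) -> float:
    """Stub metric: replace with task accuracy of a model using this prompt."""
    return len(prompt) / 100.0  # pretend longer prompts score better

candidates = ["".join(parts)
              for parts in itertools.product(PREFIXES, INSTRUCTIONS, SUFFIXES)]
best = max(candidates, key=score)
```

the result stays user-introspectable (it is just text), at the cost of a much coarser search space than gradient descent over a float vector.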
participants (1)
-
fuzzyTew