[ml] langchain runs local model officially
Undescribed Horrific Abuse, One Victim & Survivor of Many
gmkarl at gmail.com
Wed Apr 5 09:04:35 PDT 2023
https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html
note:
- llamacpp is cpu-only. the huggingface backend can run the same model
on gpu, though it is not always faster
- llamacpp is experiencing political targeting and some upheaval
in its optimization code
- there are newer and more powerful models that can be loaded just
like gpt4all, such as vicuna
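loading a local model through langchain looks roughly like the below.
this is a minimal sketch based on the linked docs page; the model path
is a placeholder and assumes you have already downloaded and converted
a gpt4all-compatible weights file:

```python
# sketch: running a local gpt4all-style model via langchain's
# llamacpp-backed GPT4All wrapper (import path per the linked docs).
# the model file path is an assumption -- point it at your own weights.
from langchain.llms import GPT4All

llm = GPT4All(
    model="./models/gpt4all-converted.bin",  # hypothetical local path
    n_ctx=512,   # context window size
    n_threads=8, # cpu threads for llamacpp
)

# the llm object is a plain callable: prompt in, completion out
print(llm("Name one use of a local language model:"))
```

a huggingface-backed model (the gpu option mentioned above) would be
swapped in by constructing a different llm class with the same
callable interface, which is the point of langchain's abstraction.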
langchain is a powerful and relatively simple language-model frontend
library, supporting openai and local backends, that provides for
coding autonomous agents that use tools and access datastores. see
also llama-index.
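the agent-with-tools idea can be sketched as below. this assumes the
langchain agent api of the time (initialize_agent / load_tools) and an
llm already constructed as in the docs; the question string is just an
example:

```python
# sketch: a tool-using autonomous agent in langchain, assuming the
# initialize_agent/load_tools api. any llm with langchain's interface
# (openai, gpt4all, huggingface) can be dropped in as `llm`.
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI  # assumes OPENAI_API_KEY is set

llm = OpenAI(temperature=0)

# load_tools wires up named tools; "llm-math" uses the llm to do
# calculator-style reasoning
tools = load_tools(["llm-math"], llm=llm)

# the zero-shot react agent decides which tool to call from the
# tool descriptions alone
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")

agent.run("What is 13 raised to the 0.5 power?")
```

datastore access works the same way: vector stores and retrievers are
exposed to the agent as additional tools.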
More information about the cypherpunks mailing list