[ml] langchain runs local model officially

efc at swisscows.email
Thu Apr 6 01:43:19 PDT 2023


I run alpaca.cpp on a laptop with 8 GB of RAM and the 7B model. It works 
pretty well.

I would love to find a project that would let me move up to the 13B 
model, but I have not yet found one that can run it in only 8 GB of 
RAM.


On Wed, 5 Apr 2023, Undescribed Horrific Abuse, One Victim & Survivor of Many wrote:

> https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html
>
> note:
> - llama.cpp is cpu-only; the huggingface backend can run the same model
> on gpu, though it is not always faster
> - llama.cpp is experiencing political targeting and some upheaval
> in its optimization code
> - there are newer and more powerful models that can be loaded just
> like gpt4all, such as vicuna
>
> langchain is a powerful yet uncomplicated language-model frontend
> library supporting openai and local backends; it lets you code
> autonomous agents that use tools and access datastores. see also
> llama-index.
>
>
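For anyone wanting to try the integration linked above, a minimal sketch 
of loading a local gpt4all model through langchain might look like the 
following. The model path is hypothetical -- point it at whatever ggml 
weights you have downloaded -- and the exact class names follow the 
langchain docs of this period, so check your installed version.

```python
# Sketch of the langchain + gpt4all integration (per the docs URL above).
# Assumes: pip install langchain pygpt4all, and a downloaded ggml model.
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# hypothetical local path to the quantized weights
llm = GPT4All(model="./models/ggml-gpt4all-model.bin")

chain = LLMChain(prompt=prompt, llm=llm)
print(chain.run("What is a cypherpunk?"))
```

The same chain works with other langchain llm wrappers (e.g. the 
llama.cpp one), so swapping backends is mostly a one-line change.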



More information about the cypherpunks mailing list