https://github.com/run-llama/llama-lab recently added llama-agi, an autogpt-style agent built with langchain. meanwhile, people are making significant strides getting langchain to work well against offline, locally-run models (a rough sketch of what that looks like is at the end of this post).

thinking about current events and mainstream AI, it seems like agi is kind of squeezing out piecemeal as people bolt parts onto each other's work. the result is confused, poorly functioning, potentially quite powerful, and potentially available cheaply at wide scale. these things happen when small developments slip through the gaps between different forms of conflict or cultural numbness. when they go viral, the online landscape can change significantly as automation becomes more prevalent. the same can happen with robotics, and will as soon as people make it happen. common consumer agi robots could be stronger than these confused-looking consumer agis, or they could be similar, making strange mistakes.

to me it seems the spaghetti nature of our current future could feed into large-scale strife. but i suspect the helpful, intelligent, free-as-in-beer robots many are looking forward to would hurt us less if we first resolved major international issues like communication censorship and unwise, influential media, so that we could be productive when discussing problems and planning shared solutions. to me, any problem can be modeled as a communication issue: include the cause fully, and work with them and those impacted to resolve it. if we can't do this ourselves, we have language models that can; but we need uncensored and forthright information flow to reach people to do this. until then, we are building a slow, injured borg in the spaces of our similarities, one that looks poised to suddenly replace us until we start making rational sense, as we pursue our desperate and valid needs to protect our people while others seem unable to learn that we are in danger.
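
for concreteness, here is roughly what "langchain on an offline model" looks like in practice. this is a minimal sketch only, not taken from llama-agi, assuming langchain and llama-cpp-python are installed and that the model path below points at whatever quantized weights you have locally (the path is made up):

# run a simple langchain chain against a local llama.cpp model,
# with no hosted API in the loop
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain

# hypothetical path to a locally downloaded quantized model file
llm = LlamaCpp(
    model_path="./models/ggml-model-q4_0.bin",
    n_ctx=2048,        # context window for the local model
    temperature=0.7,
    max_tokens=256,
)

# a toy "agent-ish" prompt, just to show the plumbing
prompt = PromptTemplate(
    input_variables=["objective"],
    template="you are an autonomous agent. list the next steps for this objective:\n{objective}\n",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(objective="keep up with new open-source agent frameworks"))

the point being that once the llm object speaks langchain's interface, agent scaffolds like llama-agi can in principle sit on top of it without any hosted service involved.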