20 Oct
2024
5:19 a.m.
what if we had an urge to use the llama model but it wasn't set up anywhere? maybe we could offload everything, and oh, there are small llama models now that fit on tiny gpus :s anyway! let's try using a big one and, like, commenting on all the numbers as they go through the model ;p [honestly i bet if somebody kept that up for a few days they would figure out how to downscale it some, could be wrong]
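a minimal sketch of the "commenting on all the numbers" idea, using pytorch forward hooks — the toy two-layer net below is just a stand-in assumption so the snippet runs anywhere; the same hook pattern attaches to any `nn.Module`, including an actual llama checkpoint:

```python
import torch
import torch.nn as nn

# stand-in for the real model: the point is the hook pattern, not the net.
# swapping this for a loaded llama leaves the rest of the code unchanged.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 16),
)

stats = []

def comment_on(name):
    # forward hook: jot down simple stats for each layer's output tensor
    # as the numbers flow through.
    def hook(module, inputs, output):
        stats.append((name, float(output.mean()), float(output.abs().max())))
    return hook

for name, module in model.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(comment_on(name))

x = torch.randn(4, 16)
_ = model(x)

for name, mean, amax in stats:
    print(f"{name}: mean={mean:+.4f} max|h|={amax:.4f}")
```

watching those per-layer means and maxima for a while is roughly how you'd notice which layers tolerate lower precision — i.e. the first hints of how to downscale it.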