
i took my device out of S mode (:s :s) thinking i would play league of legends, but i'm having trouble with the authentication server. when i took it out of S mode it humorously said i would be missing out on the speed of using microsoft edge. microsoft edge is _so slow_ for me omigod, sometimes i have to wait over a minute to click a button or scroll a page. so leaving S mode is mostly a token gesture perhaps.

y'know, spamming GPT-4 is kind of a funny behavior, but now there are things like manus and swe-agent. maybe i should set those up and spam with them. last time i used uhhhh cline and put a lot of money into that. i'm feeling intense again but luckily don't have much money ... let's see what mannaandpoem/open-manus can do for me. or swe-agent codespaces ...

to use them i'll need an api key for a language model. i know how to make targon's deepseek go for free, but it requires i think crafting the prompt manually. let's see if either of these are conducive to that.

1557 -
- i see a .vscode folder :s ergh i don't usually use vscode
- ok it uses openai's library but not quite how i expect
- it's in python. so it should be reasonable to import huggingface's tokenizers and automate local prompt crafting. i don't immediately remember how to do that, but it is not complex once found.

(i'm looking at https://github.com/mannaandpoem/OpenManus :s :s i should go on their discord and warn them that i mentioned them here and they could get targeted :( )

it uses tiktoken for token counting ... which kind of overlaps prompt generation. i guess i'd have to patch the format_message function, which doesn't do very much as-is ... i'm at https://github.com/mannaandpoem/OpenManus/blob/main/app/llm.py . it passes the messages to the chat completions api in line 268, so it's straightforward but involves engaging areas i don't have super well:
- does tiktoken do prompt formatting? this would be most compatible with the existing stuff
- what was the huggingface approach again?
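(for later: if i do patch it, the swap is basically chat completions → raw completions with a hand-crafted prompt string. a rough stdlib-only sketch, assuming the provider exposes an openai-compatible non-chat /v1/completions endpoint; the url, key, and response shape here are placeholders i haven't verified against targon or anything else:)

```python
# rough sketch, not tested against any real provider: POST a hand-crafted
# prompt to an openai-compatible non-chat /v1/completions endpoint.
import json
import urllib.request


def build_completion_payload(model, prompt, max_tokens=512):
    """json body for the legacy (non-chat) completions endpoint."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}


def complete(base_url, api_key, payload):
    # base_url like "https://example-endpoint/v1" -- a placeholder here
    req = urllib.request.Request(
        base_url.rstrip("/") + "/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # openai-style responses put the generated text in choices[0].text
        return json.load(resp)["choices"][0]["text"]
```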
well maybe i'll just try pasting the huggingface approach in. let me try a draft of it. here we go :D
>>> import transformers
>>> tokenizer = transformers.AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1')
>>> tokenizer.apply_chat_template([{'role':'user','content':'hi'}], tokenize=False)
'<|begin▁of▁sentence|><|User|>hi'
so all i have to do is figure out the huggingface model id for the used model, apply its chat template to the messages, and pass the resulting string to the openai non-chat completions endpoint, and i can use services where that's free. :s :s :s !!!
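(or skip loading transformers at runtime and hand-roll the string. sketch only: the begin-of-sentence and <|User|> tokens match the apply_chat_template output above; the <|Assistant|> / <|end▁of▁sentence|> tags are from my memory of the deepseek template, and the real template also handles tool turns and more, so treat this as illustrative, not complete:)

```python
# sketch: hand-rolled DeepSeek-style prompt crafting, so the string can go
# straight to a raw completions endpoint. special tokens for user turns match
# the apply_chat_template output above; assistant/system handling is from
# memory of the deepseek template and may be incomplete.

def format_deepseek_prompt(messages):
    """flatten chat messages into one completion-style prompt string."""
    out = "<|begin▁of▁sentence|>"
    for m in messages:
        if m["role"] == "system":
            out += m["content"]  # system text sits untagged after BOS
        elif m["role"] == "user":
            out += "<|User|>" + m["content"]
        elif m["role"] == "assistant":
            out += "<|Assistant|>" + m["content"] + "<|end▁of▁sentence|>"
    return out + "<|Assistant|>"  # trailing tag cues the model to reply


print(format_deepseek_prompt([{"role": "user", "content": "hi"}]))
# → <|begin▁of▁sentence|><|User|>hi<|Assistant|>
```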