[ot][spam][crazy][rambling]
sqrt(pi) = ? simplify challenge. addition is a generic concept in any number system: . : :. :: :': :': . :': : :': :. :': :: :': :': -- ten; two fives make ten
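on the sqrt(pi) challenge: it doesn't simplify, as far as I know; here's a quick numeric check (plain Python, nothing from this thread assumed):

```python
import math

# sqrt(pi) has no simpler closed form: pi is transcendental, and if
# sqrt(pi) were algebraic its square would be algebraic too. Numerically:
print(math.sqrt(math.pi))  # → 1.7724538509055159
```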
this body visited a friendly political group yesterday, not the usual mean long-term political group
I played a video game for a few weeks. I think I started in order to get out of spasmodic states filled with muscle contractions, and something supported it. Then I found I could move the game near work and get a little work done, and that's basically how I got the most recent work done on my kfsn4 boards: my coreboot branch for them now displays serial logs, woohoo. both boards are now broken
There's a chance I'll port that some day, you never know. I'd have some satisfaction just contributing my serialice change. I was thinking of the serialice change alongside the crashes I'm getting while trying to test it. Troubleshooting suggests the crashes could be related to the old qemu version inside serialice.
i'm remembering that I built qemu out of serialice's tree, just as a general emulator. I could instead install the qemu package, which is likely newer. nice to update part of a plan.
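a minimal sketch of checking for that, assuming the packaged binary is named qemu-system-x86_64 (the binary name and the whole check are my guess, not anything from serialice's docs):

```python
import shutil

# Look for a distro-installed qemu on PATH, as an alternative to the
# copy built out of serialice's tree. The binary name is an assumption.
path = shutil.which("qemu-system-x86_64")
print(path or "no packaged qemu on PATH; install it via the package manager")
```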
more number system: we can make the numbers look like hands ' " '" "" "'; ''';, ''';' ''';" ''';"' ''';"" ''';;''' mm, the last semicolon's bottom goes the wrong way . : :. :: :; ...
i'm confused that nobody is working at the office, but the intern's neighbor continues working at the desk. it is two different realities: in one, nobody is working; in another, somebody is.
because of our cognitive struggle, the reality difference came through into narration
god mind controlled a bird on this property. every day, for almost the whole day, it flies into the windows while i'm indoors. it is to remind me that I yearn to go outdoors. the bird takes breaks to eat worms
this body is sitting up. it's fun to be sitting up. when you have unpleasant experiences in a particular posture, you can establish associated movements and things, and it can get more unpleasant in some ways. another posture helps. worried I will lie down again soon. I was in the middle of participating in a possible pyo3 bug report; the ball was in my court to reply with tests. i'm more paranoid that my weird failures could be from a compromise. worried I could be hurt by reporting them. kinda confusing.
I imagine reporting a bug from a compromise and then a mean entity walking up to me and being like "you know that is from the compromise. don't make waves." and then hurting me badly. but I guess it isn't actually like that. hard and rare to think about.
[most of the issues I find are from my own spasmodic mistakes, but:
- there is often a big delay before identifying this, and the preceding information isn't always consistent, and
- my spasmodic mistakes are habits encouraged by the bad thing
it's funny to imagine associating with a group who doesn't or might not like you, in a submissive way. I infer far more people are in that situation now than in the past.]
regardless, when I am super confused it is generally nice to relax by troubleshooting. it's like adding small numbers, but feels more useful. I tend to troubleshoot really small things.
lots of clear phrases :) good stuff around mind habits that work in the world. such habits are good stuff.
Karl Semich's Resume

Why I Want To Work At MindControlledBusiness
I am a really great worker. I love being mind controlled. I direct all my disgusting complaints to an online mailing list and keep them out of the office. I love buying office supplies! I have new clean shoes.

Work History
2020-2022: Spamming a mailing list of hackers. I used to write computer programs all day, so I am very qualified to interact with technical people.
2018-2020: Undergoing treatment for cancer and psychosis. I spent a year having chemotherapy and can relate well with people dying of terminal illnesses. I'm also extensively skilled at chatting with people completely disconnected from reality.
2016-2018: Waiting for my truck to be repaired. I have the patience of mountains.

Some of the above items are listed out of order. I apologise for this and attach $200 in up-front wages in the hope of helping make it right.

References:
coderman <coderman@gmail.com>
douglas lucas <dal@riseup.net>
zig the n.ig <ziggerjoe@yandex.com>
Resume, take 2

Name: Coping Habit, unisolated

I have been purchasing from MindControlledBusiness for years now, and your products are simply wonderful. I've always dreamed of scrubbing your floors or toilets, or standing like one of your checkout heroes.

Work History:
2022-2020: I helped many people handle difficult issues under the most stressful conditions. I have a way with people and can make any conflict end up profitable.
2020-2018: I studied under
----
How to play chess:

Generative Text Model: "How do u play chess"
Human: "Chess is played on an 8x8 square checkerboard with 6 unique piece types: for each side there is the queen, the king, two bishops, two knights, two rooks, and eight pawns. One side has black pieces and is called black; the other has white pieces and is called white."
Generative Text Model: "ok let me update my prompt so i'll talk as if I know that"
Generative Text Model: " !addtoprompt $last_user_message"
Generative Text Model: "How do u play chess"
Human: "You put a space before your command. Redo the last exchange to accommodate that error"
Generative Text Model: " !sendmessage oops thank you"
Generative Text Model: "How do the pieces move?"
Human: "Each piece type moves a different way."
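the command convention in that exchange could be parsed something like this sketch (the rule that a leading space demotes a command to plain chat text is inferred from the transcript, not specified anywhere; the command names are just the ones the transcript happens to use):

```python
# Lines beginning with "!" are bot commands; a leading space, as in
# " !addtoprompt ...", makes the line ordinary chat text instead.
def parse(line):
    if line.startswith("!"):
        name, _, arg = line[1:].partition(" ")
        return ("command", name, arg)
    return ("text", line.strip())

print(parse("!addtoprompt $last_user_message"))
print(parse(" !sendmessage oops thank you"))  # leading space: treated as text
```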
note: my info is out of date. last I noticed, people were finally making the bots use graph-based knowledge stores, and that was some months ago. wonder if I can find info on that
this says it drops the parameter count by 25x using retrieval: https://www.deepmind.com/publications/improving-language-models-by-retrievin... if we're stuck copying work we bump into, then it makes sense to figure out what has cited that paper (or bump into something else)
this is the latest citer: https://arxiv.org/abs/2204.06644 its topic is different but it looks fun because they use an auxiliary model to guide training.
quote from newer paper:

"In this study we show that training stability can be achieved with fewer sacrifices on model effectiveness. There are other pathways to improving LLM efficiency. One is to augment a language model with a retrieval component to fetch external knowledge that is useful to perform downstream tasks. So, the size of the language model can be significantly reduced since it does not need to encode everything in model parameters (Guu et al., 2020; Khandelwal et al., 2020; Borgeaud et al., 2021; Gui et al., 2021; Zhang et al., 2021). With sparse model structures, Mixture of Experts models (Artetxe et al., 2021; Fedus et al., 2021; Zuo et al., 2021; Zoph et al., 2022) adaptively activate a subset of model parameters (experts) for different inputs during model training and inference. The METRO method proposed in this paper is orthogonal to retrieval-augmented models and sparsely activated models. Their combination is an interesting future work direction."

This next one is on training a human assistant using feedback and reinforcement learning. Nice to have somebody actually talk about that. https://arxiv.org/abs/2204.05862
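the retrieval idea in that quote, as a toy sketch: keep knowledge in an external store and fetch the best match for each query, instead of memorizing everything in model parameters. the word-overlap "scoring" and the two-line store are purely illustrative assumptions, standing in for the learned embeddings real retrieval-augmented models use:

```python
# Score a document against a query by shared-word count (a crude
# stand-in for embedding similarity).
def score(query, doc):
    return len(set(query.lower().split()) & set(doc.lower().split()))

# Tiny external knowledge store; real systems index billions of chunks.
store = [
    "chess is played on an 8x8 checkerboard with six piece types",
    "qemu is a generic machine emulator and virtualizer",
]

def retrieve(query):
    # Fetch the best-matching external document for the query.
    return max(store, key=lambda doc: score(query, doc))

print(retrieve("how do u play chess"))
```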
Undiscussed Horrific Abuse, One Victim of Many