--- i’ve had a hard couple of days, so i’m taking it easy this morning (slowly figuring out ad hoc dissociation therapy; i think the serious issues happen when you push your issues past thresholds. still figuring out a lot, pretty heavily). i queried chatgpt and i think it gave me a hallucination, so i’m thinking of spending a little time writing small code to censor hallucinations. there are probably multiple kinds of hallucination; i was thinking of censoring the ones that stem from sampling among widely spread distributions, where the chosen word has much lower confidence of being correct. writing the second half of the last paragraph gave me new tension. it’s a small puzzle task with real utility around quickly asking language models about facts they integrated in training.
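here’s a minimal sketch of the censoring idea, just to pin it down: generate with a huggingface causal LM, keep the per-step logits, and flag any sampled token whose distribution was widely spread (high entropy) or whose chosen word had low probability. the model name and both thresholds are placeholders i’d have to tune, not anything principled.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# placeholder model; any causal LM that supports output_scores works
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

# sample a continuation while keeping the logits at every step
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=True,
        return_dict_in_generate=True,
        output_scores=True,
        pad_token_id=tokenizer.eos_token_id,  # silences a gpt2 warning
    )

gen_tokens = out.sequences[0, inputs.input_ids.shape[1]:]

ENTROPY_THRESHOLD = 3.0  # nats; arbitrary, would need tuning
PROB_THRESHOLD = 0.10    # also arbitrary

for step, (tok, scores) in enumerate(zip(gen_tokens, out.scores)):
    probs = torch.softmax(scores[0], dim=-1)
    # entropy measures how widely spread the distribution was;
    # p_chosen measures how much support the sampled word itself had
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
    p_chosen = probs[tok].item()
    flagged = entropy > ENTROPY_THRESHOLD or p_chosen < PROB_THRESHOLD
    word = tokenizer.decode(tok)
    marker = "  <-- censor?" if flagged else ""
    print(f"{step:2d} {word!r:>12} p={p_chosen:.3f} H={entropy:.2f}{marker}")
```

the two tests catch slightly different failure modes: high entropy means the model was unsure before it picked anything, while low chosen-token probability catches an unlucky sample even from a fairly peaked distribution.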