7 Jan
2023
9:06 a.m.
Not much text to add here, but I was thinking about how precious rational feedback from expressions is, and how rare; and also how hard it is to provide reliably when the person (I was actually thinking of myself here, but professor rat could be an example the other way, as I rarely understand what he says) does not seem to be expressing themselves rationally at all. It's still so important. I spend a lot of time daydreaming and attempting to produce systems that will respond to me rationally, in any way at all. (The current mainstream AI approach kind of pointedly avoids enforcing rationality; this lets language models more easily hold disparate beliefs.)