that’s maybe not the point of it though. regarding dissociation, or sousveillance, the point is that when somebody else can model you, and you can model them, the situation is more complex: you definitely can’t be unpredictable within the region where you are strongly predictable.

the halting problem views this one specific way: one process tries to predict another, and the other deliberately makes itself indeterminate. wait, shouldn’t it be solvable? the counterproof seems to contradict itself. it can’t actually be constructed: it relies on f() being constructed, assumes this happens, and then shows it was constructed wrongly. in logical systems, deriving a contradiction shows a false assumption, here the assumption that f() can be constructed at all. there’s still interest here though; what’s the meaning of this space? how does it come across that something is unpredictable?

the construction itself isn’t complicated. g() takes its own source as input, and f()’s source as input, passes the one to the other, then inverts the result with its behavior. right! so f() can look at g(), see that g() is doing that, and either state that the result is indeterminate or itself not halt. if f() responds by not halting, the result is actually known, but it can only be output correctly if f() can detect whether it is being simulated or called from outside. so that situation seems interesting.
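the inversion trick above can be sketched concretely. this is a minimal illustration, not a real halting decider: `predict_halts` here is a hypothetical stand-in that just guesses “halts”, and `g` is the diagonal program that asks the predictor about itself and then does the opposite. we never actually call g(), we just trace which branch it would take:

```python
def predict_halts(program, arg):
    """Hypothetical halting predictor (the f() of the text).
    A real one cannot exist; this stand-in always guesses 'halts'."""
    return True

def g(x):
    """The diagonal program: ask the predictor about ourselves,
    then behave as the opposite of its answer."""
    if predict_halts(g, x):
        while True:      # predictor said "halts" -> loop forever
            pass
    else:
        return           # predictor said "loops" -> halt immediately

# check the contradiction without running g:
prediction = predict_halts(g, g)
# if the prediction is True, g enters the infinite loop, so g does NOT halt;
# if the prediction is False, g returns immediately, so g DOES halt.
actually_halts = not prediction

# whatever predict_halts answers, g's behavior inverts it
assert prediction != actually_halts
```

whatever body you swap into `predict_halts`, the assertion at the end holds: g()’s behavior is defined to disagree with the prediction, which is exactly why the only honest moves left to f() are “indeterminate” or not halting at all.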