this is similar to two autonomous agents that each assume the other will act counter to their goals. in such a situation, things can change significantly, and one can show this clearly with logical structures. fighting is a += B, but other behaviors are b++. note that if one agent performs b++ at a time when another agent wants a higher A, the other agent gets closer to their goal with no energy or time expended at all on their part, since a larger B makes that agent's next a += B move farther.

something of importance here would be establishing a simulation where it is beneficial and interesting for the agents to work together. if one wants a high A and another wants a low A, it seems frightening that they cannot reach a final agreement. however, it is notable that there is shared utility in having B be a value large enough to effectively reach Z from wherever A is.

maybe a more useful situation is to have Z change in a pattern, where the agents want Z to have different values on a time schedule that does not conflict. [note: internally the reason graph holds assumptions of conflict, which is harsh] with the time schedule situation, one might imagine the agents discovering conflict because they are pursuing different values, but then, after great difficulty, needing to discover cooperation to meet the values effectively. i infer it's not an ideal scenario.

i'm thinking some about reason structures. in the situation i imagined, i thought i'd like the reason structures to be public. this helps the agents learn about each other's reasoning.
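below is a minimal sketch of the setup, with assumptions beyond what's written above: the agents alternate turns, "fighting" means stepping A toward your own target Z by B (a += B or a -= B), and b++ grows a shared step size. the Agent, push, grow, policy, and simulate names are all hypothetical, just for illustration:

```python
# minimal sketch, not a definitive implementation: two agents with opposed
# targets for A share a step size B. "fighting" is pushing A by B;
# "b++" is growing B, which helps whoever pushes next.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    target: float  # the Z value this agent wants A to reach


def push(state, agent):
    # fighting move: a += B (or a -= B) toward this agent's target
    a, b = state
    return (a + (b if agent.target > a else -b), b)


def grow(state, agent):
    # cooperative move: b++ -- costs this agent a turn, helps either pusher
    a, b = state
    return (a, b + 1)


def policy(state, agent):
    # toy policy: grow B while it is too small to reach the target in one
    # push from wherever A is; otherwise push
    a, b = state
    return grow if b < abs(agent.target - a) else push


def simulate(agents, steps=12, state=(0.0, 1.0)):
    for t in range(steps):
        agent = agents[t % len(agents)]
        move = policy(state, agent)
        state = move(state, agent)
        print(f"t={t:2d} {agent.name} {move.__name__}: A={state[0]}, B={state[1]}")
    return state


simulate([Agent("wants-high", 10.0), Agent("wants-low", -10.0)])
```

with this toy policy, both agents spend turns growing B while it is too small to reach their target in one push, which is the shared utility in B described above; the time-schedule variant would swap each agent's target per timestep instead of leaving them in permanent conflict.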