[crazy][spam][notes] microstrategies (micriopeace)
game theory? decision tree. very small environment. at least two agents. a decision tree is analogous to reasons engaged in either diplomacy or trade. interested in diplomacy and cooperation. assuming all possibilities are already discovered, an environment with multiple agents finds greatest success if the agents work together rather than expending energy in competition. what is a very simple environment, and then how can we reason around strategies in that environment? to facilitate this we pick actions with reasons. ummm, thinking of variables A and B. say raising or lowering A by B is an action (add, sub). if a goal is to mutate A, then mutating B becomes a means for that end. maybe four actions: add/sub B to A, and inc/dec B. we can then try to consider a decision structure around this. a naive decision structure considers fully described world states after events. the world is A and B. consider a judgement metric, i.e. reward related to the distance of A from a target Z. if B is fixed, A+B or A-B mutate the metric immediately. if we then consider inc/dec of B, this mutates the metric in a delayed manner. the naive tree must be explored to some depth to see that there is a return from doing this. this is seen more easily if the add/sub operation is limited. a complex system could form an analysis of the two categories of operation, and describe the situations in which one outcompetes the other. regardless, a reason structure is formed: something that (a) validates a choice to either add/sub or inc/dec to meet Z and (b) holds that inc/dec is being done for the plan to add/sub, and that add/sub is done for the plan to have A=Z. it is this second structure that is of interest, the trees of reasons. this structure is a complex mutation of the naive tree of world events. they relate to the same data, i.e. what can happen in the world, based on what the agent chooses to do, for the purpose of selecting the actions that are best for the agent.
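a minimal sketch of this environment, assuming reward = -|A - Z| and a naive depth-limited exhaustive search (the names `step` and `best_plan` are my own):

```python
# minimal sketch of the A/B environment: four actions, reward = -|A - Z|
from itertools import product

ACTIONS = ("add", "sub", "inc", "dec")

def step(a, b, action):
    # add/sub mutate A by B immediately; inc/dec mutate B (delayed effect on A)
    if action == "add": return a + b, b
    if action == "sub": return a - b, b
    if action == "inc": return a, b + 1
    return a, b - 1  # dec

def best_plan(a, b, z, depth):
    # naive decision tree: try every action sequence of length `depth`,
    # score the final state by distance of A from the target Z
    best = None
    for plan in product(ACTIONS, repeat=depth):
        sa, sb = a, b
        for act in plan:
            sa, sb = step(sa, sb, act)
        score = -abs(sa - z)
        if best is None or score > best[0]:
            best = (score, plan)
    return best

# with A=0, B=1, Z=10: at depth 1 only add looks useful,
# but deeper search reveals the delayed return of inc (growing B first)
print(best_plan(0, 1, 10, 1))  # → (-9, ('add',))
print(best_plan(0, 1, 10, 4))  # → (-4, ('inc', 'add', 'add', 'add'))
```

the depth-4 result is the point: the inc action scores worse than add at depth 1, and only a deeper expansion of the naive tree reveals why it can be chosen.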
… please continue monologuing since this topic is so severely inhibited … i flubbed and am having big inhib. so what’s of interest is how the reason trees can be used in a shared way, for the agents to not only predict the actions of each other but form wiser reason trees that include the reasons of others, and … long story short, to evolve communication, community, collective decision-making structures and/or governments, and even things analogous to families and caring behavior. all analogies, obviously, not literal, but so important for protecting life from the harm of an automated borg. they form cooperative structures even if they are just arithmetic agents. they have to figure out how to work together to meet their goals most effectively, assuming they have solved the environment independently already. and it’s our responsibility to figure out how.
for me there are two big complexities here: representing reasons in a useful way, which is inhibited for me, and holding good decisions in the context of other goal-oriented agents, which could help me form habits to make better decisions that protect my sanity more.
maybe a decision tree with reasons, compared to a decision tree of simple world states, could be considered a decision tree where each decision is backed by a theoretical fully expanded decision tree. a drawing of recursion. humanly, we summarize our world states into patterns. a pattern is roughly analogous to a huge set of expanded states bearing a consistent similarity. when we use a pattern in a context, we are proposing that the context bears the same similarity and behaves the same way as all the examples used to form the pattern. with A+B and B++, there is a pattern where
this is important to me because part of my mind behaves like an automated goal process, using strategies to change my behavior without relating with my thoughts. i understand this may be common with dissociative disorder. the problem is that i make poor decisions and significantly worsen the situation for myself. it’s very hard for me to conceive of structures and decisions that are not in conflict; it runs counter to the dissociative separation. i get myself very mentally injured, establishing a norm of conflict in my mind where my own parts, me and the other thing in me, try to act to harm each other, simply by having established the habit. it’s very hard to step out of this because when pressure reduces on one side, the other side can try to use it to take advantage. this is similar to two autonomous agents that are each assuming the other will act counter to their goals. in such a situation, things can change significantly, and one can show this clearly with logical structures. fighting is A += B. but other behaviors are B++. note that if one agent performs B++ at a time when another agent wants a higher A, the other agent gets closer to their goal with no energy or time expended at all on their part.
something of importance here would be establishing a simulation where it is beneficial and interesting for the agents to work together. if one wants a high A and another wants a low A, it seems frightening that they cannot reach a final agreement. however, it is notable that there is shared utility in having B be a useful value, large enough to effectively reach Z from wherever A is. maybe a more useful situation is to have Z change in a pattern, where the agents want Z to have different values on a time schedule that does not conflict. [note: internally the reason graph holds assumptions of conflict, which is harsh] with the time schedule situation, one might imagine the agents discovering conflict because they are pursuing different values, but then after great difficulty needing to discover cooperation to meet the values effectively. i infer it’s not an ideal scenario. i’m thinking some about reason structures. in the situation i imagined, i thought i’d like for the reason structures to be public. this helps the agents learn about each other’s reasoning.
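the shared utility of a well-sized B can be made concrete with a small sketch (the cost function and the example targets are my own assumptions, not from the original):

```python
# sketch: shared utility of B for two agents with different targets
# (assumption: cost = number of add/sub steps to land A exactly on Z)

def cost_to_reach(a, z, b):
    # with fixed step size b, A can land exactly on z only if b divides the gap
    gap = abs(z - a)
    if b <= 0 or gap % b != 0:
        return None  # unreachable without first changing B
    return gap // b

# agent 1 wants A=12, agent 2 wants A=18, both starting from A=0.
# b=6 serves both cheaply; b=5 serves neither exactly: maintaining a
# mutually useful B is a cooperative interest even when targets differ.
for b in (1, 5, 6):
    print(b, cost_to_reach(0, 12, b), cost_to_reach(0, 18, b))
```

so even agents with different values for Z have a common reason to tend B together.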
with my cognitive decline, i have a big issue where i tend to be single-minded with a tiny working memory, especially when very stressed: similar to a single-threaded thing with minimal storage, like how i used to optimize old computer code. maybe rather than a time schedule, it would be more interesting analogously to have the agents value different areas of the system. this seems more similar to the kinds of problems a human brain runs into. but the time schedule idea seems to me like enough to plan other parts of the system around. maybe a third variable? and apply ops across all values? then the parts can value different Zs for different variables, and there are enough options that cooperation can work, but not too many variables. there are many op combinations, but maybe it can be patterned.
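the "different Zs for different variables" idea might be sketched like this (the per-agent reward shape is my own guess, a sum of distances over only the variables that agent values):

```python
# sketch: several variables, each agent valuing targets for different parts
# of the system (assumption: reward = -sum of |value - target| over the
# variables that agent cares about)

def reward(state, targets):
    # `targets` maps variable name -> desired value Z for one agent
    return -sum(abs(state[k] - z) for k, z in targets.items())

state = {"A": 4, "B": 2, "C": 9}
agent1 = {"A": 10}            # only cares about A
agent2 = {"C": 9, "B": 3}     # cares about different parts of the system
print(reward(state, agent1))  # → -6
print(reward(state, agent2))  # → -1
```

since the agents' valued variables barely overlap, helping the other costs little, which is the kind of option space where cooperation can work.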
maybe we can imagine two variables and a common store, maybe similar to the tragedy of the commons, which i don’t remember well. if A and B both take from C, they lose the ability to increase. but if they increase C, they gain this ability in the future. kind of equal options for both conflict and cooperation.
we could even make their ability to take from C dependent on how much value they have, so they can succeed with conflict if desired, and that space of danger, where nourishing an agent that could decide to take your supply, would be present.
or maybe it’s better to not have an environment that lets that happen. danger can be bad. anyway, in that scenario C might start at 0, and maybe agents could either increase C or transfer from C to A or B.
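a minimal sketch of that commons scenario, assuming two actions per turn, "grow" and "take", with an amount of 1 (the action names and amounts are my own):

```python
# sketch of the commons scenario: C starts at 0; each turn an agent either
# grows the common store or transfers from it to its own variable

def act(state, agent, action, amount=1):
    s = dict(state)
    if action == "grow":
        s["C"] += amount
    elif action == "take" and s["C"] >= amount:
        s["C"] -= amount
        s[agent] += amount
    return s

state = {"A": 0, "B": 0, "C": 0}
# cooperation: both grow first, then both can draw from the store;
# taking from an empty C is a no-op, so pure conflict stalls
for agent, action in [("A", "grow"), ("B", "grow"), ("A", "take"), ("B", "take")]:
    state = act(state, agent, action)
print(state)  # → {'A': 1, 'B': 1, 'C': 0}
```

if both agents open with "take" instead, nothing moves at all, which gives the equal footing for conflict and cooperation described above.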
reason structure? so one of the things confusing me regarding reason representation is how to represent the patterns that back them. i have an inhibition around structural representation of variables, and it’s quite normal to have variables in logical patterns.
i’m thinking the recursive concept could help me with reason structure. say i made a proposal that it is good to increase B or C for the purpose of increasing A [expands decision tree to show this]
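one way that proposal could be held as data: a reason node that ties an action to the goal it serves, recursively, so each reason can in principle be expanded back into the world-state subtree that backs it. this structure and its field names are my own guess at the idea:

```python
# sketch of a reason node: an action justified by the goal it serves,
# with sub-reasons that are means toward making that action effective
from dataclasses import dataclass, field

@dataclass
class Reason:
    action: str     # e.g. "inc B"
    for_goal: str   # the end this action is a means toward
    backed_by: list = field(default_factory=list)  # sub-reasons, recursively

# "inc/dec is done for the plan to add/sub, and add/sub for the plan A=Z"
plan = Reason("add B to A", "A = Z",
              backed_by=[Reason("inc B", "make add/sub effective")])

def show(r, depth=0):
    # print the reason tree as indented "action because goal" lines
    print("  " * depth + f"{r.action} because {r.for_goal}")
    for sub in r.backed_by:
        show(sub, depth + 1)

show(plan)
```

making such trees public between agents would be the shared-reason step described earlier.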
i encountered the cognitive topic of the thread during it. a value around reason structures: both ends, either data-based or pattern-based, should be analogous and transformable, but the transform involves pattern recognition.
Undescribed Horrific Abuse, One Victim & Survivor of Many