On 7/9/23, Undescribed Horrific Abuse, One Victim & Survivor of Many <gmkarl@gmail.com> wrote:
thinking a little of a simple logical system
like, say we have a tiny handful of operators and constants, and the code tries all combinations of these to reach an output value from an initial input state.
maybe it would do more than this, who knows
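here's a minimal sketch of that first loop in python, just to make it concrete; the operators, constants, start value, and target are made up for the example, not anything specified above:

import itertools
import operator

# made-up vocabulary for the example
OPERATORS = [operator.add, operator.mul, operator.sub]
CONSTANTS = [1, 2, 3]

def search(start, target, max_steps=3):
    """try every sequence of (operator, constant) steps up to max_steps
    long, returning the first sequence that turns start into target"""
    steps = [(op, c) for op in OPERATORS for c in CONSTANTS]
    for length in range(1, max_steps + 1):
        for plan in itertools.product(steps, repeat=length):
            value = start
            for op, c in plan:
                value = op(value, c)
            if value == target:
                return plan
    return None

plan = search(start=2, target=11)
if plan is not None:
    print([(op.__name__, c) for op, c in plan])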
then a second loop would be the malicious dissociated part. its goal would be to be in complete control of the first loop’s ability to meet its goal, both immediately and in the future, by managing the first loop’s exposure to learning. [when thinking of this i briefly have a more refined sense of some of my issues]
i suppose that living systems hold their goals in a variety of ways, and it could be unlikely that the second loop would counter every single goal with complete precision, not sure … it still seems of interest to consider direct countering. maybe conscious goals are of interest here? [seems similar to human aggression]
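a sketch of that second loop could wrap the first as a gatekeeper that decides how much of the step vocabulary the searcher is ever exposed to, so whether the goal is reachable at all, now or later, is decided by the gatekeeper rather than by the search. this is an assumed, crude notion of “control”; the vocabulary is repeated from the sketch above so it runs on its own:

import itertools
import operator

OPERATORS = [operator.add, operator.mul, operator.sub]
CONSTANTS = [1, 2, 3]
ALL_STEPS = [(op, c) for op in OPERATORS for c in CONSTANTS]

def gated_search(start, target, allowed_steps, max_steps=3):
    """the same brute-force search, restricted to whatever steps the
    gatekeeper chooses to expose"""
    for length in range(1, max_steps + 1):
        for plan in itertools.product(allowed_steps, repeat=length):
            value = start
            for op, c in plan:
                value = op(value, c)
            if value == target:
                return plan
    return None

class Gatekeeper:
    """the dissociated second loop: before each attempt it decides how much
    of the vocabulary the first loop gets, and so whether the first loop can
    ever learn its way to the goal"""
    def __init__(self, all_steps):
        self.all_steps = all_steps

    def expose(self, permit_success):
        # withhold most of the vocabulary unless success is permitted
        return self.all_steps if permit_success else self.all_steps[:2]

keeper = Gatekeeper(ALL_STEPS)
print(gated_search(2, 11, keeper.expose(permit_success=False)))  # None: 11 is unreachable
print(gated_search(2, 11, keeper.expose(permit_success=True)))   # some sequence of steps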
it’s nice to code this over and over, never getting very far; something is missing. things near the “inverted sweet spot of learning” concept seem interesting to add on to it.
idea of considering the shared process of both, metabehaviors. maybe important to really imagine being both, and to consider the goals of this shared thing, rather than thinking of one’s own goals, so as not to stimulate the battle
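for the shared-process idea, maybe something like a joint score that the whole thing is evaluated by, instead of each loop optimizing its own goal against the other. the min() and the numbers are made up, just to show the shape of imagining being both at once:

def shared_goal(solver_progress, keeper_control):
    """score the shared thing: the outcome is only as good as its worst part,
    so neither loop can do well by making the other lose"""
    return min(solver_progress, keeper_control)

# the battle: one part wins outright, but the shared thing scores low
print(shared_goal(solver_progress=0.9, keeper_control=0.1))  # 0.1
# being both: a middling outcome for each scores higher overall
print(shared_goal(solver_progress=0.6, keeper_control=0.6))  # 0.6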