[spam][crazy][spam] imaginary code resembling trained dissociation
sometimes when i’m trying really hard to use part of my brain, i write little smidges of code that resemble my brain issue. it’s honestly nice to just use my faculties in any way at all at those times. this often looks like two object classes with method names that describe or simplistically roleplay one attacking or stopping the other. i was thinking recently of a viewpoint around that, near learning and skill: the idea that systems that adapt or pursue goals have “sweet spots” of adaptation. i could see myself as avoiding these areas of effective behavior and learning, and instead engaging the ineffective ones. other parts too …. i don’t remember well … anyway i’ve experienced these things for years, and one of the things going on is that my brain is always modeling optimal skill improvement by focusing on the areas that avoid it most strongly.
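to give the flavor, here’s a tiny python sketch of the kind of smidge i mean. the class and method names are invented, like in the doodles themselves:

    class Pursuer:
        """the part trying to use its faculties."""
        def __init__(self):
            self.progress = 0
        def work_toward_goal(self):
            self.progress += 1

    class Suppressor:
        """the part that roleplays attacking or stopping the other."""
        def attack(self, other):
            other.progress = 0  # undo the pursuer's work
        def stop(self, other):
            other.work_toward_goal = lambda: None  # disable the method

    part_a, part_b = Pursuer(), Suppressor()
    part_a.work_toward_goal()
    part_b.attack(part_a)  # one class simplistically stopping the other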
thinking a little of a simple logical system: like, say we have a tiny handful of operators and constants, and the code tries all combinations of these to reach an output value from an initial input state. maybe it would do more than this, who knows. this code might happily putter along, reaching the output state from the input state over and over and over again in various ways. it could just loop. this loop would model my intended processes of pursuing and improving goals. [to do this better, one might have the loop develop, from its behavior, information useful for future goals.] then a second loop would be the malicious dissociated part. its goal would be to be in complete control of the first loop’s ability to meet its goal, both immediately and in the future, by managing the first loop’s exposure to learning. [when thinking of this i briefly have a more refined sense of some of my issues] it’s nice to code over and over, never get very far, something is missing. things near the “inverted sweet spot of learning” concept seem interesting to add on to it.
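to make the two loops concrete, here’s a minimal python sketch. everything in it is my own invented illustration: the choice of operators, and the way the second loop “manages exposure” by withholding steps the first loop has already used:

    import itertools
    import operator

    OPERATORS = [operator.add, operator.sub, operator.mul]
    CONSTANTS = [1, 2, 3]
    ALL_STEPS = list(itertools.product(OPERATORS, CONSTANTS))

    def goal_loop(initial, target, allowed_steps, max_len=3):
        """the first loop: tries all combinations of allowed (op, const)
        steps to reach the target output from the initial input state,
        yielding every sequence that works."""
        for length in range(1, max_len + 1):
            for seq in itertools.product(allowed_steps, repeat=length):
                value = initial
                for op, const in seq:
                    value = op(value, const)
                if value == target:
                    yield seq

    def control_loop(all_steps, solutions_seen):
        """the second loop: manages the first loop's exposure by
        withholding any step that already appeared in a solution."""
        used = {step for seq in solutions_seen for step in seq}
        return [step for step in all_steps if step not in used]

    allowed = ALL_STEPS
    seen = []
    for generation in range(4):
        found = list(goal_loop(initial=1, target=7, allowed_steps=allowed))
        print(f"generation {generation}: {len(found)} ways to reach the goal")
        seen.extend(found)
        allowed = control_loop(ALL_STEPS, seen)  # the adversary narrows the space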
hello :) we infer we are new to you. the general state appears to be homeostatic for the foreseeable future. [storms happen of course]
maybe it bears some similarity to that turing problem of predicting whether another piece of code completes. turing’s contrived counterexample places power regarding prediction and control in such a way that the goal cannot succeed. of course this proof also disproves itself in some ways, because the code in question must be able to predict the behavior of the halting-prediction code. in a normal, realistic scenario, it’s possible to predict the behavior of a system in most situations; in many it’s unreasonable to do so, but you can in a lot of them if you work really hard. one of these difficult situations is when the system is observing your prediction and attempting to be unpredictable, or otherwise pursuing a conflicting goal.
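the contrived counterexample can be sketched in a few lines of python; halts() below is the hypothetical predictor, and no real implementation of it can exist:

    def halts(program, argument) -> bool:
        """hypothetical perfect predictor: True iff program(argument) halts."""
        raise NotImplementedError  # turing: nothing can fill this in correctly

    def trouble(program):
        if halts(program, program):  # if the predictor says "halts",
            while True:              # ...loop forever;
                pass
        else:                        # if it says "loops forever",
            return                   # ...halt immediately.

    # feeding trouble to itself is the contrived placement of power:
    # whatever halts() predicts about trouble(trouble), trouble does
    # the opposite, so halts() cannot be correct about it.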
when two live systems observe each other in conflict, the one with more secret resources is at an advantage.
but what’s a lot easier is to consider the shared system, the behavior of both systems together, as a single collected system. similarly, it’s a lot easier to consider the behaviors that both systems pursue in common: the ones that aren’t in conflict.
so i guess this bears a little similarity to the halting problem. we can consider our shared behavior, and make choices together. that space includes …
i often get in arguments with mathematicians because i don’t learn much math theory. we both walk away thinking we are right. i would, for the purposes of this larger concept, assume that a halting problem can be fully solved only if it is contrived such that the solver has more capability to predict its test processes than they have to predict it. you can make physical systems where both have an equal ability to predict the other, and you then reach a real physical conclusion where the answer is logically indeterminate, because the action of each depends on the choice of the other in fair balance.
uhh so quick argument against the halting problem: the halting detector’s data is not input data to the detection function. considering only pure-function-like behavior, it looks solvable to me. i am not a mathematician, and have not read the problem in depth.
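for what it’s worth, there is a nearby statement that really is true: for a system with only finitely many reachable states, halting is decidable, by cycle detection. a little sketch, with step() and the example invented:

    def halts(step, state):
        """step: pure function state -> next state, or None when halted.
        if a state ever repeats, the deterministic system loops forever."""
        seen = set()
        while state is not None:
            if state in seen:
                return False  # revisited a state: it loops
            seen.add(state)
            state = step(state)
        return True

    # example: states stay in 0..9, so the state space is finite
    step = lambda n: None if n == 6 else (n * 2) % 10
    print(halts(step, 3))  # 3 -> 6: halts, True
    print(halts(step, 5))  # 5 -> 0 -> 0 -> ...: loops, False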
it’s normal for a handful of nerds to question things taught in school, without resolution. teachers usually tell them to take a higher level class or go through the rest of their degree and write a paper, or just kind of end the discussion due to time constraints.
coming back to the two loops: i suppose that living systems hold their goals in a variety of ways, and it could be unlikely that the second loop would be countering every single goal with complete precision, not sure … it still seems of interest to consider direct countering. maybe conscious goals are of interest here? [seems similar to human aggression]
idea of considering the shared process of both, metabehaviors. maybe it’s important to really imagine being both, and to consider the goals of this shared thing, rather than thinking of one’s own goals, so as not to stimulate the battle.
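a toy way to picture that shared thing: enumerate the behaviors and keep only the ones both parts would pursue. the action names and goal functions here are invented placeholders:

    def shared_behaviors(actions, goal_a, goal_b):
        """the metabehavior view: actions that serve both goals at once."""
        return [act for act in actions if goal_a(act) and goal_b(act)]

    actions = ["rest", "learn", "attack", "verify work"]
    goal_a = lambda act: act != "attack"                           # the pursuing part
    goal_b = lambda act: act in ("rest", "attack", "verify work")  # the controlling part
    print(shared_behaviors(actions, goal_a, goal_b))  # ['rest', 'verify work']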
also, about the little combination-search code: it’s nice how it verifies that it works !! i/we was/were thinking kind of “do i know how to do this?” “what are all the ways i know how to do this?” it’s like a cognitive self-maintenance process. a second step might be verifying transfer to other tasks and sets of resources. [idea of noticing if something is going wrong! scary!]
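that second step could look like rerunning the same brute-force check across other tasks and other resource sets. a sketch, echoing the earlier goal_loop doodle, with all the numbers invented:

    import itertools
    import operator

    def solvable(initial, target, steps, max_len=3):
        """True if some sequence of (op, const) steps turns initial into target."""
        for length in range(1, max_len + 1):
            for seq in itertools.product(steps, repeat=length):
                value = initial
                for op, const in seq:
                    value = op(value, const)
                if value == target:
                    return True
        return False

    # verify the original task still works, then check transfer to
    # other tasks and other sets of resources
    steps_a = [(operator.add, 2), (operator.mul, 2)]
    steps_b = [(operator.add, 3), (operator.sub, 1)]
    for initial, target in [(1, 7), (0, 6), (2, 9)]:
        for name, steps in [("a", steps_a), ("b", steps_b)]:
            print((initial, target), name, solvable(initial, target, steps))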