turing’s contrived counterexample distributes the power to predict and control such that the goal cannot succeed. of course, in some ways this proof also undercuts itself, because the counterexample code must be able to predict the behavior of the halting-prediction code.
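for concreteness, here’s the usual shape of that counterexample as a minimal python sketch. `halts` and `contrarian` are hypothetical names of my choosing; the point is just the inversion.

```python
# a minimal sketch of turing's diagonal construction. `halts` is the
# hypothetical decider; its name and signature are my choice here.

def halts(program_source: str, input_data: str) -> bool:
    """hypothetical total decider: returns True iff the program halts
    on the given input. the argument is that this can't actually exist."""
    raise NotImplementedError

def contrarian(program_source: str) -> None:
    # do the opposite of whatever the decider predicts this program
    # does when run on its own source.
    if halts(program_source, program_source):
        while True:  # predicted to halt, so loop forever
            pass
    # predicted to loop, so halt immediately
```

feeding contrarian its own source makes halts wrong either way: if it answers “halts”, contrarian loops; if it answers “loops”, contrarian halts. that’s the placement of power being described: the counterexample gets to read the predictor’s verdict before acting.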
i often get into arguments with mathematicians because i haven’t learned much math theory. we both walk away thinking we are right.
for the purposes of this larger concept, i would assume that the halting problem can be fully solved only if the setup is contrived such that the detector has more capability to predict its test processes than they have to predict it.
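one concrete version of that asymmetry, sketched under the assumption that the test process is a finite-state machine with hashable states (`halts_bounded` and `step` are names i made up): a detector with enough memory to record every state the process visits can decide halting by watching for a repeat.

```python
# a sketch of the "more capability" idea, assuming the test process is a
# finite-state machine. the detector records every state the process
# visits, so it decides halting by watching for a repeated state.

def halts_bounded(step, initial_state) -> bool:
    """step(state) -> next state, or None when the process halts.
    works only because the detector can store more than the process can."""
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:   # revisited a state: the process loops forever
            return False
        seen.add(state)
        state = step(state)
    return True             # step returned None: the process halted

# a countdown that halts, and a two-state cycle that never does
print(halts_bounded(lambda n: n - 1 if n > 0 else None, 5))  # True
print(halts_bounded(lambda n: (n + 1) % 2, 0))               # False
```

the detector here is strictly bigger than anything it tests, which is exactly the contrivance i mean.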
you can make physical systems where both sides have an equal ability to predict each other, and then you reach a real physical version of the logical conclusion: the answer is indeterminate, because the action of each depends on the choice of the other in fair balance.
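a toy sketch of that standoff in code, purely illustrative and not a physical model: two agents that each act by simulating the other first, so neither can ever settle.

```python
# a toy model of the standoff: each agent acts by first simulating the
# other, so the mutual dependence never bottoms out.
import sys

def agent_a() -> bool:
    return not agent_b()  # a wants to do the opposite of b

def agent_b() -> bool:
    return not agent_a()  # b wants to do the opposite of a

sys.setrecursionlimit(50)  # keep the inevitable blowup small
try:
    agent_a()
except RecursionError:
    print("indeterminate: each action depends on the other, with no base case")
```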
uhh, so a quick argument against the halting problem: the halting detector’s own data is not input data to the detection function. considering only pure-function-like behavior, it looks solvable to me. i am not a mathematician, and have not read the problem in depth.
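to poke at that, a sketch of why “not input data” might not protect the detector: a test program can carry its own embedded copy of the detector’s source. everything here is hypothetical scaffolding.

```python
# even if the detector is a pure function and is never passed in as an
# argument, the test program can carry a private copy of it. all names
# here are hypothetical.

DETECTOR_SOURCE = "..."  # imagine the detector's full source pasted here

def run_detector(program_source: str, input_data: str) -> bool:
    # hypothetical: interpret DETECTOR_SOURCE on the given arguments.
    raise NotImplementedError

def contrarian_v2(my_source: str) -> None:
    # the detector stays pure; the adversary just consults its private
    # copy and inverts the verdict, same as the diagonal sketch above.
    if run_detector(my_source, my_source):
        while True:
            pass
```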
ok, let’s try to move this toward more formality, thinking in those terms: specific sets of data fed into things. a confusion here is size limitations. the complexity of something’s behavior seems related to its size, such that to keep something else from predicting me, and to predict its behavior in turn, it seems quite helpful if i have access to something unpredictable, or if i contain data it isn’t prepared for, like a long random string. maybe it would be easier to look straight at it. what do i remember from working through this a few years ago? how would a function under test know it is being predicted at all? … it doesn’t seem to work?
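and here’s a small sketch of why that last question resists an answer: if the predictor runs the process faithfully, step by step, the observable history is identical to a real run, so there is nothing for the process to detect from the inside. same toy state-machine model as above.

```python
# if the predictor performs the exact same steps as a real run, the
# traces are identical, so any self-check the tested process does
# comes out the same either way.

def run_directly(step, state, max_steps=100):
    trace = []
    while state is not None and len(trace) < max_steps:
        trace.append(state)
        state = step(state)
    return trace

def run_under_prediction(step, state, max_steps=100):
    # the "predictor" is just a faithful re-execution
    return run_directly(step, state, max_steps)

count_down = lambda n: n - 1 if n > 0 else None
assert run_directly(count_down, 3) == run_under_prediction(count_down, 3)
```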