imaginary halting problem was Re: [spam][crazy][spam] imaginary code resembling trained dissociation
turing’s contrived counterexample places power regarding prediction and control in such a way that the goal cannot succeed. of course this proof also disproves itself in some ways because the code in question must be able to predict the behavior of the halting-prediction code.
i often get in arguments with mathematicians because i don’t learn much math theory. we both walk away thinking we are right.
i would, for the purposes of this larger concept, assume that a halting problem can be fully solved only if it is contrived such that the solver has more capability to predict its test processes than they have to predict it.
you can make physical systems where both have an equal ability to predict the other, and you then reach a real, physical situation where the answer is indeterminate because the action of each depends on the choice of the other in fair balance.
uhh so quick argument against the halting problem: the halting detector’s data is not input data to the detection function. considering only pure-function-like behavior, it looks solvable to me. i am not a mathematician, and have not read the problem in depth.
ok let’s try to move it towards more formality, thinking of that, of specific sets of data fed into things. a confusion here is size limitations. the complexity of something’s behavior seems related to its size, such that to prevent something else from predicting me, and to predict its behavior, it seems quite helpful if i have access to something unpredictable, or if i contain data it isn’t prepared for, like a long random string. maybe it would be easier to look straight at it. what do i remember from doing this a few years ago? how would a function under test know it is being predicted at all? … it doesn’t seem to work?
wikipedia https://en.wikipedia.org/wiki/Halting_problem
The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g, called with some input, can pass its own source and its input to f and then specifically do the opposite of what f predicts g will do.
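a minimal python sketch of that pathological construction; the names halts() and pathological_g() are mine, and the decider body is only a placeholder, since the whole argument is about whether such a decider can exist at all:

    def halts(prog_source, prog_input):
        # stand-in for the claimed halting decider; any real body here
        # would be an assumption, which is exactly what is in dispute
        raise NotImplementedError

    def pathological_g(own_source, own_input):
        # g passes its own source and its input to the decider, then
        # does the opposite of whatever the decider predicts g will do
        if halts(own_source, own_input):
            while True:   # predicted to halt, so loop forever instead
                pass
        else:
            return        # predicted to run forever, so halt immediately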
This means g() needs to have complete access to f(). The situation is indeed similar to the dissociation problem. We could for example assume that f() passes its source to g(), or we could debate this, or consider it a specific pathological example. f() can resolve the situation if it, pathologically, does not halt only in this case, preventing g() from halting, moving the undecidability from the external system to the internal system; but if the input data is the same, this could break pure functionality. Sounds fun to think about! ok there’s a small space where it might be possible if f() can take the entire execution system state as input: i.e. the state of the turing machine. then it can determine whether it is the inner or outer simulation, or in more complex scenarios otherwise engage in meta-analysis of the shared system.
i guess this does simplify to accessing state that the user or calling function cannot mutate. it limits the ways in which the functions can utilize each other.
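a toy sketch of what "state the caller cannot mutate" might look like; here a module-level counter is my stand-in for real machine state (a g() doing a full simulation would of course have its own copy, so this only shows the shape of the idea):

    # hypothetical machine-level state that a caller cannot pass in or forge
    _SIMULATION_DEPTH = 0

    def f(g_source, g_input):
        global _SIMULATION_DEPTH
        _SIMULATION_DEPTH += 1
        try:
            if _SIMULATION_DEPTH > 1:
                # another copy of f() is already on the stack above us, so
                # we are probably the inner simulation; withhold the answer
                return "inner copy: result withheld"
            # we appear to be the outer, real analysis
            return "outer copy: analyse g_source here"
        finally:
            _SIMULATION_DEPTH -= 1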
it can seem discouraging, the interpretation of the problem that makes it look like there’s an impossibility. we can possibly reverse the counterexample, to make a counterexample disproving the ability to construct the counterexample. another viewpoint of the challenge is that humanity needs to believe that subprocesses cannot be fully controlled, to discourage the development of widespread slavery.
another view is that the proof of the halting problem is roughly a description of the dissociation situation. the outcome of a truly fair fight is never determined. it is either a tie, or decided by randomness.
that’s maybe not the point of it though. regarding dissociation, or sousveillance, the point is that when somebody else can model you, and you can model them, the situation is more complex and you definitely can’t be unpredictable within the region of strong predictability. the halting problem views this in one specific way: one process trying to predict another, and the other specifically making itself indeterminate.

wait, shouldn’t it be solvable? the counterproof is a contradiction to itself. it cannot be constructed, because it relies on f() being constructed, assumes this happens, and then shows it was constructed wrongly. i’m thinking that in logical systems deriving a contradiction shows a false assumption. there’s interest here though: what’s the meaning of this space? how does it come across that something is unpredictable?

it’s not complicated. g() takes its own source as input, and f()’s source as input, passes the one to the other, then inverts the result with its behavior. right! so f() can look at g(), see that g() is doing that, and either state the result is indeterminate or itself not halt. if it does not halt, the result is known but can only be output correctly if it can detect whether it is being simulated or not. so that situation seems interesting.
thinking a little more of the behavior of f(): f() receives the source of g() and the input to g(), which consists of the source of f() and the source of g(). f() analyses the behavior of g() with these inputs. f() is free to identify that g() is acting on f()’s own source. meanwhile, inside g(), it passes the same inputs to f(), although conceptually a little differently: it possibly can’t know for sure that the g() source passed to it is its own source, nor that f() is evaluating it. similarly f() possibly can’t know for sure whether g() is evaluating it, although the f() source and its further input data are inherent parts of its correct use. we could design the functions to be able to reconstruct and compare their own source code correctly regardless of input. it doesn’t need to be passed. this removes g() from the input to itself.
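a small sketch of that "reconstruct your own source instead of trusting input" idea in python, using the standard inspect module (it needs f() to live in a real file, and i’m assuming the input meant for g() is just a string containing the sources):

    import inspect

    def f(g_source, g_input):
        # recover our own source from the running system instead of
        # trusting whatever was passed in
        own_source = inspect.getsource(f)
        # assume g_input is the textual input that will be fed to g()
        if own_source in g_input:
            # the input meant for g() contains f()'s own source, which is
            # the shape of the pathological counterexample
            return "g() appears to be acting on f() itself"
        return "no self-reference detected in g()'s input"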
note: f() does not need to execute g(), just predict it. theoretically g() does not need to execute f() either. here is where the math simplifies and the conclusion is usually drawn: f() halts even if g() does not. the counterexample shows there is space regarding pure functions where behavior h... nope, not comprehended sufficiently, let’s do it out more
the behavior of g() is interesting in the modern landscape, because basically it is analysing f() and then constructing behavior that specifically flummoxes f() after this analysis
ok accumulate bounds of counterexample: - the source of f() must be passable to g(). this means f() must be finitely long, and its source must be publicly known. if we sufficiently encrypt, obfuscate, and work-burden f(), then the function of g() becomes difficult
On 7/9/23, Undescribed Horrific Abuse, One Victim & Survivor of Many <gmkarl@gmail.com> wrote:
> ok accumulate bounds of counterexample: - the source of f() must be passable to g(). this means f() must be finitely long, and its source must be publicly known.
> if we sufficiently encrypt, obfuscate, and work-burden f() then the function of g() becomes difficult
this sounds like it would not disprove the disproof. but, for example, rather than constructing f() we could construct a function that _generates_ f() for a bounded set of g(). in that view, the problem then simplifies to something near obfuscation and deobfuscation of functionality, and we reach a problem where... another idea is expanding f() to have information on g(): for example, to let it know it is in an outer simulation by giving it secret data. this seems like it would help?
ok so the idea is that if we pass secret data to f() that g() does not have access to, then f() can be designed to be unpredictable by g(), because g() lacks its input data. f() can then predict g()’s behavior and halt correctly.
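a rough sketch of the secret-data idea, using hmac from python’s standard library; this only illustrates the unpredictability part (g() cannot tell which path f() will take without the key), not the actual analysis of g():

    import hmac, hashlib

    def f(g_source: bytes, g_input: bytes, secret_key: bytes) -> str:
        # mix the secret into every decision f() makes, so a g() that lacks
        # the key cannot reproduce which branch f() takes on which input
        tag = hmac.new(secret_key, g_source + g_input, hashlib.sha256).digest()
        if tag[0] % 2 == 0:
            return "analysis path A (unpredictable without the key)"
        return "analysis path B (unpredictable without the key)"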
two branches:
1. does this disprove the indeterminability of the halting problem if we ignore that g() could pathologically correctly pass the private data, and to what degree?
2. what about the situation where g() pathologically correctly passes the private data? f() could still detect this, and wouldn’t know what to do: maybe it accesses further private data, and calls itself recursively?
okay, for 2 i think there’s a way to make it work if we assume cryptography can work.
in one possible extreme, f() outputs a warning that g() will not halt for trillions of years because it has to brute force f(), but that there’s a small chance it will halt sooner by chance, and it continues computing to figure this out. it’s interesting there that maybe f() can’t gamble that g() won’t halt, because g() could identify this and halt: and the need to not make that gamble is in a quite similar space to cryptographic guarantees being probabilistic in general.
i have delusional fear of studying cryptography because i support privacy and believe everything can be broken
but maybe the space is relevant where, basically, the space where things are broken is one of minimal utility (e.g. trillions of years) if all the parts are forthrightly theorized and included. theory and math look kind of scary; it seems very easy to make assumptions that leave big things out.
some things can’t be broken reasonably: for example, physical laws. and these laws, or some deity, protects us. we can’t travel faster than the speed of light to prevent the evolution of life. the best we could do is make a scary borg that travels at sublight and scares the rest of the universe into being more caring and preventative.
so i usually start proving safety by assuming that life evolves … and that it is made of various universal things that also always evolve. it is of course interesting that life contains privacy, conflict, etc etc.
we could maybe make an inference from the presence of privacy in life, that so long as there is harm there will be privacy. people will make it somehow.
On 7/9/23, Undescribed Horrific Abuse, One Victim & Survivor of Many <gmkarl@gmail.com> wrote:
> two branches: 1. does this disprove the indeterminability of the halting problem if we ignore that g() could pathologically correctly pass the private data, and to what degree? 2. what about the situation where g() pathologically correctly passes the private data? f() could still detect this, and wouldn’t know what to do: maybe access to further private data, and it calls itself recursively?
> okay for 2 i think there’s a way to make it work if we assume cryptography can work.
ok um so we make the private data unknown to both g() and f() and we assume that we can send nonreproducible data to g() that proves it is in an outer simulation, i guess
after exploring this some i might be willing to start believing it! it helps to think of cryptography, and to recognize that the counterproof is within specific bounds. if f() is bounded to complete within a set timeframe unrelated to g(), then any given g() can be provided that is too complex for f() to analyse within this set timeframe …
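a tiny sketch of that time-bound point, where i model g() as a python generator that yields once per simulated step (my convention, nothing standard), so a bounded f() can meter how much analysis it is allowed:

    def bounded_f(g, g_input, step_budget=10_000):
        # run g for at most step_budget steps; if it finishes, report
        # "halts", otherwise give up -- the honest thing a time-bounded
        # f() can do when g() is too big for its budget
        running = g(g_input)
        for _ in range(step_budget):
            try:
                next(running)
            except StopIteration:
                return "halts (observed within the budget)"
        return "unknown (g() outran this f()'s fixed budget)"

    def counting_g(n):
        # an easy g(): counts down and then halts
        while n > 0:
            n -= 1
            yield

    # bounded_f(counting_g, 5)      -> "halts (observed within the budget)"
    # bounded_f(counting_g, 10**9)  -> "unknown (g() outran this f()'s fixed budget)"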
uhhhhhhhhhh i had some different thoughts a little about this ummmmmmmmmmm uhhhhhhhh the ummm the f() and the g() ... a way to consider a clearer situation is to let f() output a formula for when g() halts or doesn't, if there is undetermined state. for example, f() could say that g() halts depending on whether the caller is g(). i'm not quite in this state of mind at the moment, it was just nearest. some internal conflicts exist regarding reanimated zombie studies.
---

it's notable that a turing machine is not a mathematical construct. things do not evaluate immediately. if an imperative system is told to evaluate a logical contradiction, a situation can arise where the output toggles on and off repeatedly. it depends on the system of evaluation. these various patterns could emerge depending on how f() or g() are coded.

for example, one could imagine a (possibly naive) f() that runs g() and observes g()'s behavior until it observes all possible covered code paths, then enumerates patterns in the behavior and determines whether completion will happen or not. since g() behaves in a counterposed way, the resulting pattern includes both f() and g() and specifically states the result is indeterminate. f()'s challenge is to consider the larger system, given this information, and possibly given these various flipflop behaviors, and output a single right answer.

- if f() can specify the result is undecided, it can possibly succeed this way
- if f() can hack g()'s behavior it can possibly succeed this way

it's notable that although, with non-mathematical tricks, f() can hack the situation so as to output the right answer, it is then possibly outputting the wrong answer when evaluated within g(). is it still correct if it is doing this? what if g() is relying on f()'s output in some way? notably, f() has g() and could detect that. it seems interesting to consider how g() might treat this output and how f() might analyse that.

allowing f() to output a pattern, and allowing g() to interpret that pattern, sounds a little interesting. for example, if f() produces output that overtly states it depends on whether the evaluator is g(), then g() can either honestly respect that (i think f() can succeed under those conditions), or g() can ignore the request and instead consider how to behave if it is being evaluated by f() (i think f() could be similarly stuck under that second situation).

---

given there is some undecidability here, it might be possible to craft a convoluted f() that makes it so that the only thing that is undecidable is f()'s behavior when evaluated by g(). so, rather than hacking g() by not halting if run inside f(), instead f() would output that the answer is undecided, when run inside f(), maybe. then g() has no information to act on and can be predicted.

it's notable that there are other rules f() does not have to play by, as well; for example, if it is simulating something, it could modify it during simulation or not simulate it accurately. if it is in a larger turing machine, it could modify other parts of the system state against the rules of functional programming. these things could be true of g() as well.

- generally, useful approaches might come down to what the intended use cases of f() are.
- if we consider making one f() that can resolve whether or not any g() halts, and we simply consider that f() must not halt, but can run forever, i think the cryptographic solution might work. the user must be able to provide unlimited cryptographic confirmation that they are not evaluating f() for the purposes of influencing another evaluation of f(). i feel like, if that works, there are other approaches too.
correction to that last bullet: f() *must halt*, but can run arbitrarily long. i.e. if we consider making one f() that can resolve whether or not any g() halts, and allow it to run arbitrarily long before answering, i think the cryptographic solution might work. the user must be able to provide unlimited cryptographic confirmation that they are not evaluating f() for the purposes of influencing another evaluation of f(). i feel like, if that works, there are other approaches too.
it's notable this means that g() cannot use f(), simply because of the reason of use, or access to the cryptographic material, and i'm guessing but not sure that this means that f() does not meet the requirements of correctly evaluating all data on all functions
maybe it's mostly interesting to just say that a 3-output f() is still sufficient for the nature of the problem. if you can make an f() that tells me that g() either halts, does not halt, or produces some specific system with the evaluation of f(), it seems useful. i'm still confused by it!

maybe we can make it very simple. i imagine a hugely simple instruction set consisting of HALT and NOTHALT, where these two instructions perform what they say: terminate the machine, or wait forever on one instruction. these instructions are like simplifications of analysed loops and other statements. we'll need more instructions to actually write g(), which might look like:

    if f() == halt { nothalt; } else { halt; }

so this adds a conditional branch, data storage, and a comparison with a constant.

    f_out = CALL f
    IF f_out == "HALT": NOTHALT
    ELSE: HALT

two paths that depend on the output of f(). let's say f's _only job_ is to analyse things that are pretty much this. could we write an f() that can analyse simple things like this? what does it need to do?
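a toy python sketch of what f() might need to do for programs exactly that simple, using the 3-output idea from above; the instruction names and the DEPENDS_ON_F value are mine, nothing standard:

    # toy instruction set: a program is a list of tuples.
    #   ("HALT",)              terminate the machine
    #   ("NOTHALT",)           wait forever on this instruction
    #   ("CALL_F",)            fetch f()'s verdict about this program
    #   ("IF_F_HALT", a, b)    execute instruction a ("HALT" or "NOTHALT")
    #                          if the fetched verdict was "HALTS", else b

    def toy_f(program):
        # walk the straight-line program; only a branch on f()'s own verdict
        # can make the outcome depend on f(), and that gets the third output
        for instr in program:
            op = instr[0]
            if op == "HALT":
                return "HALTS"
            if op == "NOTHALT":
                return "NOTHALTS"
            if op == "CALL_F":
                continue          # verdict fetched but not yet acted on
            if op == "IF_F_HALT":
                then_instr, else_instr = instr[1], instr[2]
                if then_instr == else_instr:
                    return "HALTS" if then_instr == "HALT" else "NOTHALTS"
                return "DEPENDS_ON_F"
        return "HALTS"            # fell off the end of the program

    # the g() from above, in this notation:
    g = [("CALL_F",), ("IF_F_HALT", "NOTHALT", "HALT")]
    # toy_f(g) == "DEPENDS_ON_F"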
if i were to write f(), or had written f(), i might use a lot of data structures to consider and summarize behaviors of code, or metastructures that could do that in a more general way. what kind of data structures would be involved in outputting a complete description of a given halting problem counterexample? a description of the space of possible behaviors or patterns. then, we could consider what things might be put in there, and how it would simplify if various things were put in there.
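one guess at what such a data structure could hold, as a small python dataclass; every field name here is my invention and just a starting point:

    from dataclasses import dataclass, field

    @dataclass
    class BehaviourSummary:
        # a stab at "a description of the space of possible behaviors"
        halts_if: list = field(default_factory=list)   # conditions under which it halts
        loops_if: list = field(default_factory=list)   # conditions under which it spins
        reads_f_source: bool = False                    # does it analyse the decider itself?
        verdict: str = "UNKNOWN"                        # HALTS / NOTHALTS / DEPENDS_ON_F / UNKNOWN

    # the counterexample g() would summarize roughly as:
    g_summary = BehaviourSummary(
        halts_if=["f() predicts g() does not halt"],
        loops_if=["f() predicts g() halts"],
        reads_f_source=True,
        verdict="DEPENDS_ON_F",
    )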