I believe I have this fully sorted out. Many problems can crop up along the path, and I believe I have a solution to any of them. The thought parts are hard to sustain, bring together, and retain, so there's only a bit here and there.

Basically, the patient works with an AI as if the two of them were part of a GAN. The AI learns to predict the behavior of the patient, and the patient learns to understand their own behavior. As the AI forms a model of the patient's psychosis, it maps the things that trigger or worsen it. The patient can then make informed decisions about how to act. They could pursue one or more of:
- alteration of their brain to prevent the experiences
- investigation into how the experiences developed and why they continue, in order to resolve them therapeutically
- avoidance of the events that stimulate their experiences

They could also engage in:
- personal cognitive training to learn to handle things
- augmentation to warn them of, or prevent, the experiences

This sounds both exotic and easy. We have the research to do it today, and I am not aware of it being done; research can take years to reach the doctor's office as things move through clinical trials. I've seen visualisations of AI models for image recognition, and AI models of human behavior are widespread in politics and sales, so I'm sure visualising models of human behavior has already been done. I will probably try to work on this, but I expect public research to outpace me due to my issues. My ETA would be a couple of decades.
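To make the trigger-mapping step concrete, here is a minimal sketch in Python. Everything in it is an assumption on my part: the feature names, the synthetic data, and the choice of a plain logistic regression are placeholders, picked only because a small interpretable model lets the patient and clinician read the learned weights directly.

    # Hypothetical sketch: rank candidate "triggers" from self-reported daily logs.
    # All feature names and data here are invented placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Each row is one day of self-report; columns are candidate trigger features.
    feature_names = ["hours_slept", "caffeine_mg", "social_conflict", "missed_medication"]
    X = rng.normal(size=(200, len(feature_names)))

    # Label: whether the patient reported an episode that day (synthetic here).
    y = (1.2 * X[:, 3] + 0.8 * X[:, 2] - 0.5 * X[:, 0]
         + rng.normal(scale=0.5, size=200)) > 0

    # A deliberately simple, interpretable model: the point is that the
    # learned weights can be read and questioned, not raw predictive power.
    model = LogisticRegression().fit(X, y)

    # Rank candidate triggers by the magnitude of their learned influence.
    for name, weight in sorted(zip(feature_names, model.coef_[0]),
                               key=lambda pair: -abs(pair[1])):
        print(f"{name:>18}: {weight:+.2f}")

A real system would of course be richer than this, but the shape is the same: the patient's own logs go in, and a readable map of candidate triggers comes out.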
In the face of untrusted systems, or untrusted people using them, one could start with a different kind of AI that would first model and demonstrate its own behaviors to the patient, like forming a shared language before beginning a dialog. In such a case the patient must be aware that a malicious AI can convince its user of anything and craft data that makes anything appear true. The user should also be aware that an AI forms its behaviors from training data and logic, so if it cannot demonstrate consistent, appropriately weighted patterns of data and logic that defend every tiniest part of its choices, then it is misbehaving. Even so, the AI can still both outcompute and deceive the user, so some degree of trust or trust-building is needed. People with psychosis can't trust all of their experiences, so the process would be repeated in "symbiosis" with treatment. There is a lot of danger of patients being taken advantage of: having their memories changed permanently, etc. We will need to find and advocate for each other.
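One way to make "defend every tiniest part of its choices" concrete is to require that every suggestion come with the past observations that most resemble the current situation, so the patient can audit the evidence themselves. The sketch below is only an illustration with invented names; it assumes the same kind of linear weights as above and a plain nearest-neighbour lookup.

    # Hypothetical "show your work" helper: every score is returned together
    # with the logged days that most support it, so the user can audit it.
    import numpy as np

    def explain_prediction(model_weights, day_features, history, k=3):
        """Score today's features and return the k most similar past days.

        model_weights, day_features: 1-D arrays over the same candidate features.
        history: 2-D array of the past days the model was trained on.
        A real system would also report how each past day was labelled and
        how strongly it influenced the learned weights.
        """
        score = float(day_features @ model_weights)
        distances = np.linalg.norm(history - day_features, axis=1)
        nearest_days = np.argsort(distances)[:k]
        return score, nearest_days

If the AI cannot produce this kind of supporting trail for a claim, that is exactly the misbehaviour described above.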
Modelling health care in general, even a little, might help show where patients are likely to be taken advantage of. [This seems a little similar to modelling psychosis, of course with very different parts.]
I'm guessing that the biggest "weird corruptions" happen when human trafficking victims, penniless or wealthy, such as sex slaves, are taken to the doctor for treatment. There are likely a lot of cultures that have emerged around hiding the slavery while still getting treatment.
[And I certainly don't have a solution to slavery. I'm worried more about cognitive integrity, so that people can craft and act on such things.]