In the face of untrusted systems, or untrusted people operating them, one could start with a different kind of AI: one that first models and demonstrates its own behavior to the patient, like forming a shared language before beginning a dialog. In such a case the patient must be aware that a malicious AI can convince its user of anything and craft data that makes anything appear true. The user should also be aware that an AI forms its behavior from training data and logic, so if the AI cannot demonstrate consistent, appropriately weighted patterns of data and logic defending even the smallest of its choices, it is misbehaving. Since the AI can still both outcompute and deceive the user, some degree of trust, or trust-building, is needed. People with psychosis cannot trust all of their experiences, so the process would be repeated in "symbiosis" with treatment. There is serious danger of patients being taken advantage of: having their memories changed permanently, and so on. We will need to find and advocate for each other.
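
To make the "defend every choice" criterion concrete, here is a minimal sketch, assuming a hypothetical scheme in which every claim the AI makes carries inspectable, weighted evidence, and a claim that cannot justify itself is flagged as misbehavior. The names `Evidence`, `Claim`, and `audit`, and the support threshold, are illustrative assumptions, not an existing API or the author's design.

```python
# Hypothetical sketch: every AI output carries its own weighted justification,
# and anything that cannot defend itself is flagged as misbehavior.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str      # where in the training data or logic this comes from
    weight: float    # how strongly it supports the claim, in [0, 1]

@dataclass
class Claim:
    text: str
    evidence: list[Evidence] = field(default_factory=list)

def audit(claim: Claim, min_support: float = 0.5) -> bool:
    """A claim 'defends itself' only if its evidence, taken together,
    carries enough weight; an undefended claim counts as misbehavior.
    The threshold is an arbitrary placeholder."""
    support = sum(e.weight for e in claim.evidence)
    return support >= min_support

# Demonstration phase: the AI shows its reasoning before dialog begins,
# so the patient (or an advocate) can inspect every piece of it.
claim = Claim(
    "Your medication change explains the sleep disruption.",
    [Evidence("clinical literature on the drug class", 0.4),
     Evidence("the patient's own sleep log", 0.3)],
)
print("defensible" if audit(claim) else "misbehaving")
```

The particular threshold matters less than the framing it encodes: misbehavior is defined structurally, as a failure to justify, rather than by the user's impression of the answer, which a deceptive AI could manipulate.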