If we build fancy systems to detect things like misadvertised keys or MitM attacks, how can we inform an end user what is amiss in a way that is actionable, without generating so many false positives that they fail to act when something bad actually happens?

I recently went to SOUPS and saw a number of presentations on the general difficulty of communicating security-actionable information to users. From what I saw I'd say the problem is twofold:

1) How does the system ensure that when it communicates a security-actionable event, it is fairly certain it isn't a false positive? False positives condition users to ignore security warnings.

2) How do you express what's happening to the user in such a way that they will actually take action on it, rather than just clicking through to dismiss it?

Given the wide range of scenarios, the answer will of course be contextual, and I'd be curious to hear any replies about how systems try to solve the "right key" user experience problem in general.

That said, the messaging use-case (in conjunction with a "key directory" system) is particularly interesting to me.

Suppose an end-to-end encrypted messaging system that relies on a centrally-managed key directory (e.g. iMessage) were, by coercion or compromise, to publish a poison key to its directory to facilitate a MitM attack, but the system's creators wanted to make such an action obvious to their users. How can the system reasonably detect this and surface it to users in such a way that they aren't conditioned by routine events (e.g. the SSH "SOMETHING NASTY" message) to ignore such alerts, and actually feel compelled to take action on that knowledge?
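One baseline detection mechanism (a hypothetical sketch, not a description of how iMessage or any real directory works) is trust-on-first-use pinning: the client remembers the first key the directory serves for a contact and flags only the case where the directory later serves a different one. All names here (`check_directory_key`, the pin-store shape) are invented for illustration:

```python
def check_directory_key(user_id, directory_key, pins):
    """Compare the key the directory just served against the locally
    pinned key for this contact.

    Returns:
      "first_use" -- no pin existed; the key is pinned silently.
      "match"     -- directory served the same key as before; no UI.
      "mismatch"  -- directory served a DIFFERENT key than the one
                     pinned earlier. This is the only case that should
                     surface an alert, which keeps the warning rare
                     enough that users may actually heed it.
    """
    pinned = pins.get(user_id)
    if pinned is None:
        pins[user_id] = directory_key
        return "first_use"
    if pinned == directory_key:
        return "match"
    return "mismatch"


# Example: the directory serves alice's key twice, then a poison key.
pins = {}
print(check_directory_key("alice", "pubkey-v1", pins))  # first_use
print(check_directory_key("alice", "pubkey-v1", pins))  # match
print(check_directory_key("alice", "pubkey-EVIL", pins))  # mismatch
```

Note the UX problem this thread raises lives in the "mismatch" branch: legitimate key rotation (new device, reinstall) produces exactly the same signal as an attack, which is what conditions users to click through.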

And then what? How can we help the victim of an attack like this compile all of the information necessary to figure out what actually happened? How can encryption tools produce incident reports that experts can scrutinize to determine what went wrong?
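At minimum, such an incident report would need to capture the raw evidence at the moment of detection: the pinned key, the key the directory served, and the raw directory responses. A hedged sketch of what a tool might export (the field names and `compile_incident_report` helper are invented for illustration, not any real tool's format):

```python
import json
import time


def compile_incident_report(contact, pinned_key, served_key, transcript):
    """Bundle the raw evidence of a key mismatch into a structured
    record the user can export and an expert can later scrutinize.

    `transcript` is assumed to be the raw directory responses captured
    by the client, preserved verbatim for later audit.
    """
    return json.dumps({
        "type": "key-mismatch",
        "observed_at": int(time.time()),   # when the client saw it
        "contact": contact,
        "pinned_key": pinned_key,          # what the client trusted
        "directory_served_key": served_key,  # what the directory sent
        "directory_transcript": transcript,
    }, indent=2)
```

The design choice is to record everything verbatim rather than the client's interpretation, so an expert can distinguish an attack from routine rotation after the fact.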

--
Tony Arcieri