AI threats

Douglas Lucas dal at riseup.net
Mon Feb 26 00:40:58 PST 2018


On 2018-02-25 11:13, 10r wrote:
> Hi. I wonder if there has ever been a topic about AI threats against
> humanity. If not, I would like to propose this discussion. Should we
> think of models / agents that only work on encrypted information such
> as numer.ai or should we just think about how to develop such agents /
> models safely (if that is possible)?
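
On the encrypted-information angle: additively homomorphic schemes like
Paillier let a party compute on ciphertexts it cannot read, so an agent
can score data without ever seeing it. Below is a minimal sketch using
the python-paillier library; note this is my own illustration -- numer.ai
reportedly distributes obfuscated/transformed data rather than
homomorphic ciphertexts, so this only shows the general idea:

    # A toy "agent that works only on encrypted information": a linear
    # model scores inputs it can never read.
    # Requires python-paillier (pip install phe).
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Hypothetical model weights, known to the agent in the clear.
    weights = [0.7, -1.2, 0.3]
    bias = 0.5

    # The data owner encrypts features before handing them to the agent.
    features = [2.0, 1.0, 4.0]
    encrypted = [public_key.encrypt(x) for x in features]

    # The agent computes an encrypted score without decrypting anything;
    # ciphertext + ciphertext and scalar * ciphertext are the only
    # operations an additively homomorphic scheme like Paillier allows.
    encrypted_score = sum(w * e for w, e in zip(weights, encrypted)) + bias

    # Only the data owner, holding the private key, can read the result.
    print(private_key.decrypt(encrypted_score))  # about 1.9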

One AI topic I wonder about is how it might be applied for profit and
social control in domestic situations. Imagine an Amazon Echo-like
device in the home, surveilling family members' or roommates'
interactions and arguments and reacting with evaluations and advice
about what they should do differently -- including which purchases to
make -- based on whichever therapy paradigm was being imposed and
whatever values were assumed to be true. What might go really well in
this situation, and what might go disastrously wrong? For example,
could the device be at all sensitive to subtext and power dynamics (who
controls the definition of each situation?), or would it just have a
very surface-level take, like: hey, this one household member's anger
exceeded the permissible limit, so that person now needs to apologize
and swallow a pharmaceutical.
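
To make concrete how crude that surface-level take could be, here is a
deliberately naive sketch; the lexicon, names, and threshold are all
made up:

    # A crude "anger monitor": a keyword lexicon and a fixed threshold,
    # blind to subtext, sarcasm, and power dynamics. All words, names,
    # and numbers here are hypothetical.
    ANGER_LEXICON = {"hate": 3, "stupid": 2, "shut": 2, "never": 1}
    PERMISSIBLE_LIMIT = 4  # arbitrary; who sets it is the open question

    def anger_score(utterance: str) -> int:
        """Sum lexicon weights of the words present; crude bag-of-words."""
        return sum(ANGER_LEXICON.get(word.strip(".,!?").lower(), 0)
                   for word in utterance.split())

    def evaluate(speaker: str, utterance: str) -> str:
        score = anger_score(utterance)
        if score > PERMISSIBLE_LIMIT:
            # The device has no idea why the speaker is angry or who
            # defined the situation; it only sees a number that is too big.
            return f"{speaker}: anger {score} exceeds limit; apology advised."
        return f"{speaker}: within limits (score {score})."

    print(evaluate("roommate_a", "You never listen and this is so stupid, I hate it!"))
    print(evaluate("roommate_b", "I'm fine."))  # flat sarcasm scores zero

A scheme like this can't tell furious from sarcastic, and whoever sets
the permissible limit decides whose anger counts -- which is exactly the
power-dynamics problem.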
I also wonder what keywords/phrases to plug into search engines to find
out more about this possible scenario, and what kinds of sources I
should be looking at -- trade journals? Are any industries, like the
psychology/medical industry, developing AI along these lines? The
closest I've found so far is the concept/field of "affective computing":
https://en.wikipedia.org/wiki/Affective_computing
If anyone has thoughts or suggestions, please share, thanks!


