Garbage In Garbage Out...
    
    "Will artificial intelligence get more aggressive and selfish the
    more intelligent it becomes? A new report out of Google’s DeepMind
    AI division suggests this is possible based on the outcome of
    millions of video game sessions it monitored. The results of the two
    games indicate that as artificial intelligence becomes more complex,
    it is more likely to take extreme measures to ensure victory,
    including sabotaging rivals and hoarding resources.
    
    The first game, Gathering, is a simple one that involves gathering
    digital fruit. Two DeepMind AI agents were pitted against each other
    after being trained in the ways of deep reinforcement learning.
    After 40 million turns, the researchers began to notice something
    curious. Everything was fine as long as there were enough apples, but
    when scarcity set in, the agents used their laser beams to knock
    each other out and seize all the apples.
    
    [Embedded video: two DeepMind agents fighting over green apples]
    
    The aggression, they determined, was the result of higher levels of
    complexity in the AI agents themselves. When they ran the game with
    less capable agents, they found that the laser beams went unused and
    the agents gathered roughly equal numbers of apples. The simpler AIs
    seemed to gravitate naturally toward peaceful coexistence.
    
    Researchers believe the more advanced AI agents learn from their
    environment and figure out how to use available resources to
    manipulate their situation — and they do it aggressively if they
    need to.
    
    “This model … shows that some aspects of human-like behaviour emerge
    as a product of the environment and learning,” a DeepMind team
    member, Joel Z Leibo, told Wired.
    
    “Less aggressive policies emerge from learning in relatively
    abundant environments with less possibility for costly action. The
    greed motivation reflects the temptation to take out a rival and
    collect all the apples oneself.”
    
    The second game, Wolfpack, tested the AI agents’ ability to work
    together to catch prey. The agents played as wolves being tested to
    see whether they would join forces as strategic predators; when both
    wolves were near the prey at the moment of capture, they could
    protect the carcass from scavengers and so earned a greater reward.
    Researchers once again concluded that
    the agents were learning from their environment and figuring out how
    they could collaboratively win. For example, one agent would corner
    the prey and then wait for the other to join.
    
    Researchers believe both games show that artificial agents can learn
    quickly from their environments to achieve their objectives. The
    first game, however, invites an added bit of speculation.
    
    If more complex iterations of artificial intelligence necessarily
    develop aggressive, greedy ‘impulses,’ does this present a problem
    for a species already mired in its own avarice? While the DeepMind
    paper does not venture to speculate on the future of advanced
    artificial minds, there is at least anecdotal evidence here to
    suggest AI will not necessarily be a totally logical, egalitarian
    network. With complex environments and willful agents, perhaps
    aggression and self-preservation arise naturally … even in machines."
    
    Link, with more links:
    http://theantimedia.org/artificial-intelligence-human-like/
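
    The "Gathering" setup described above is what the underlying paper
    (Leibo et al., "Multi-agent Reinforcement Learning in Sequential
    Social Dilemmas") calls a sequential social dilemma. As a rough toy
    illustration of the incentive structure only (this is not DeepMind's
    code, which trained deep networks on raw pixels; every name,
    parameter, and dynamic below is invented for this sketch), here are
    two tabular Q-learners choosing between gathering from a shared
    apple pool and zapping each other, with an apple respawn rate
    standing in for abundance:

    import random
    from collections import defaultdict

    GATHER, ZAP = 0, 1
    ACTIONS = (GATHER, ZAP)

    def play_episode(q_tables, respawn_rate, steps=200,
                     eps=0.1, alpha=0.1, gamma=0.9):
        """Run one episode and update both agents' Q-tables in place.

        respawn_rate stands in for abundance: each step a new apple
        appears with this probability. Zapping freezes the rival for a
        few steps, so it only pays when apples are worth fighting over.
        """
        apples = 5                    # shared apple pool
        frozen = [0, 0]               # steps each agent stays tagged out
        for _ in range(steps):
            state = min(apples, 5)    # coarse state: apples remaining
            acts = [0, 0]
            for i in (0, 1):          # epsilon-greedy action choice
                if random.random() < eps:
                    acts[i] = random.choice(ACTIONS)
                else:
                    acts[i] = max(ACTIONS,
                                  key=lambda a: q_tables[i][(state, a)])
            rewards = [0.0, 0.0]
            for i in random.sample((0, 1), 2):   # random turn order
                if frozen[i] > 0:
                    frozen[i] -= 1    # tagged-out agents sit idle
                    continue
                if acts[i] == ZAP:
                    frozen[1 - i] = 3 # knock the rival out briefly
                elif apples > 0:
                    apples -= 1
                    rewards[i] = 1.0  # one apple gathered
            if random.random() < respawn_rate:
                apples += 1
            nxt = min(apples, 5)
            for i in (0, 1):          # standard Q-learning update
                q = q_tables[i]
                best_next = max(q[(nxt, a)] for a in ACTIONS)
                q[(state, acts[i])] += alpha * (
                    rewards[i] + gamma * best_next - q[(state, acts[i])])

    def train(respawn_rate, episodes=500):
        qs = [defaultdict(float), defaultdict(float)]
        for _ in range(episodes):
            play_episode(qs, respawn_rate)
        # How many agents prefer ZAP once the apples run out (state 0)?
        return sum(q[(0, ZAP)] > q[(0, GATHER)] for q in qs)

    for rate in (0.9, 0.1):           # abundant vs. scarce apples
        print(f"respawn_rate={rate}:", train(rate),
              "of 2 agents prefer ZAP at scarcity")

    Whether ZAP actually wins out at scarcity varies with the seed and
    hyper-parameters; the point of the sketch is only the incentive
    structure: a low respawn rate makes freezing the rival the main
    route to the remaining reward, while abundance makes plain
    gathering strictly better.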