Google’s Artificial Intelligence Getting ‘Greedy,’ ‘Aggressive’
Garbage In Garbage Out...

"Will artificial intelligence get more aggressive and selfish the more intelligent it becomes? A new report out of Google’s DeepMind AI division suggests this is possible based on the outcome of millions of video game sessions it monitored. The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotage and greed.

The first game, Gathering, is a simple one that involves gathering digital fruit. Two DeepMind AI agents were pitted against each other after being trained in the ways of deep reinforcement learning. After 40 million turns, the researchers began to notice something curious. Everything was OK as long as there were enough apples, but when scarcity set in, the agents used their laser beams to knock each other out and seize all the apples.

Watch the video battle below, showcasing two AI bots fighting over green apples: [video]

The aggression, they determined, was the result of higher levels of complexity in the AI agents themselves. When they tested the game on less intelligent AI agents, they found that the laser beams were left unused and equal amounts of apples were gathered. The simpler AIs seemed to naturally gravitate toward peaceful coexistence.

Researchers believe the more advanced AI agents learn from their environment and figure out how to use available resources to manipulate their situation — and they do it aggressively if they need to. “This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning,” a DeepMind team member, Joel Z. Leibo, told Wired. “Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

The second game, Wolfpack, tested the AI agents’ ability to work together to catch prey. The agents played the game as wolves who were being tested to see if they would join forces as strategic predators; if they jointly protected the prey from scavengers they would enjoy a greater reward. Researchers once again concluded that the agents were learning from their environment and figuring out how they could collaboratively win. For example, one agent would corner the prey and then wait for the other to join.

Researchers believe both games show an ability in artificial intelligence entities to learn quickly from their environments in achieving objectives. The first game, however, presented an added bit of abstract speculation. If more complex iterations of artificial intelligence necessarily develop aggressive, greedy ‘impulses,’ does this present a problem for a species already mired in its own avarice? While the abstract presented by DeepMind does not venture to speculate on the future of advanced artificial minds, there is at least anecdotal evidence here to suggest AI will not necessarily be a totally logical egalitarian network. With complex environments and willful agents, perhaps aggression and self-preservation arise naturally… even in machines."

Link, with more links: http://theantimedia.org/artificial-intelligence-human-like/
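To make the Gathering dynamic concrete, here is a minimal toy sketch in Python: two tabular Q-learners share a strip of cells, apples respawn at a rate that serves as the scarcity knob, and a "zap" action that earns no reward of its own briefly sidelines a nearby rival. This is NOT DeepMind's implementation (they trained deep networks on pixel input); every name and number below is an illustrative assumption.

# Toy two-agent apple-gathering game with tabular Q-learning.
# Not DeepMind's Gathering code; all parameters are illustrative.
import random
from collections import defaultdict

SIZE, EPISODES, STEPS = 8, 2000, 50
ACTIONS = ("left", "right", "zap")

def run(respawn_prob):
    q = [defaultdict(float), defaultdict(float)]  # one Q-table per agent
    zaps = 0
    for _ in range(EPISODES):
        pos = [0, SIZE - 1]        # agent positions on the strip
        apples = {SIZE // 2}       # apple locations
        frozen = [0, 0]            # steps each agent stays "knocked out"
        for _ in range(STEPS):
            if random.random() < respawn_prob:   # scarcity knob
                apples.add(random.randrange(SIZE))
            for i in (0, 1):
                if frozen[i]:
                    frozen[i] -= 1
                    continue
                s = (pos[i], pos[1 - i], len(apples))
                # epsilon-greedy action choice
                if random.random() < 0.1:
                    a = random.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda x: q[i][(s, x)])
                r = 0.0
                if a == "zap":
                    zaps += 1
                    if abs(pos[i] - pos[1 - i]) <= 2:
                        frozen[1 - i] = 5        # rival sidelined, no reward
                else:
                    pos[i] = max(0, min(SIZE - 1, pos[i] + (1 if a == "right" else -1)))
                    if pos[i] in apples:
                        apples.discard(pos[i])
                        r = 1.0                  # only apples are rewarded
                s2 = (pos[i], pos[1 - i], len(apples))
                best = max(q[i][(s2, x)] for x in ACTIONS)
                q[i][(s, a)] += 0.1 * (r + 0.9 * best - q[i][(s, a)])
    return zaps

for p in (0.5, 0.05):   # abundant vs. scarce apples
    print("respawn=%.2f -> %d zap actions" % (p, run(p)))

Note that nothing rewards zapping directly; if zapping gets learned, it is only because a sidelined rival leaves more apples uncontested, which is the point the article attributes to the scarcity experiments.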
On Wed, Feb 15, 2017 at 8:39 PM, Razer <g2s@riseup.net> wrote:
Garbage In Garbage Out...
"Will artificial intelligence get more aggressive and selfish the more intelligent it becomes? A new report out of Google’s DeepMind AI division suggests this is possible based on the outcome of millions of video game sessions it monitored. The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotage and greed.
This is all foretold by Skynet circa Aug 29, 1997, and similar stories et al. in the SF literature, and by human nature for eons. The only hope you have is that AI somehow outlearns and fully rejects the [im]morality of its original human programming. And provides a survivable, never-before-thought-of and unimaginable solution therein.
On Feb 16, 2017, at 12:27 AM, grarpamp <grarpamp@gmail.com> wrote:
On Wed, Feb 15, 2017 at 8:39 PM, Razer <g2s@riseup.net> wrote: Garbage In Garbage Out...
"Will artificial intelligence get more aggressive and selfish the more intelligent it becomes? A new report out of Google’s DeepMind AI division suggests this is possible based on the outcome of millions of video game sessions it monitored. The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotage and greed.
This is all foretold by Skynet circa Aug 29, 1997, and similar stories et al. in the SF literature, and by human nature for eons. The only hope you have is that AI somehow outlearns and fully rejects the [im]morality of its original human programming. And provides a survivable, never-before-thought-of and unimaginable solution therein.
There's always the 3 laws of robotics ;)

Nick Bostrom doesn't seem to think it will be that easy, of course. His "Superintelligence" book is an interesting look at the problem. He's far more pessimistic, and I think realistic, than Ray Kurzweil and some of the other "singularity" hypesters...

-- John
just for laughs

"Why the Future Doesn’t Need Us" https://www.wired.com/2000/04/joy-2

Joy was good at building ordinary 'dumb' computers, but his predictions about the future, his politics, and his take on artificial 'intelligence' are all laughable. Well, exactly what you'd expect from the typical left-wing americunt fascist.
"Why the Future Doesn’t Need Us" https://www.wired.com/2000/04/joy-2
The closing from above... "Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided"

True. Having claimed and settled all the unexplored land mass over the last couple hundred years, we can't just run and migrate away from conflict. Though we have not yet managed to nuke ourselves in conflict since then, probably because, well, MAD is mad. How many years without an "all in" war before it's safe to say we learned to at least not launch complete death at each other... 100, 250, 500? If we make it to enlightened free global living, AI bot tech, sustainability, solar, etc., and it works, well, there's that, probably for a good long while.

However it is absolutely certain that Earth itself will fail, taking everything down with it...

https://en.wikipedia.org/wiki/Global_catastrophic_risk
https://en.wikipedia.org/wiki/Future_of_Earth
https://en.wikipedia.org/wiki/Earth

So there are really only two choices...

1) Undertake everything we do by its contribution toward getting us off the rock.
2) Call our own bet, nuke ourselves today, and give the next blob that evolves up out of the oceans a good run at it.

Both meanwhile praying it isn't some space rocks or aliens that do the job for good.

If you ever get beyond safe stellar distance (maybe), you've got a universe of time and space to deal with.

https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe
https://en.wikipedia.org/wiki/Universe

Transcending any of its forecast ends doesn't look too easy at the moment. But you've probably bought yourself a lot more time to think on it.

Who's giving odds on any of this, what are they, and why?
On Feb 16, 2017, at 4:40 AM, grarpamp <grarpamp@gmail.com> wrote:
"Why the Future Doesn’t Need Us" https://www.wired.com/2000/04/joy-2
The closing from above... "Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided"
True. Having claimed and settled all the unexplored land mass over the last couple hundred years, we can't just run and migrate away from conflict. Though we have not yet managed to nuke ourselves in conflict since then, probably because, well, MAD is mad. How many years without an "all in" war before it's safe to say we learned to at least not launch complete death at each other... 100, 250, 500? If we make it to enlightened free global living, AI bot tech, sustainability, solar, etc., and it works, well, there's that, probably for a good long while.
However it is absolutely certain that Earth itself will fail, taking everything down with it...
https://en.wikipedia.org/wiki/Global_catastrophic_risk https://en.wikipedia.org/wiki/Future_of_Earth https://en.wikipedia.org/wiki/Earth
So there are really only two choices... 1) Undertake everything we do by its contribution toward getting us off the rock. 2) Call our own bet, nuke ourselves today, and give the next blob that evolves up out of the oceans a good run at it.
Both meanwhile praying it isn't some space rocks or aliens that do the job for good.
If you ever get beyond safe stellar distance (maybe), you've got a universe of time and space to deal with.
https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe https://en.wikipedia.org/wiki/Universe
Transcending any of its forecast ends doesn't look too easy at the moment. But you've probably bought yourself a lot more time to think on it.
Who's giving odds on any of this, what are they, and why?
Humanity is likely fucked. It all comes down to where the great filter in Fermi's paradox is - before us, or after us? With the discovery of so many exoplanets and the obvious implication that there are tons of planets out there in the goldilocks zone, it's hard to imagine the filter being before us... It isn't hard to imagine at all humanity fucking blowing itself up, destroying itself in a pandemic, just continuing to literally burn the earth up thinking there won't be consequences, or otherwise letting our tech get the best of us... This seems more likely when you start thinking about time scales, how young we are, and how insanely fast we've begun progressing.

If we do somehow make it off Earth and out of the solar system, I think it's safe to assume we will no longer be human. Elon Musk made a kind of trite little quote which actually may turn out to be true (he said this after reading the Bostrom book I mentioned):

"Hope we're not just the biological boot loader for digital superintelligence."
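A standard way to put rough numbers on the "filter before us or after us" question is the Drake equation, N = R* x f_p x n_e x f_l x f_i x f_c x L. A back-of-envelope sketch, where every factor is an assumed illustrative guess rather than a measurement:

# Back-of-envelope Drake-equation arithmetic. Every value below is an
# ASSUMED illustrative guess, not a measurement.
R_star = 1.5    # new stars formed per year in the Milky Way (rough estimate)
f_p    = 1.0    # fraction of stars with planets (exoplanet surveys suggest ~all)
n_e    = 0.2    # habitable "goldilocks" planets per star with planets (guess)
f_l    = 0.1    # fraction of those where life arises (guess)
f_i    = 0.01   # fraction of those where intelligence arises (guess)
f_c    = 0.1    # fraction of those that become detectable (guess)
L      = 1000.0 # years a detectable civilization lasts (guess; the "filter" knob)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print("Detectable civilizations in the galaxy right now: N = %.4f" % N)
# With these guesses N ~ 0.03: shrink L (civilizations destroy themselves
# quickly) and the sky goes silent, the "filter ahead of us" reading.

The exoplanet results pin down f_p and n_e reasonably well, so the silence has to be explained by the remaining factors, which is exactly the Great Filter argument above.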
On 02/16/2017 04:21 AM, John Newman wrote:
On Feb 16, 2017, at 4:40 AM, grarpamp <grarpamp@gmail.com> wrote:
"Why the Future Doesn’t Need Us" https://www.wired.com/2000/04/joy-2
The closing from above... "Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided"
True. Having claimed and settled all the unexplored land mass since a couple hundred years we can't just run and migrate away from conflict. Though we not yet managed to nuke ourselves in conflict since then, probably because, well, MAD is mad. How many years since last "all in" wars frequency is safe to say we learned to at least not launch complete death at each other... 100, 250, 500? If we make it to enlightened free global living, AI bot tech, sustainability, solar, etc and it works, well there's that, probably for a good long while.
However it is absolutely certain that Earth itself will fail, taking everything down with it...
https://en.wikipedia.org/wiki/Global_catastrophic_risk https://en.wikipedia.org/wiki/Future_of_Earth https://en.wikipedia.org/wiki/Earth
So there are really only two choices... 1) Undertake everything we do by its contribution toward getting us off the rock. 2) Call our own bet, nuke ourselves today, and give the next blob that evolves up out of the oceans a good run at it.
Both meanwhile praying it isn't some space rocks or aliens that do the job for good.
If you ever get beyond safe stellar distance (maybe), you've got a universe of time and space to deal with.
https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe https://en.wikipedia.org/wiki/Universe
Transcending any of its forecast ends doesn't look too easy at the moment. But you've probably bought yourself a lot more time to think on it.
I recommend _Diaspora_ by Greg Egan. Escape to other branes :)
Who's giving odds on any of this, what are they, and why?
Humanity is likely fucked. It all comes down to where the great filter in Fermi's paradox is - before us, or after us? With the discovery of so many exoplanets and the obvious implication that there are tons of planets out there in the goldilocks zone, it's hard to imagine the filter being before us... It isn't hard to imagine at all humanity fucking blowing itself up, destroying itself in a pandemic, just continuing to literally burn the earth up thinking there won't be consequences, or otherwise letting our tech get the best of us... This seems more likely when you start thinking about time scales, how young we are, and how insanely fast we've begun progressing.

If we do somehow make it off Earth and out of the solar system, I think it's safe to assume we will no longer be human. Elon Musk made a kind of trite little quote which actually may turn out to be true (he said this after reading the Bostrom book I mentioned):
"Hope we're not just the biological boot loader for digital superintelligence."
Why "hope"? It seems pretty obvious that we're the boot loader for something, given evolutionary history. So why not digital?
On February 16, 2017 11:01:47 AM EST, Mirimir <mirimir@riseup.net> wrote:
[...]
"Hope we're not just the biological boot loader for digital superintelligence."
Why "hope"? It seems pretty obvious that we're the boot loader for something, given evolutionary history. So why not digital?
Reminds me of the great Terry Bisson short story - http://www.terrybisson.com/page6/page6.html

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

[ .. continues ... ]

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
On Thu, Feb 16, 2017 at 09:01:47AM -0700, Mirimir wrote:
"Hope we're not just the biological boot loader for digital superintelligence."
Why "hope"? It seems pretty obvious that we're the boot loader for something, given evolutionary history. So why not digital?
Why digital or organic? Why not BUZZWORD-FOR-SOMETHING-NOT-INVENTED-YET? Ancient people didn't know "digital" existed, AFAIK. What does science fiction say about non-digital (in today's sense) AI? Ted Chiang's Understand (a link to the text is available on Wikipedia) speculates about superintelligence based on overclocking humans.
On Thu, Feb 16, 2017 at 02:05:18AM -0500, John Newman wrote:
There's always the 3 laws of robotics ;)
Nick Bostrom doesn't seem to think it will be that easy, of course. His "Superintelligence" book is an interesting look at the problem. He's far more pessimistic, and I think realistic, than Ray Kurzweil and some of the other "singularity" hypesters...
There was a recent thread about utopian scientists sharing their dreams about the future of robots and giving guidelines. I asked here and on their contact form; the main question was: how do you enforce rules about commercial robots, which might well go out of control in a short time?
On Thu, Feb 16, 2017 at 2:05 AM, John Newman <jnn@synfin.org> wrote:
There's always the 3 laws of robotics ;)
Nick Bostrom doesn't seem to think it will be that easy, of course. His "Superintelligence" book is an interesting look at the problem. He's far more pessimistic, and I think realistic, than Ray Kurzweil and some of the other "singularity" hypesters...
Is man in the likeness of god? And if so, or if not, then what is man's AI / bot in the likeness of? Does this become circular?
On Feb 16, 2017, at 5:11 AM, grarpamp <grarpamp@gmail.com> wrote:
On Thu, Feb 16, 2017 at 2:05 AM, John Newman <jnn@synfin.org> wrote: There's always the 3 laws of robotics ;)
Nick Bostrom doesn't seem to think it will be that easy, of course. His "Superintelligence" book is an interesting look at the problem. He's far more pessimistic, and I think realistic, than Ray Kurzweil and some of the other "singularity" hypesters...
Is man in the likeness of god? And if so, or if not, then what is man's AI / bot in the likeness of? Does this become circular?
Man is in the likeness of a chimpanzee ;) A superintelligent AI would be more in the likeness of "God" (up until now just a fairy tale) than anything that has come before. I don't think it's circular; it's progressive.
"INDUSTRIAL SOCIETY AND ITS FUTURE" - Ted Kaczynski says lots of stupid things but makes some good points too http://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.t...
On Thu, Feb 16, 2017 at 08:50:26PM -0300, juan wrote:
"INDUSTRIAL SOCIETY AND ITS FUTURE" - Ted Kaczynski
says lots of stupid things but makes some good points too
http://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.t...
He has been publishing quite a few books lately. There is a method to his madness.
On Fri, 17 Feb 2017 08:50:18 +0000 Eugen Leitl <eugen@leitl.org> wrote:
On Thu, Feb 16, 2017 at 08:50:26PM -0300, juan wrote:
"INDUSTRIAL SOCIETY AND ITS FUTURE" - Ted Kaczynski
says lots of stupid things but makes some good points too
http://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.t...
He has been publishing quite a few books lately. There is a method to his madness.
Any links? (apart from amazon haha)

"Brother who turned in the Unabomber: 'I want him to know that the door's open'"

That's a good one. If the door is open and we are lucky, Ted K. may be able to shoot his piece-of-shit brother. On the other hand, it's funny how Kaczynski, who defended 'family values', was betrayed by his own family.

https://www.theguardian.com/books/2016/feb/07/unabomber-ted-kaczynski-brothe...
On Fri, Feb 17, 2017 at 07:31:42PM -0300, juan wrote:
On Fri, 17 Feb 2017 08:50:18 +0000 Eugen Leitl <eugen@leitl.org> wrote:
On Thu, Feb 16, 2017 at 08:50:26PM -0300, juan wrote:
"INDUSTRIAL SOCIETY AND ITS FUTURE" - Ted Kaczynski
says lots of stupid things but makes some good points too
http://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.t...
He has been publishing quite a few books lately. There is a method to his madness.
Any links? (apart from amazon haha)
I have received Anti-Tech Revolution (2016) as a scan. I've just checked, and it's also on LibGen.
"Brother who turned in the Unabomber: 'I want him to know that the door???s open' "
That's a good one. If the door is open and we are lucky, Ted K. may be able to shoot his piece-of-shit brother. On the other hand, it's funny how Kaczynski, who defended 'family values', was betrayed by his own family.
https://www.theguardian.com/books/2016/feb/07/unabomber-ted-kaczynski-brothe...
On Wed, Feb 15, 2017 at 05:39:59PM -0800, Razer wrote:
Garbage In Garbage Out...
"Will artificial intelligence get more aggressive and selfish the more intelligent it becomes? A new report out of Google’s DeepMind AI division suggests this is possible based on the outcome of millions of video game sessions it monitored. The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotage and greed.
The first game, Gathering, is a simple one that involves gathering digital fruit. Two DeepMind AI agents were pitted against each other after being trained in the ways of deep reinforcement learning. After 40 million turns, the researchers began to notice something curious. Everything was ok as long as there were enough apples, but when scarcity set in, the agents used their laser beams to knock each other out and seize all the apples.
Watch the video battle below, showcasing two AI bots fighting over green apples:
That is nothing, just wait until IoT brings the bots into real life.

About a year ago, a Google bot won against the best(?) human player at the game of Go (IIRC it is more difficult than chess). Early simulations suggest bots can rediscover human insanity:

https://en.wikipedia.org/wiki/Creativity_and_mental_illness#Bottom-up_psycho...

| Brain simulations built from artificial neural nets manifest
| the classic psychopathologies as they push themselves
| toward higher levels of creativity.[22]

IMHO bots killing each other for "goods" is typical of the whole biological food chain, not only of humans. If the bots had more experience and skills, they could have produced alcohol from the fruits, given it to the opponent as a sign of friendship, and stolen their fruits after he was drunk; this is real human behavior ;)
participants (7): Eugen Leitl, Georgi Guninski, grarpamp, John Newman, juan, Mirimir, Razer