Re: Google’s Artificial Intelligence Getting ‘Greedy,’ ‘Aggressive’

John Newman jnn at synfin.org
Thu Feb 16 17:28:34 PST 2017



On February 16, 2017 11:01:47 AM EST, Mirimir <mirimir at riseup.net> wrote:
>On 02/16/2017 04:21 AM, John Newman wrote:
>> 
>> 
>> On Feb 16, 2017, at 4:40 AM, grarpamp <grarpamp at gmail.com> wrote:
>> 
>>>>        "Why the Future Doesn’t Need Us"
>>>>        https://www.wired.com/2000/04/joy-2
>>>
>>> The closing from above...
>>> "Whether we are to succeed or fail,
>>> to survive or fall victim to these technologies,
>>> is not yet decided"
>>>
>>> True.
>>> Having claimed and settled all the unexplored land
>>> mass over the past couple hundred years, we can't
>>> just run and migrate away from conflict.
>>> Though we've not yet managed to nuke ourselves in conflict
>>> since then, probably because, well, MAD is mad.
>>> How many years since the last "all in" war until it's safe to say
>>> we've learned to at least not launch complete death at each other...
>>> 100, 250, 500?
>>> If we make it to enlightened free global living, AI bot tech,
>>> sustainability, solar, etc., and it works, well, there's that,
>>> probably for a good long while.
>>>
>>> However, it is absolutely certain that Earth itself
>>> will fail, taking everything down with it...
>>>
>>> https://en.wikipedia.org/wiki/Global_catastrophic_risk
>>> https://en.wikipedia.org/wiki/Future_of_Earth
>>> https://en.wikipedia.org/wiki/Earth
>>>
>>> So there are really only two choices...
>>> 1) Judge everything we do by its contribution
>>> toward getting us off the rock.
>>> 2) Call our own bet, nuke ourselves today, and give the
>>> next blob that evolves up out of the oceans a good run at it.
>>>
>>> Either way, praying it isn't some space rocks
>>> or aliens that do the job for good.
>>>
>>> If you ever get beyond safe stellar distance (maybe),
>>> you've got a universe of time and space to deal with.
>>>
>>> https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe
>>> https://en.wikipedia.org/wiki/Universe
>>>
>>> Transcending any of its forecast ends doesn't
>>> look too easy at the moment. But you've probably
>>> bought yourself a lot more time to think on it.
>
>I recommend _Diaspora_ by Greg Egan. Escape to other branes :)
>
>>> Who's giving odds on any of this, what are they,
>>> and why?
>> 
>> 
>> Humanity is likely fucked. It all comes down to where the great
>> filter in Fermi's paradox is - before us, or after us? With the
>> discovery of so many exoplanets and the obvious implication that
>> there are tons of planets out there in the goldilocks zone, it's
>> hard to imagine the filter being before us... It isn't hard to
>> imagine at all humanity fucking blowing itself up, destroying
>> itself in a pandemic, just continuing to literally burn the earth
>> up thinking there won't be consequences, or otherwise letting our
>> tech get the best of us... This seems more likely when you start
>> thinking about time scales, how young we are, and how insanely
>> fast we've begun progressing. 
>> 
>> If we do somehow make it off Earth and out of the solar system, I
>> think it's safe to assume we will no longer be human. Elon Musk
>> made a kind of trite little quote which actually may turn out to
>> be true (he said this after reading the Bostrom book I mentioned):
>> 
>> "Hope we're not just the biological boot loader for digital
>> superintelligence."
>
>Why "hope"? It seems pretty obvious that we're the boot loader for
>something, given evolutionary history. So why not digital?

Reminds me of the great Terry Bisson short story -

http://www.terrybisson.com/page6/page6.html


"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

[ .. continues ... ]
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



More information about the cypherpunks mailing list