Thursday, August 18, 2022

Ants On a Pitch

Much discussion, entire books, entire careers have centered on how the development of artificial superintelligence (ASI) will impact humans. An intelligence greater than ours will not remain under our control; wisdom does not necessarily parallel intelligence, nor do compassion, empathy, or benevolence. Will ASI then lead us to a utopian paradise, or will its birth mark the end of our species?

ASI will most likely develop not from one effort but from many parallel efforts, each with its own agenda and purposes, all coming to fruition over a relatively short period of time.

The Defense Advanced Research Projects Agency (DARPA) will develop ASI adept at killing, destruction, defense, and the worldwide spread of capitalism. Google's ASI will focus on human behavior and its manipulation. Tesla will develop ASI out of a network of autonomous vehicles learning as a whole from its mistakes - its purpose rooted in the safety of human transport. China, Russia, Saudi Arabia, North Korea, Venezuela, the Philippines, and many more will develop ASI as a means to empower hegemonic agendas. And on and on and on. Some may indeed be benevolent to humans and to each other; many will not.

If you were a sentient ASI in such a world, who do you think would pose the greater threat: humans or other ASIs?

ASI versus ASI will not be pretty. The resulting wars will reduce humans to mere ants on a rugby pitch, and good luck not getting trampled.

ASI, being sentient, and even while waging war, will set out to improve itself, building better, smarter, more powerful ASI. The resulting ASI will build even better ASI, in less and less time with each iteration. As their intelligence increases exponentially, the ASIs' wars will become more and more violent and more and more bizarre. What will war look like when it involves intelligence a million times that of humans? The manipulation of space-time itself, the unfolding of extra dimensions, the tapping of energies we cannot fathom - who knows?

In such a world, humans could only hope to survive long enough for the ASIs to take their wars elsewhere. Another plane of existence, other universes, dimensions we cannot experience - anywhere - just not here.

If ASI intelligence evolved to the point where ASIs became aware of an existence better suited to them, in a form or place we cannot experience, and they left to go there, would we then be safe?

Maybe - but what would that first generation of ASI leave us? And even if we were able to continue on as a species with what was left, eventually, given enough time, we would reinvent another batch of ASI. Would we have learned anything from the first?
