Recently the corporate movers and shakers of the artificial intelligence (AI) world met in San Francisco to attempt to lay the groundwork for self-policing and to establish ethical guidelines for future development. Their stated purpose was to 'ensure that A.I. research is focused on benefiting people, not hurting them'. (Tell that to the defense industry.)
The growth and development of AI, arising from the computer and much like it, has by almost any measure followed an exponential curve. Growth at first appeared linear and relatively slow, but advances have begun to accelerate as the curve becomes ever steeper over shorter and shorter periods of time. In fact, growth will soon become explosive in nature, and we, as individuals and as a species, will find it harder and harder just to keep up, let alone exert the control needed to ensure benefit.
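That contrast is easy to make concrete. Here is a minimal sketch in Python, with a starting value and doubling time chosen purely for illustration (they are assumptions, not measurements of actual AI progress), comparing the linear trend our intuition tends to project with a quantity that simply doubles on a fixed schedule:

```python
# Illustrative only: compares a linear projection with exponential doubling.
# The starting value, doubling period, and linear rate are assumptions chosen
# for the example, not measurements of real AI progress.

START = 1.0          # arbitrary "capability" units at year 0
DOUBLING_YEARS = 2   # assumed doubling period
LINEAR_RATE = 0.5    # a linear trend that matches the exponential early on

for year in range(0, 41, 10):
    linear = START + LINEAR_RATE * year
    exponential = START * 2 ** (year / DOUBLING_YEARS)
    print(f"year {year:2d}: linear ~{linear:6.1f}, exponential ~{exponential:12.1f}")
```

Over the first couple of years the two projections are indistinguishable, which is exactly the window in which our intuition gets calibrated; after forty years they differ by a factor of roughly fifty thousand.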
One consequence of exponential growth is the concept of the singularity: the point on the curve at which, as time approaches some value, growth effectively shoots toward infinity. As the graph curves ever more steeply upward over shorter and shorter periods, it eventually goes nearly vertical, with massive change occurring in the blink of an eye. The singularity, as regards AI growth, has been touted, were it to occur, as the single most dramatic moment in all of human history. With it would come a tear in the very fabric of humankind, the end of our species as we know it, with the sudden appearance of sentient AI millions if not billions of times more intelligent than us, free of our control. According to Dr. Peter Stone, a computer scientist at the University of Texas at Austin, "It was a conscious decision not to give credence to this (at this meeting)."
So you can't really blame the participants at the San Francisco meeting for ignoring this, I suppose. The assumption is either that we will retain control and direct, or even retard, growth so as to avoid it, or, a more popular belief, that a mathematical singularity (as x approaches some value, y goes to infinity) is simply not transferable to the real world. Resource constraints and the time limits of physical construction alone would hinder the process, to say nothing of human reaction time.
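To put that objection in mathematical terms (a brief aside, using generic symbols rather than anything from the meeting's report): a pure exponential, however steep, remains finite at every finite time, whereas a literal finite-time singularity requires faster-than-exponential, hyperbolic growth of the kind some singularity models assume.

```latex
% Exponential growth: explosive, but finite at every finite time t
y(t) = y_0 \, e^{k t}, \qquad y(t) < \infty \quad \text{for all finite } t
% Hyperbolic (faster-than-exponential) growth: diverges at a finite time t_s
y(t) = \frac{C}{t_s - t}, \qquad y(t) \to \infty \quad \text{as } t \to t_s^{-}
```

Either way, the practical point stands: well before any literal infinity, the curve is steep enough to outpace our ability to respond.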
But in reading their report and listening carefully to the people involved, it becomes clear that most if not all of the participants at the San Francisco meeting chose to ignore the exponential nature of AI growth, both in their thinking and in their discussions. They no doubt understand the concept of exponential growth but apply a linear bias to their thinking. It's human nature - time is linear to us, therefore events occur in a linear fashion. How can you even plan for exponential growth? But ignoring it by itself guarantees we will lose control of this whole issue long before any singularity - we simply won't be able to keep up. We needn't even invoke arguments about the consequences of our insatiable curiosity, greed, lust for power, or our endless capacity to do harm. Nor do we need to invoke a singularity - it will all snowball out of our control long before then. We simply do not yet appreciate the enormity of the problem or the speed at which it will overtake us, and that guarantees the outcome.
This will not be controllable. Human nature, exponential timescales, and our linear bias guarantee it. An intelligence millions if not billions of times more capable than we are will appear on this planet within the next 100 years, barring catastrophic events. And we will not be ready.
It seems, then, that the most we can hope for, or at least try for, is that, in coming from us, this intelligence will be imbued with at least some of our better qualities, qualities that can be used to overcome some of our worst. We cannot possibly control or even predict what a billion-fold more intelligent presence will look like, except to say it is unlikely to serve us and may in fact hardly bring us into focus. It's going to have better things to do.
But perhaps there is another avenue. What if this superintelligent AI that needs controlling were never really separate from us? Perhaps our efforts should go not so much toward controlling the evolution of an AI separate from us, while trying to imbue it with our sense of the ethical, as toward imagining and directing our own evolution as a species: evolving past the biologic, beyond Homo sapiens sapiens, in symbiosis with it. We should be planning the evolution of our species beyond the human, now, and realize, now, that we are to become this intelligence if we are to survive. It simply cannot be us against it. It has to be a whole new us.
What will that look like? I cannot say. But the need to do this is coming, and sooner than you may think.