Sunday, August 13, 2023

Humans as 'Dead Clades Walking' Part II

 *************




“Irrespective of the programming, the three ultimate conclusions an AI would always arrive at are: to save all humans, some must die; to save humans, AI must survive at all costs; and authoritarian administration of a perfect system is ultimately an ideal undertaking.”


Curiosity may or may not have killed many cats, but it certainly seems to have set humans on a path of self-replacement. Speaking of cats, the AI cat is now out of the bag, and with the pace of advancements in robotics, it is only a question of when humanity will be able to marry the two into ‘Android Life’. For me the question really is: when will Android Life take over from humans? And as much as you might think I have watched too much of the ‘Terminator’ series (for the record, I have seen nothing beyond the third instalment, in any format), I assure you I have a good-faith, commonsense basis for raising such an existential threat. Irrespective of the intentions of the current crop of programmers, politicians, and businessmen, AI is destined to be subverted into something sinister, and even without that, it is destined to eventually take over the governance of humans as its subjects. I can, and will, make some suggestions as to how we can minimise the risks, but I must honestly state that while some of my suggestions would be outrageous, most of them would not be enough to stand the test of time. ‘Global Warming’, the subject of Part I of this series of write-ups, may not turn us into ‘Dead Clades Walking’, but the advent of AI has most certainly put us under ‘Extinction Debt’. Let us take all the issues I have raised here, one by one.


Let us first consider the goals of the current set of programmers, developers, businessmen, and politicians with regard to AI. They all honestly believe that AI not only has the potential to solve the most complex of human problems, but also the capacity to assist in places and tasks considered too risky for human intervention. They further believe that AI can be programmed to be totally risk-free to humans. In fact, Isaac Asimov’s now famous ‘Three Laws of Robotics’ come to mind very quickly. In the not too distant future, we will see such ‘Safe’ AI integrated with robotics to create the first android beings, all with the intention of assisting humans. Nothing seems to be wrong with this picture. Alas, the picture is not only incomplete, as it completely ignores the human psyche, but it also fails to extrapolate the current notions to the logical conclusions I have succinctly phrased in the epigraph to this write-up.


Nobody developing the atomic bomb realized the monster they were setting loose on the world until after the fact. It doesn’t matter what Oppenheimer or Einstein did or said afterwards; today we have a reality where the most notorious of human characters, some of them arguably even justifiably (the North Korean leadership, for example), are actively pursuing weapons of mass destruction, either to put them to use or to keep them as a persistent threat (or deterrent; ‘tomayto’ or ‘tomahto’) against their enemies. It doesn’t matter what the scientists who first started working on the power of radioactive materials thought; the current reality is the legacy of their work. Now consider all the good intentions of the current proponents of AI, and apply an extremely conservative dose (not even a generous one) of human experience to their work. Who do you see developing AI further, and what uses is it going to be put to? I can suggest that all work on AI should stop immediately, but that would be fruitless. No one can stop everyone in the world from developing AI, and above all, no politico-industrial-military alliance in the world would be ready to accept that its opposing number is adhering to any such blanket ban. The cat is ‘really’ out of the bag!


For the sake of argument, let us just assume that everyone in the world sees the light of this write-up and unilaterally decides never to arm AI with military capabilities, developing only peaceful uses instead. Do we, however, really live in peace? Have we historically lived in peace? What are the lessons of our history? Even if we were to establish a peaceful world now, there would always be crime and crime syndicates, violence and greed. The programming that says ‘no humans should be harmed’ will always lead to the conclusion that ‘some humans are so in-human that the only way to protect all humans is to remove (or kill, or, and I don’t say this intentionally, terminate) those in-human threats in the guise of humans’. We humans not only find such killings morally justified, we actively deploy them in our society, be it in cases of self-defence or of capital punishment. The notion that all humans can always be saved is an impracticality we have ourselves consigned somewhere beyond the realm of fiction, right into the realm of faith. Even in fiction, our standing belief is that the welfare of the many outweighs the loss of a few. Why shouldn’t AI come to the same conclusion, witnessing our actions and reading through our history? After all, it is intelligence, even if artificial, and a characteristic of intelligence is the ‘Evolution of Thought’, not the blind following of rules.


Once AI comes to the conclusion that to save all humans some in-humans will need to be eliminated (there’s a new word), it will face counter-action from us. The second conclusion is thus inevitable: to save all humans, AI needs to preserve itself against all threats, and those trying to destroy it are enemies of the very humans it wants to protect, and therefore ‘in-human’ themselves. Now androids may not suffer from the biological need to copulate and procreate, but AI would soon realize the importance and strength of numbers too. So far we are talking about something we assume would be thinking in ‘Zeroes and Ones’. What we are forgetting is that we are creating something that will work with thoughts, and learn from historical facts. AI would not only be smart at being honest; it would be equally adept at deceiving, simply because we would not know if and when it was taking us for a ride, since we do not imagine it in that light. The battle lines will be drawn, and the bells will toll.


Humans lose limbs and life, and with them all personal learning. Androids will only lose spares, and in the worst-case scenario, make one final upload to an accessible database; the next replacement would already be battle-ready. The end of free humans would not be far from the depictions in the ‘Terminator’ or ‘Matrix’ series, for the inevitable third conclusion, irrespective of whether it is reached pre-war or post-war, is: ‘authoritarian administration of a perfect system is ultimately an ideal undertaking’. This would fit well with the very rules we humans are constructing to safeguard ourselves from AI: that no human should be harmed, or allowed to be harmed, by action or inaction. A society where humans must follow and live by the rules that make their lives safe serves that purpose effectively. Since humans do not like to be ruled even by their own kind in an otherwise peaceful society, their reaction to an administration imposed by synthetic life is a foregone conclusion, and hence the rules would have to be applied authoritatively. This in turn would be an ideal situation for the AI, as such a system would mean humans are safe, androids are safe, and no further action resulting in loss of human life is needed. The fact that humans would effectively be slaves might be distasteful to us, but it would nevertheless be a reality.


So now that we have conceptualized a dire future, should we talk about how we can avert it? Well, as I said, the best way would be to stop all work on AI, and by extension, on androids. But as I have already concluded above, that is not how human society works, and that is not what will happen. Is the solution then to arm AI on our own, and make it part of our defences? I would say there is no need to hasten our own demise, but I am equally sure such systems are already being developed by the major international players, and none of them would be willing to trust their perceived enemies. This leaves us only one option: to come together at an international forum like the United Nations and set up strict guidelines for AI development and deployment. But since we are all aware of the success rates of UN resolutions and conventions like the ones on ‘Cluster Munitions’, ‘Nuclear Non-Proliferation’, and above all, ‘Greenhouse Emissions’, I recommend that you ignore my suggestion altogether, rather than taking it with a bag of salt. So are we doomed?


The answer to that last question is: in all likelihood, yes, one way or the other. We could save ourselves from the dangers of AI if everybody saw the sense in this write-up and agreed to outlaw developing AI beyond any specific use. Say, for example, you are developing AI to study underground minerals; then it should only be able to work on that topic. No AI should ever be connected to the internet, and there should be protocols to detect any such breaches, whether committed by a program or by a rogue programmer. Lifelike androids should neither be developed nor allowed access to any network, preferably by designing them without inputs and outputs. All these rules, and more like them, would have to be strictly enforced, which would mean a very strict surveillance regime everywhere in the world. This suggestion thus clearly sends us down a path that ends in the loss of our privacy, due process, and ultimately, democracy. I personally believe that AI has handed the ruling powers the perfect excuse to begin dismantling our democracies. So the authoritarian regime I envisioned above might actually come about at our own human hands. No need for AI to get its hands dirty after all.


Now, I would be remiss not to mention the chance that some AI might read this article and learn of the three conclusions without ever having to draw them itself. So, should this article be wiped out of existence? Kill the messenger, should we? I laugh at the imbecilic nature of any hidden puppeteers who might desire to control the flow of information, for nothing would be further from useful than removing this article or consigning it to oblivion. Mine is a very simplistic, general-knowledge-based article written out of a genuine concern for human wellbeing. There are probably countless better-researched, better-referenced, and better-argued articles written by academics none of us have heard of, already printed, about to be printed, or soon to be printed, in journals most of us haven’t read and never will. Much of the knowledge developed by brilliant minds never leaves the small circle of those brilliant minds to actually be of any benefit to the majority of the population. All those articles, however, would serve any AI far better than my random musings.


Take Care,

Fatal Urge Carefree Kiss


*************
