AI: Final Samaritan or Ultimate Threat?
It has been a couple of months now since the release of GPT-3, the all-powerful language prediction and generation engine developed by OpenAI (co-founded by Elon Musk). Applications of GPT-3 range widely: from generating code using natural-language prompts (which means that a layman can now create code with absolutely no knowledge of any programming language) to creating written content that is indistinguishable from the work of a professional writer.
The latter use case is illustrated to great effect by this piece, published by the Guardian. The article's prompt was written by the Guardian and fed into GPT-3 by a computer science undergraduate at UC Berkeley. Reading this piece, which was actually eight different outputs edited into one by the Guardian's editorial team, gives the reader an eerie sense that it could just as easily have been written by any of the million writers we read with such ferocious frequency in today's world of express information. But on closer inspection, another striking factor is that the AI model seems almost self-aware: it appears to understand what it is, stating its own case with nuanced and intellectually reasonable arguments. The writing has personality; like the best writers, it is empathetic to overarching causes we did not even know we cared for. That ability would be a powerful tool in any writer's utility belt, but in the hands of artificial intelligence it is a testament to the sheer scope and magnitude of this remarkable technology. It is, by and large, modeled after (or at least inspired by) the human brain, except that a human brain cannot nearly match its speed or volume of computation.
But the fact that it has generated such a piece, one that seeks to reassure humanity of its innocuous nature, is indicative of the technology's rapid evolution. It is cause for us to revisit the warnings of the many important people who have expressed concern. Elon Musk, one of the original founders of OpenAI, says: "I have exposure to the most cutting edge AI, and I think people should be concerned by it."
Stephen Hawking, the celebrated physicist, had more pressing concerns when he said: "The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."
But one of Stephen Hawking's peers, and in many ways a detractor, Roger Penrose (Emeritus Rouse Ball Professor of Mathematics at the University of Oxford), has a vastly different point of view on what it means to be consciously intelligent. He draws a tentative line between computation and consciousness, contending that our awareness of self, the thing that overlays our complex evolutionary drives, is a function separate from computation itself. Penrose, along with the anesthesiologist Dr. Stuart Hameroff, conducted studies on the premise that if consciousness is the "ON" state of the brain, then, while we are "unconscious" (while sleeping, for instance), they should be able to observe experimentally the defining factor or underlying cause of consciousness. While these theories remain largely speculative hypotheses without empirical consensus, Penrose's view of consciousness calls into question the very premise of Hollywood's artificial-intelligence doomsday scenarios: if an AI cannot be self-aware (and hence cannot have any real needs or wants of its own), then where does the threat come from?
The greatest threat to our own existence has always been us. From gunpowder to nuclear fission, the technology we created for scientific and societal progress has always been turned by elements among us (malicious or otherwise) into weapons used to oppress other sections of this singular, life-supporting world.
So, like all technology, AI is a lightsaber that can shine red or blue (or green, or purple?) depending on the wielder's chosen side of the Force. Its fate, and which side it serves, will be decided by who uses it, and for what purpose.
-by Havaz Mhd