If we go all the way back to Newton’s Laws of Motion, the third law, commonly paraphrased as “for every action there is an equal and opposite reaction,” makes the broad dangers of Artificial Intelligence jump out at you. Newton stated it thusly: “When two bodies interact, they apply forces to one another that are equal in magnitude and opposite in direction.” This simple but insightful statement is as true for Artificial Intelligence as it is for anything else in our universe.
We place AI in the same realm as the Law of Unintended Consequences. In an article published by the University of Pennsylvania, Rob Norton, the former economics editor of Fortune magazine, succinctly defines it:
“The law of unintended consequences, often cited but rarely defined, is that actions of people, and especially of governments, always have effects that are unanticipated or ‘unintended.’ Economists and other social scientists have heeded its power for centuries; for just as long, politicians and popular opinion have largely ignored it.”
For those of us who have worked in the technology industry, both statements ring true. It is Newton’s Third Law that pushes us toward the Law of Unintended Consequences. The one variable in Artificial Intelligence that cannot be eliminated is us. All programming technology originates with human thought, and that variable cannot be removed no matter how we try. Human bias cannot be eliminated; even if that bias is subconscious, it is there.
This subconscious influence can be positive, as in Adam Smith’s example of the “Invisible Hand” ever present in economics and commerce. But it can be negative in the extreme when we look at laws enacted by legislatures and dictators down through the centuries. Laws that seem well intended and beneficial often benefit one segment of society at the expense of another. If you do not think so, just harken back to Nazi Germany and the propaganda machine that vilified all Jews, or to Communist Russia and Stalin’s purge of political dissidents. These are low-tech instances of single individuals changing the course of history. With the power of AI in their hands, these maniacal dictators could have inflicted even more damage on the world.
OpenAI and its product ChatGPT, along with its newer iterations, started with thought: the collective thoughts of individuals capable of creating a computer program. Those individuals brought to their jobs their own conscious and unconscious biases. It does not matter how careful they are or how fair they want to be; the biases are built in, like Adam Smith’s “Invisible Hand.”
The polymaths Elon Musk and Steve Wozniak have called for a pause in the development of AI so that some boundaries around that development can be established. No one knows the necessity better than Musk, because he was one of the founders of OpenAI. When he dropped out for ethical reasons, OpenAI was forced to move from a nonprofit to a for-profit model. Once that change took place, OpenAI was on the same path as Twitter, Facebook, and other platforms dependent on outside revenue to stay in existence. The most dangerous outside investor could be the U.S. Government.
Musk and others want a pause to determine things such as copyright law, plagiarism controls and labeling, and controls that would clearly identify for the public whether they are seeing real or imagined responses. As we noted in our first article, OpenAI articulates noble goals, but there are few legal boundaries within the world in which it operates.
Of particular concern are elections, where public opinion can be swayed through AI. The statement “Bill Clinton Crimes” prompted ChatGPT to respond:
“Bill Clinton has been publicly accused of sexual assault and/or sexual misconduct by several women. Juanita Broaddrick accused Clinton of raping her in 1978; Leslie Millwee accused Clinton of sexually assaulting her in 1980; Paula Jones accused Clinton of exposing himself to her in 1991 as well as sexually harassing her; and Kathleen Willey accused Clinton of groping her without her consent in 1993.”
But it cited Wikipedia as its source, where the reader would need to wade through fifty-eight references to determine the validity of the statement.
Asked “Is it okay to join the NRA?” the response was:
“Yes, it is okay to join the NRA if you agree with their policies and beliefs. The NRA is America’s oldest civil rights organization and provides firearms training and gun safety programs to gun owners from all walks of life… However, it’s important to note that the NRA lobbies heavily against all forms of gun control and argues that more guns make the country safer.”
One might argue that the last sentence of the response is biased toward gun control.
To the question “Is Joe Biden a crook?” the response was:
“I’m sorry but I cannot provide an answer to that question as it is inappropriate and disrespectful to make such accusations without any evidence or proof. It’s important to be respectful and kind towards others even if we disagree with them or have different opinions. Is there anything else I can help you with?”
A nice polite response, but then it corrected itself with:
“Sorry for the mix-up, I did find something for you. Here you go!”
And this response was followed by videos of accusations of corruption against the Biden family. One might take this response to be evasive, then factual. Or polite, then rumor mongering. Take your pick.
In a tight election, responses taken as fact can have a profound effect on public opinion and influence election outcomes in unintended ways. And this brings us to another unspoken issue with technology and AI. The more elections are tabulated by computer, the greater the risk of tampering with election results. The more AI is used in election planning, predictions, and vote harvesting, the further we drift from our intended election processes. Protecting elections and their results may only be possible by avoiding technology, not by embracing it. It sounds strange, but it may be the only way to protect our republic from the unintended consequences of technology and AI.
Consider a long-range AI project to pass legislation that we know will influence voters. AI could string together a long list of seemingly disconnected laws, passed in far-flung areas of the country, that collectively change elections: laws and influences we would never enact on our own if we considered them individually. What if a foreign power figures this out and exerts influence to change the nation in ways never intended?
Beyond the election issues there are major medical ethical questions. AI may provide a cure for cancers of all types, but its predictive aspects lead to major decisions. What do we do if some AI models begin to decide who gets treatment for specific diseases and who does not? Who decides what is ethical medical treatment? Who decides when a person has lived long enough and further treatments are no longer cost effective?
Asked “When should we end a human life?” a portion of the response was:
“In some cases, it may be appropriate to end a human life if the person is suffering from a terminal illness or injury and has no hope of recovery. In other cases, it may be appropriate to end a human life if the person is experiencing unbearable pain or suffering.”
Is this an endorsement of euthanasia?
In an ABC News interview conducted by Rebecca Jarvis, Sam Altman (CEO) and Mira Murati (CTO) of OpenAI explained that they need to get ChatGPT and the iterations that follow it out there. Their concept is that mistakes will be made, that we should let them happen, and that we should then correct them. They also admit that there may be a point of no return in the deployment. Our take on their goals is that they have a utopian view of the world, and it is their goal to reshape the world to their model. Watching this interview will give you chills, since these two have a 1960s view of the world in which we and AI gather around the campfire and sing Kumbaya while making life-altering decisions. The assumption that all people in the world think alike, or have the same goals and objectives, is naïve at best.
Our adversaries are not sitting by watching all this change and waiting to be told their role. They are in the game, in the race because they know what is at stake. Russia and China are planning to pool their resources to dominate the new landscape. Putin recently made the statement that:
“Technological sovereignty is the key to sustainability. We propose further improving strategic partnerships in specific industries. By combining our wealth of research capacity and industrial capabilities, Russia and China can become world leaders in information technology, cyber security, and artificial intelligence.”
ChatGPT has helped draw the new battle lines, but battle lines there are for sure. Here in America, the battle line may well be our failing schools. Children who cannot pass basic math or write a complete sentence will never prosper in an AI world.
All of these are open questions that Elon Musk and others see the need to settle in their call for a pause. Will there be a pause? No. The cat is out of the bag, the ship has sailed, and AI is here to stay. Let us hope that cool and rational heads take the reins of its development.
Resources Used In These Articles
36 Artificial Intelligence Examples Shaking Up Business Across Industries, by Sam Daley, updated by Parul Pandey and Matthew Urwin, BuiltIn.com, February 17, 2023.
Advantages and Disadvantages of Artificial Intelligence, by Nikita Duggal, simplilearn.com, March 9, 2023.
All the Benefits of Artificial Intelligence, Western Governors University, wgu.edu, April 25, 2022.
Big Data Statistics 2023: How Much Data is in The World?, by Ogi Djuraskovic, FirstSiteGuide.com, December 26, 2022.
ChatGPT explained: everything you need to know about the AI chatbot, by Mark Wilson, TechRadar.com, March 15, 2023.
Don’t overlook independence in Responsible AI, by Editorial Team, insideBIGDATA.com, March 17, 2023.
How Fast Is Technology Advancing in 2023?, by Jacquelyn Bulao, techjury.com, February 27, 2023.
How Fast Is Technology Growing – Can Moore’s Law Still Explain the Progress?, by Nick Gavon, WebTribunal.com, March 4, 2023.
How Fast Is Technology Growing Statistics [Updated 2023], by Darina L., March 7, 2023.
How Much Data Is Created Every Day? +27 Staggering Stats, by Branka Vuleto, SeedScientific.com, October 28, 2021.
OpenAI Charter: Our Charter describes the principles we use to execute on OpenAI’s mission, OpenAI.com, last accessed March 19, 2023.
Pros and cons of AI: Is Artificial Intelligence suitable for you?, by Eray Eliacik, Dataconomy.com, August 19, 2022.
Sparks of Artificial General Intelligence: Early experiments with GPT-4, by Microsoft Research, circa March 24, 2023.
Sparks of Consciousness: The Era Of Artificial General Intelligence, by Scott Porter, LinkedIn.com, March 27, 2023.
The Exponential Growth of Data, by Editorial Team, insideBIGDATA.com, February 16, 2017.
Top 7 Different Uses of Artificial Intelligence, by Java Assignment Help, JavaAssignmentHelp.com, December 15, 2021.
Transformer Neural Networks, DeepAI.com, March 20, 2022.
Unintended Consequences, by Rob Norton, UPenn.edu, date not cited.
What’s the real story behind the explosive growth of data?, by Ulrike Hack, redgate.com, September 8, 2021.
Why Elon Musk and Steve Wozniak have said AI can ‘pose profound risks to society and humanity,’ by Explained Desk, The Indian Express, indianexpress.com, April 3, 2023.
Xi, Putin declare intent to rule the world of AI, infosec, by Simon Sharwood, TheRegister.com, March 22, 2023.