Good and Bad Actors in AI: Who Wins?
The allure and potential of artificial intelligence (AI) have captivated the imagination of many, from tech enthusiasts and futurists to businesses and governments. But like any powerful tool, AI can be wielded for positive and nefarious purposes alike. This duality raises the question: Who has the upper hand in the ongoing battle between good and bad actors in the AI landscape?
We have all heard the warnings about job replacement and smart malware, but there is also real potential for good in fields like medicine and mental health. As with any tool, how we use it determines which side of the fence the outcome falls on, and that takes monitoring, regulation, and voluntary compliance. We, as humans, are very inconsistent in all three. Here are some of my thoughts on the subject. This is not an exhaustive list, and I am not trying to predict the future; I simply want to stimulate discussion on the topic.
The Good Actors
Promoting Social Progress: Individuals and organizations leverage AI for societal progress, creating solutions for global challenges like climate change, disease diagnosis, and food production. For instance, AI-driven models help predict extreme weather events, and machine learning algorithms can aid in the early detection of cancerous tumors.
All of this is good, but ensuring these purposes remain legitimate and focused on solving problems for everyone requires that we all stay active participants in maintaining a constant vigil.
Generating Ethical Considerations: Many leading tech companies and research institutes have formed ethical AI teams or advisory boards. They tell us they aim to ensure that AI development respects human rights, is fair and transparent, and does not perpetuate biases. While this is encouraging, it is also critical to be skeptical of such claims: without buy-in from executive leadership and coordinated stakeholder input, an ethics board is only a marketing tool with no power to act or control. In other words, a façade.
Educational Initiatives: Recognizing the importance of AI literacy, various educational institutions and organizations promote AI awareness and understanding, offering courses and resources to the public. Education is never bad unless dictated and manipulated to provide a biased approach to a subject.
I have often suggested that people remember that gurus, educators, teachers, leaders, elected officials, and religious figures are all human, complete with flaws. Never accept what someone tells you at face value, and always do your due diligence. The moment you allow a person or organization to hold power over your thinking, actions, and beliefs, you are in a cult. Expect unity, not uniformity.
The Bad Actors
Misinformation and Deepfakes: With the rise of deep learning, creating realistic-looking but entirely fake content has become more accessible. Deepfakes can manipulate video, audio, or images to produce misleading or harmful content. While there are some legitimate uses for avatars as representatives on video, deepfake videos are rising quickly: in 2018, there were roughly 15,000 deepfake videos on the internet; by 2020, there were about 1 million. Unfortunately, it is up to each of us to watch for these. If something seems too good to be true, it is likely a deepfake.
Surveillance and Privacy Invasion: Authoritarian regimes can employ AI-enhanced surveillance to monitor, predict, and control their populations, leading to human rights violations. Unfortunately, this is not limited to dictators; some companies are guilty of targeting unwanted participants through intentionally biased algorithms. And while many will say the government should act to prevent this, our own government is looking for ways to use AI to surveil us under the guise of protection.
While some surveillance may prove necessary, we must advocate for a fair and transparent approach. If our legislators are unwilling to promise this, we must elect those who will.
Malicious AI: AI can be programmed or “taught” to behave maliciously. This includes autonomous weapons and AI-driven cyberattacks that can adapt and evolve to exploit vulnerabilities. This is the giant monster in the room because such systems are generally built and operated in secret. Finding and disabling them will require substantial technical, legal, and financial resources.
The Balance of Power
The situation will continue to be fluid, but there are several determining factors:
Job Impacts: Much has been written in the last five years about AI’s impact on jobs, and yes, there will be an impact. In March, for example, an article from the BBC speculated that 300 million jobs may be lost to AI, while also noting that new jobs would be created. Many of us don’t realize that 60% of the jobs in today’s workforce did not exist in 1940. Change is inevitable.
Technological Advancements: While technology inherently has no moral stance, the pace of advancement can give an edge to one side. Currently, defenses against malicious AI (such as deepfake detection tools) often lag behind the creation of such AI. This means we must cultivate champions of doing the right thing in schools and colleges so that innovation in detection and protection can keep improving. That requires both a cultural and an intellectual mindset change.
Regulations and Policies: Governments and international bodies can tip the balance with regulation. Robust data protection laws, for example, can deter bad actors. As usual, this will run the gamut of privacy and oversight concerns, especially with corporate lobbies and outside interests concerned only with their own agendas.
Awareness and Vigilance: As the public becomes more aware of AI’s capabilities and potential threats, it can demand more ethical applications and better protections. The key is that we as individuals must be willing to understand and live within the ethical constraints that protection requires. Living a moral life is challenging today, so we must wake up to the issues and make sure we are part of the solution.
The Future Outlook
It’s a dynamic tug-of-war between the two sides. While it’s tempting to imagine a clear winner, the reality is more complex. AI is a tool whose impact depends on the hands that wield it.
Collaboration between nations, industries, and the global community is essential. A combined approach — fostering innovation, ensuring strict regulations, and promoting public awareness and participation — will likely be the most effective strategy to ensure AI’s “good actors” come out on top.
In conclusion, while the challenges are real and the stakes are high, there’s a reason for optimism. With collective effort and an unwavering commitment to ethics and humanity, we can shape an AI-driven future that benefits all. The choice is ours.