The other facet of AI

Eti Sharma

The growth of AI has been so significant over the last few years that it is almost impossible not to be aware of it. But, as they say, good and evil are two sides of the same coin, so it has become important to discuss the other facet of AI, the evil side of that coin. Read on to understand how excessive dependence on AI can be harmful.

By Pratyus Ghosal

Leaders across different sectors have raised concerns about both the increasing power of AI and its role in society. Even as millions of individuals begin to use ChatGPT and other generative AI systems, thousands of CEOs, technologists, researchers, professors, and others signed an open letter in early 2023 advocating for a pause in AI deployments. The March letter begins by criticizing AI laboratories for their involvement in “an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” as well as for the “profound risks to society and humanity” that AI poses.

Little to no room for creativity

AI has been entrusted with producing everything from computer code to visual art, but it lacks original thought. Quill, a bot that writes Forbes earnings reports, is a good example: the information in those reports is limited to what the bot has already been given. A bot can only write about what it knows, yet even so it is impressive that it can produce an article on its own. It cannot, however, think outside the box, no pun intended. “It’s not going to replace critical thinking; it’s just going to be another arrow in our quiver,” said Chaim Mazal, CSO at Gigamon, a maker of cybersecurity technology. Products will become monotonous and sound the same if humans depend too much on AI. That predictability also means one robot will always be able to sniff out another robot.

The marketing sector is one example: while AI can automate various marketing tasks and generate data-driven insights, it struggles to replicate the uniquely human elements of marketing, such as emotional connection, intuition, and creative thinking.

Skill loss in humans

While most experts point to AI’s ability to relieve humans of tedious and repetitive tasks as a benefit, some worry that this particular advantage comes with a drawback: a loss of human skills. People frequently learn and master simple repetitive tasks first, which helps them understand how those tasks fit into the larger pieces of work they must accomplish to complete an objective. This helps people advance their knowledge as well as their personal and professional crafts. However, some have expressed concerns that as AI takes over entry-level jobs, people may lose the ability to perform those tasks at all. That could hinder their capacity to become true experts in a trade or profession and deprive them of the skills needed to step in when the automation falls short.


Lack of Ethics

Morality and ethics are significant aspects of human nature that are difficult to incorporate into an AI. Many people fear that as AI develops rapidly, it will eventually become uncontrollable and wipe out humanity; this hypothetical moment is referred to as the AI singularity. The privacy of customer data is one of the most frequently mentioned ethical issues. The question, then, is: given AI’s rapid development, how can we safeguard consumer privacy? Because we depend on the functionality of these apps, most of us accept lengthy terms of service without fully comprehending the implications. All we can do is hope that the large tech companies are not abusing our trust in their automated systems.

The healthcare sector is one example: it relies heavily on patient data, including sensitive medical information, so this data must be collected, stored, and used in a secure and privacy-conscious manner. Protecting patient privacy, maintaining data confidentiality, and preventing unauthorized access to personal health information are critical considerations.

The education sector is another: AI systems collect and analyse a significant amount of data on students, including their performance, behaviour, and personal information, and this data must be handled securely, with appropriate privacy safeguards in place.

Missing emotions

Robotic commerce would be perfect if people made purchasing decisions based solely on facts. In reality, emotions have a big influence on persuading a person or a company to part with their hard-earned cash for a worthy good or service. The Brookings Institution estimated in 2023 that “four out of five American workers in the private sector are employed in the service economy, doing everything from delivering care in hospitals to serving food from ports to store shelves and into consumers’ hands.” This highlights the fact that the service sector dominates the U.S. economy. While AI can automate certain jobs, many of those roles still require empathy and human touchpoints. AI can be trained to identify human emotions such as frustration, but a machine is incapable of feeling or empathizing. In many contexts, including the workplace, humans have a significant advantage over emotionless AI systems precisely because they can feel emotion.

Bias and lack of transparency

Design flaws, or faulty and imbalanced data fed into algorithms, can lead to biased software and technical artifacts. In this way AI reinforces the age, gender, and race biases that already exist in society, deepening social and economic inequality. You may have read a few years ago about Amazon’s experimental hiring tool, which used artificial intelligence to rank candidates from one to five stars. Because Amazon’s models were trained to screen applicants by looking for patterns in resumes submitted over a ten-year period, most of which came from men, the tool effectively favored male candidates and penalized women. Furthermore, a study discovered that some facial recognition programs misclassify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. The vendors claimed the programs were accurate, but the data set used to evaluate performance was more than 77 percent male and more than 83 percent white.
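To see how skewed data can hide this kind of disparity, here is a minimal sketch with made-up numbers (not the actual study data, and not any vendor's real evaluation): on an imbalanced test set, overall accuracy looks impressive even though one group's error rate is more than a third.

```python
# Hypothetical illustration: overall accuracy on an imbalanced evaluation set
# can look excellent while one group is badly served. Numbers are invented to
# mirror the pattern described above, not taken from the study.
from collections import Counter

# Toy evaluation set: (group, correctly_classified)
records = (
    [("light_skinned_men", True)] * 830 + [("light_skinned_men", False)] * 5 +
    [("dark_skinned_women", True)] * 110 + [("dark_skinned_women", False)] * 55
)

overall_accuracy = sum(ok for _, ok in records) / len(records)
print(f"Overall accuracy: {overall_accuracy:.1%}")  # 94.0% -- looks impressive

# Per-group error rates tell a very different story.
counts = Counter(records)
for group in ("light_skinned_men", "dark_skinned_women"):
    errors = counts[(group, False)]
    size = counts[(group, True)] + counts[(group, False)]
    print(f"{group}: error rate {errors / size:.1%} on {size} samples")
# light_skinned_men:  ~0.6% error
# dark_skinned_women: ~33.3% error
```

Reporting a single headline accuracy figure, computed on a data set dominated by one group, is exactly how such disparities stay invisible.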

Algorithms have been used by US courts to assess a defendant’s likelihood of committing new crimes and to guide decisions regarding bail, sentencing, and parole. In the case of defendant Eric Loomis, for instance, the trial judge imposed a lengthy sentence partly because of the “high risk” score Loomis obtained from responding to a series of questions that were then fed into Compas, a black-box risk assessment tool. Neither the judge nor anyone else knows the methodology Compas uses to determine risk to society. For all we know, Compas may base its decisions on factors we would consider unfair; it might be sexist or racist without our knowledge.

Violation of privacy

Use of personal data

AI can be used to create remarkably accurate profiles of individuals. Because algorithms are designed to identify patterns, they can forecast a user’s likely future location from historical location data, and the prediction becomes even more accurate when friends’ and social contacts’ location data is added. This drawback of artificial intelligence is often downplayed. You might believe that, since you have nothing to hide, it makes no difference who knows where you go. But even if you do nothing wrong or illegal, you might not want the public to have access to your personal data. Would you really feel at ease if someone made all of your location information public, forecasts included? Almost certainly not. Knowledge is power, and the knowledge we give up is their power over us.
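As a rough illustration of the pattern-matching idea, the sketch below fits a first-order Markov model to a hypothetical location history and predicts the most likely next place from the current one; real profiling systems are far more sophisticated, but the principle is the same.

```python
# Minimal sketch: predict the most likely next location from a (hypothetical)
# history of coarse place labels, using simple transition counts.
from collections import defaultdict, Counter

history = ["home", "cafe", "office", "gym", "home",
           "cafe", "office", "office", "home",
           "cafe", "office", "gym", "home"]

# Count how often each place is followed by each other place.
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(place: str) -> str:
    """Return the place most frequently observed right after `place`."""
    if place not in transitions:
        return "unknown"
    return transitions[place].most_common(1)[0][0]

print(predict_next("cafe"))    # -> "office": the routine the model has picked up
print(predict_next("office"))  # -> "gym": the most common follow-up in this history
```

Even this toy model recovers a daily routine from a handful of data points; with months of fine-grained GPS traces, plus friends' data, the forecasts described above become unsettlingly precise.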

Threat to environment

Due to its high energy consumption, AI has the potential to seriously harm the environment. According to a 2019 study, the energy requirements of the hardware for a specific kind of artificial intelligence (deep learning in natural language processing) result in a significant carbon footprint. According to the researchers, training a single large AI model can result in roughly 300,000 kg of CO2 emissions, about the same as 125 round-trip flights from New York City to Beijing, or five times the lifetime emissions of an average American car. And of course, model training is not the only source of emissions.
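For a sense of scale, the comparison can be sanity-checked with simple arithmetic. The sketch below only rearranges the article's own figures, so the implied per-flight and per-car numbers are illustrative rather than independently sourced.

```python
# Back-of-the-envelope check of the comparison above, using only the figures
# quoted in this article; per-flight and per-car values are implied, not sourced.
TRAINING_EMISSIONS_KG = 300_000   # CO2 from training one large model (per the 2019 study)
ROUND_TRIP_FLIGHTS = 125          # equivalent NYC <-> Beijing round trips
CAR_LIFETIME_MULTIPLE = 5         # "five times an average American car's lifetime"

implied_kg_per_flight = TRAINING_EMISSIONS_KG / ROUND_TRIP_FLIGHTS
implied_kg_per_car_lifetime = TRAINING_EMISSIONS_KG / CAR_LIFETIME_MULTIPLE

print(f"Implied CO2 per round-trip flight: {implied_kg_per_flight:,.0f} kg")        # 2,400 kg
print(f"Implied CO2 per car lifetime:      {implied_kg_per_car_lifetime:,.0f} kg")  # 60,000 kg
```

Both implied values fall in a plausible range, which is what makes the headline comparison so striking: a single training run can emit as much CO2 as well over a hundred long-haul round trips.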

Disinformation

A rise in disinformation is a disadvantage of artificial intelligence that we are already witnessing. The activist group Extinction Rebellion produced a deepfake of Belgian Prime Minister Sophie Wilmès in 2020, using AI to change the words in an authentic video address she had given. The result: disinformation. GPT-3 has likewise been used to generate tweets such as “They can’t talk about temperature increases because they’re no longer happening,” aiming to create scepticism about climate change. Sadly, deepfakes will increasingly be used in targeted disinformation campaigns, endangering our democratic systems and dividing society as dependence on AI deepens.

Domination by Big Tech companies

Google has built a massive monopoly in AI technology since 2007 by purchasing at least thirty AI startups that were working on projects ranging from image recognition to more realistic-sounding computer voices. Out of an estimated global total of $39 billion, Google, Apple, Facebook, Microsoft, and Amazon collectively spent up to $30 billion on AI-related research, development, and acquisitions in 2016. It’s risky when businesses snatch up AI startups around the world, gain near-monopolies on user data, and start serving as everyone else’s go-to source for AI. A concentration of power of this kind puts democratically elected governments at risk of being dictated to by massive tech corporations.

Unemployment


In advanced nations such as Japan, manufacturing businesses often use robots in place of human workers. Every company wants to replace its least skilled workers with AI-driven machines that perform comparable tasks more efficiently. AI therefore hits lower-paying jobs hardest and reduces the number of jobs available to those with less education. According to a Goldman Sachs Research report from April 2023, automation could affect the equivalent of 300 million full-time jobs. Researchers and economists have forecast that AI will also create new kinds of work and move some workers to higher-value jobs, and some experts expect significant changes by 2030 in the occupations people hold: an estimated 75 million to 375 million workers, or 3 to 14 percent of the global workforce, will need to change careers and acquire new skills. Mass unemployment and income polarization may nonetheless increase as a result of excessive dependence on AI.

Final Thoughts

Making sure the “rise of the robots” doesn’t get out of control is crucial. Some even claim that if artificial intelligence falls into the wrong hands, it has the potential to wipe out human civilization. The resulting economic instability can pose a serious threat to our democracies by damaging public confidence in political institutions; voters may become more inclined to sympathize with populist parties, and AI’s disruptive effects may foster resentment of representative liberal democracies.

Ultimately, today’s artificial intelligence is not artificial general intelligence, the kind you see in the Terminator movies, a robot that can do everything a human can do. Most AI is specialized to a single task. What we do with AI, and to what purpose, is ours to determine.


Get in touch with Telenity to explore customizable Employee tracking, Fleet tracking, and other location-based APIs for your business at [email protected]

