Artificial Intelligence – How worried should we be?

Artificial Intelligence – How worried should we be? Samuel Varkey, Trinity Business School winner of the CoBS 2022 student CSR article competition, highlights three core issues that shape people’s misconceptions about AI – deepfake, prejudice and self-driving cars – and which business and society must address.


Artificial Intelligence, or AI, is a term heard a lot in today’s world. Many do not know exactly what it is, but everyone knows it is having an impact on our daily lives. In fact, AI is all around us: the shows Netflix recommends to you and voice assistants like Siri and Alexa are all examples of Artificial Intelligence, and Tesla’s self-driving car looks set to be the next AI revolution to dominate the coming years (McFarland, 2021). With the rise of AI, however, several dangers and negative social impacts can arise. In this article, we will look at the most pressing issues that can arise from AI and how we can solve them.

Deep-fake it till we make it?

We may all have seen the video of world leaders singing a song together as a joke (Bananamare, 2021). Or we may have come across the video of former US President Barack Obama ‘insulting’ the then current US President Donald Trump (BuzzFeedVideo, 2018). What do these two videos have in common? The answer: combined, they have approximately 20 million views. But there is another commonality between them, one that is probably among the greatest dangers humankind may face: deepfake technology.

Deepfake technology replaces one person’s face or voice with someone else’s in order to create a fake scenario (Sample, 2020). In the two examples above, none of the people are actually doing what the videos show: Barack Obama did not publicly ‘insult’ his successor, and the world’s leaders certainly did not come together to sing a song. While the world initially found this funny, observers were quick to point out the dangers of such a technology.

According to a Wall Street Journal article, a German energy firm paid £200,000 into a Hungarian bank account after being called by a scammer who used deepfake technology to mimic the voice of the company’s CEO (Stupp, 2019). In today’s digital world, dominated by media platforms like Instagram, TikTok and YouTube, deepfake technology sets a dangerous precedent. What makes it even more dangerous is that these deepfake videos are extremely easy to make.

According to a Guardian article, it takes only a few easy steps and readily available software to create a deepfake (Sample, 2020). Deepfake is a dangerous tool that can wreak havoc upon society. When Elon Musk smoked a joint on a live show, Tesla’s stock price dropped sharply (Neate & Wong, 2018). In a world where such small actions have severe consequences, one can only imagine the threat that deepfake technology brings.

So, how exactly can we counter deepfakes? There are multiple solutions, one of which is the use of technology itself. The US Defence Advanced Research Projects Agency (DARPA)’s Media Forensics program awarded non-profit research group SRI International three contracts for research into the best ways to automatically detect deepfakes (Bocetta, 2019). Researchers at the University at Albany also discovered that analysing individuals’ blinking patterns can help identify whether a video is a deepfake (Li et al., 2018). However, according to Siwei Lyu, one of those researchers, media literacy is the most important step in combatting this problem (Cartwright, 2020).
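As a hedged illustration of the blinking-pattern idea, the toy sketch below assumes a separate eye-state classifier has already labelled each video frame as eyes-open or eyes-closed; the frame rate, thresholds and “normal” blink-rate range are illustrative assumptions, not the actual method of Li et al.:

```python
# Toy sketch: flag a clip whose blink rate falls outside a plausible
# human range (healthy adults blink roughly 15-20 times per minute).
# Assumes an upstream eye-state classifier labelled every frame.

def count_blinks(eye_open: list[bool]) -> int:
    """Count open-to-closed transitions across consecutive frames."""
    return sum(1 for prev, cur in zip(eye_open, eye_open[1:]) if prev and not cur)

def looks_like_deepfake(eye_open: list[bool], fps: float = 30.0,
                        min_rate: float = 5.0, max_rate: float = 40.0) -> bool:
    """Flag clips whose blinks-per-minute rate is implausibly low or high."""
    minutes = len(eye_open) / fps / 60.0
    rate = count_blinks(eye_open) / minutes
    return not (min_rate <= rate <= max_rate)

# A 60-second clip with no blinks at all is suspicious:
no_blinks = [True] * (30 * 60)
print(looks_like_deepfake(no_blinks))  # True
```

Early deepfake generators were trained mostly on photos of people with their eyes open, which is why unnaturally rare blinking was a usable signal; newer generators have largely closed this gap, hence the need for broader detection research.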

Many people are unaware of deepfake technology and can be easily fooled. Efforts must therefore be made to raise awareness and encourage people to be more cautious about the videos they view every day. A further technological safeguard, useful when a deepfake slips past detection technology, is reverse searching (Engler, 2019). With reverse image search, people can upload an image to trace its exact source and thus verify the media they consume. A gap remains in this space, however: reverse video search is still not possible (Engler, 2019). Because most deepfakes are videos, it is essential to build technologies that can reverse search video as well.
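Reverse image search relies on compact image fingerprints. The toy sketch below illustrates one simple scheme, the “average hash”, in which near-identical images produce hashes a small Hamming distance apart; real services use far more sophisticated fingerprints, and the pixel values here are invented for illustration:

```python
# Toy sketch of a perceptual "average hash": reduce an image to a short
# bit string so that a re-encoded copy hashes close to the original,
# while an unrelated image hashes far away.

def average_hash(pixels: list[int]) -> list[int]:
    """1 for each pixel brighter than the image's mean brightness, else 0."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1: list[int], h2: list[int]) -> int:
    """Number of bit positions where the two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

original     = [10, 200, 30, 220, 15, 210, 25, 205]   # toy 8-pixel "image"
recompressed = [12, 198, 28, 225, 14, 213, 22, 204]   # same image, re-encoded
unrelated    = [200, 10, 220, 30, 210, 15, 205, 25]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
print(hamming(average_hash(original), average_hash(unrelated)))     # 8
```

A reverse-search index simply stores such fingerprints for known images and returns entries within a small distance of the query, which is why the approach extends poorly to video: a clip is thousands of frames, each needing its own fingerprint.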

AI and Prejudice


Racism is unfortunately still one of the problems that humankind is yet to solve. But what if even the AI systems built around us projected racism? This nightmare scenario is indeed a reality. At the end of the day, AI algorithms are built by humans, so AI learns the biases and prejudices of its creators. One example is PredPol, predictive-policing software used by the LAPD (Los Angeles Police Department) to forecast the areas where crime is most likely to occur. The software predicted that crimes were most likely to occur in areas where the majority of residents were non-white. Another AI system that showed bias and prejudice is COMPAS, an algorithm used in the US to predict the likelihood of a criminal reoffending. It predicted a higher likelihood of reoffending for African Americans and a lower one for white men (Larson et al., 2016).

Certain AI systems have also proven misogynistic. Gender recognition systems, for example, showed an accuracy of 99% for white-skinned men, whereas accuracy dropped to 35% for dark-skinned women (Revell, 2018). Further evidence appears in Google Image searches for the term ‘CEO’: only 11% of the results were pictures of women, whereas 27% of CEOs in the US are female (Cossins, 2018). Yet another study showed that men were more likely than women to be shown higher-paying jobs (Datta et al., 2014).

These examples prove that fighting human bias and prejudice within AI systems is important; otherwise, it can have serious consequences. One solution is to use larger and more diverse datasets when training the algorithm (Harini, 2018), enabling the model to learn from a broader sample and thus be less biased. Another is to ensure gender and racial diversity within the teams that develop these algorithms. As mentioned earlier, the algorithms show prejudice because of the humans creating them; to tackle the problem at its source, the development teams themselves should be diverse. This may not fully eliminate the problem, but it is a step towards mitigating such bias. Finally, algorithms should be tested on diverse datasets to ensure no unwanted prejudice creeps in. In the gender recognition example above, had the system been tested on people from multiple ethnicities, rather than just on white-skinned men, the errors would have been identified and the system would not have gone into production.
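The per-group testing described above can be sketched as a simple audit that computes a model’s accuracy separately for each demographic subgroup, so that a large gap (like the 99% versus 35% gender-recognition example) is caught before deployment. The data, labels and group names below are purely illustrative:

```python
# Minimal fairness-audit sketch: accuracy broken down by subgroup.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += (truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative test set for a gender classifier:
test_set = [
    ("light-skinned men", "male", "male"),
    ("light-skinned men", "male", "male"),
    ("dark-skinned women", "female", "male"),    # misclassified
    ("dark-skinned women", "female", "female"),
]
print(accuracy_by_group(test_set))
# {'light-skinned men': 1.0, 'dark-skinned women': 0.5}
```

An audit like this only works if the test set itself contains enough examples of every subgroup, which loops back to the first solution: diverse data.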

Self-Driven or Self-Destruction?


As mentioned earlier, Tesla’s self-driving cars will most probably revolutionize the world in the coming years. However, driverless cars bring a host of new problems to solve. One CNN correspondent tried Tesla’s self-driving feature and the car nearly crashed into a construction site, tried to turn into a stopped truck and attempted to drive down the wrong side of the road (McFarland, 2021). The vehicle also hesitated a great deal, especially in heavy traffic.

Another unsolved issue with self-driving cars is their decision-making. You may be familiar with the hypothetical dilemma posed in ethics lectures, where students must choose between killing one person or twenty. The software of a self-driving car faces the same dilemma, and its answer can have severe consequences. One study shows that self-driving cars are trained to protect the driver rather than reduce the overall casualties in a crash (Keenan, 2017). One way to solve this is to make self-driving cars ‘utilitarian’, so that the car does not prioritize the life of the driver but tries to reduce overall harm. Another potential solution is to let the human take control whenever necessary: if the road ahead is under construction, for example, the driver should be able to take over and navigate safely (O’Callaghan, 2020). The car should also be trained in diverse environments, especially unpredictable situations such as heavy traffic, so that it learns how to navigate such scenarios.
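A minimal sketch of the ‘utilitarian’ rule discussed above, assuming (purely for illustration) that the car can estimate expected casualties for each available maneuver; the scenario names and numbers below are invented, not drawn from any real system:

```python
# Toy utilitarian crash-decision rule: among the available maneuvers,
# pick the one with the lowest expected total casualties, counting the
# occupant the same as everyone else.

def utilitarian_choice(maneuvers: dict[str, dict[str, float]]) -> str:
    """maneuvers maps a maneuver name to expected casualties by party."""
    return min(maneuvers, key=lambda m: sum(maneuvers[m].values()))

scenario = {
    "swerve into barrier": {"occupant": 0.9, "pedestrians": 0.0},  # total 0.9
    "brake straight":      {"occupant": 0.1, "pedestrians": 2.0},  # total 2.1
}
print(utilitarian_choice(scenario))  # swerve into barrier
```

A driver-protecting policy would instead minimize only the occupant term and choose to brake straight; the gap between those two objectives is exactly the ethical dilemma the study highlights.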

One important way to combat the negatives of AI as a whole is to ensure proper ethics and standards. One approach is to collaborate with government to create an Office of AI, similar to what is happening in the UAE (Minevich, 2020). The purpose of such an office is to propose policies that create an AI-friendly and safer ecosystem (Artificial Intelligence Office, 2021).

AI has many revolutionary applications that can improve our lives significantly. But left unmonitored, it can cause unprecedented issues that will taint society for a long time. The most pressing of these are deepfake technology, prejudiced AI and self-driving cars. We have proposed solutions for these problems, but as we all know, implementing solutions is a challenge in itself. A common misconception is that AI will take over the world; yet if it ever does, the root cause will be humankind. Our goal must be to build AI that supports humankind and the development of our planet and civilization, rather than something that grows to destroy its own creator.

A full list of references and sources used in this article can be found here.



The Council on Business & Society (The CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business and society, including sustainability, diversity, ethical leadership and the role responsible business has to play in contributing to the common good.

Member schools are all “Triple Crown” accredited AACSB, EQUIS and AMBA and leaders in their respective countries.

The Council on Business & Society member schools:
- Asia-Pacific: Keio Business School, Japan; School of Management Fudan University, China; ESSEC Business School Asia-Pacific, Singapore.
- Europe: ESSEC Business School, France; IE Business School, Spain; Trinity Business School, Ireland; Warwick Business School, United Kingdom.
- Africa: Stellenbosch Business School, South Africa; ESSEC Africa, Morocco. 
- South America: FGV-EAESP, Brazil.
