AI and Ethics: How to break out of the matrix

Nicolas Julien, MiM-ENSAE student at ESSEC Business School and runner-up in the 2022 student CSR article competition, explores the fears and preconceptions surrounding AI and calls for stakeholder cooperation to tackle the possible risks it may bring.


Student Voice at the Council on Business & Society

Are you afraid of Artificial Intelligence? If you are sceptical and concerned about the rapid advances of this mysterious and unfamiliar technology, your suspicions are warranted. But what exactly do we have to fear? Usually, we are afraid that it will transform into an uncontrollable force, cause mass unemployment, or fall into the wrong hands.

In 1968, Stanley Kubrick already showed the dangers of artificial intelligence with the character HAL 9000 in 2001: A Space Odyssey. Since then, questions and concerns about this technology have continued to grow with its large-scale deployment. But what are the real ins and outs of artificial intelligence today? What is the potential negative social impact of AI, and what are the possible solutions accordingly?

AI is everywhere

What if AI is already governing us? It is likely that you strongly underestimate how pervasive algorithms are in your life and in society. For example, how often have you interacted with an AI today? Don’t forget all the Google searches, the articles and videos suggested by your social network newsfeed, the route recommended by a travel app, the price of your Uber ride, and so on. We interact with AIs daily without even realising it.

More importantly, we must consider the bigger picture: banks, financial markets, medical research, the automobile industry, and corporate marketing departments are all spheres where AI operates at large scale as one of the main working tools.

In short, as Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence, says, “AI is everywhere. It’s not this huge, scary thing in the future. AI is here with us”. The democratisation of such a powerful tool then inevitably brings about negative social impacts that are already observable today.

AI: A threat to humanity?

To understand the genuine negative social impacts of AI, one must first take a step back from the fantasies of an AI that turns against humans. AI is merely a set of statistical tools and matrix calculations: very powerful, but nowhere near the complexity of the human brain.

Today we are only at the stage of “weak” AI, specialised in solving a single specific task and requiring a data scientist’s supervision. Consciousness, self-awareness and free will in machines are all far beyond the capabilities of science today [1]. Therefore, the idea that AI will turn directly against its creators like Frankenstein’s monster will remain science fiction for a long time to come.

AI: A threat to our jobs?

Another irrational fear is that machines will replace employees, leading to mass unemployment. Three-quarters of US adults believe AI will “eliminate more jobs than it creates”, according to a Gallup poll [2]. However, in reality, AI will probably create more jobs than it destroys [3]. Interestingly enough, the same Gallup poll revealed that fewer than a quarter of people worried that automation would affect them personally, which illustrates how irrational we are when estimating the impact of AI.

In terms of numbers, the consulting firm PwC estimates that in the UK, 7.2 million jobs will be created for 7 million destroyed [4]. Like any technological transition, AI will inevitably lead to a profound transformation of the labour market, which is more of a challenge to overcome than a genuine negative social impact.

AI: A weapon of mass nuisance?

In reality, the main social dangers of AI seem to come from its use by humans. As with any powerful tool, artificial intelligence can be used for the wrong purposes and thus have a negative impact. Perhaps the most telling example is that of China, whose embrace of AI enables it to assert its totalitarian vision and objective of mass surveillance. The widespread monitoring of the population is made possible by visual detection algorithms. No wonder Premier Li Keqiang stated that “the development of AI is a priority for the Chinese state”.

At the individual level, there is also no shortage of misuse. Imagine waking up one morning and discovering a video of yourself praising racism circulating on the net. Although you never shot the video or uttered such words, it looks very realistic because it was generated by an AI. Dystopian, isn’t it?

Such an application already exists, under the name FakeApp. More generally, deepfakes, which simulate false, ultra-realistic videos, can be extremely dangerous because they serve inappropriate purposes such as spreading misinformation, promoting an ideology or committing cyber-harassment.


The real problem originates in the weaknesses and flaws of AI

Beyond misuse by humans, AI itself has flaws and vulnerabilities which, if left unchecked, can generate significant social risks. An AI always learns from a database, and it is very sensitive to the data it is fed: when the data is biased, the AI will reproduce the bias. It is also very common for an AI to rely too heavily on the specific patterns observed in the database provided to it, and so fail to generalise to new cases. This problem is called “overfitting” and is closely monitored by data scientists to ensure that each algorithm generalises correctly, a property called “robustness”. This explains why data is considered the “Achilles’ heel of AI” [5].
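To make “overfitting” concrete, here is a toy sketch in pure Python with invented data: a model that simply memorises its training points scores perfectly on the data it was fed, yet predicts new points worse than a plain fitted line.

```python
# Toy illustration of overfitting: a 1-nearest-neighbour "memoriser"
# achieves zero error on its training data but generalises worse than
# an ordinary least-squares line. All data points are made up.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def nn_predict(x, xs, ys):
    """1-nearest-neighbour: return the label of the closest training point."""
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
    return ys[i]

def mse(preds, truths):
    return sum((p - t) ** 2 for p, t in zip(preds, truths)) / len(preds)

# Noisy training data scattered around the line y = x, plus clean test points.
train_x, train_y = [0, 1, 2, 3, 4], [0.2, 0.8, 2.3, 2.9, 4.1]
test_x, test_y = [0.5, 1.5, 2.5, 3.5], [0.5, 1.5, 2.5, 3.5]

slope, intercept = fit_line(train_x, train_y)

nn_train = mse([nn_predict(x, train_x, train_y) for x in train_x], train_y)
nn_test = mse([nn_predict(x, train_x, train_y) for x in test_x], test_y)
line_test = mse([slope * x + intercept for x in test_x], test_y)

print(nn_train)             # 0.0 — the memoriser is "perfect" on seen data
print(nn_test > line_test)  # True — but worse than the line on unseen data
```

The memoriser is the overfitted model: flawless on its own database, fragile everywhere else, which is exactly what robustness checks are designed to catch.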

AI reproduces human errors

The most telling examples are the cases of racist AI and discriminatory algorithms. In 2016, ProPublica researchers showed that an algorithm assessing the recidivism risk of criminals in the United States produced different results for people with exactly the same profiles but different skin colours [6].

The problem is that the AI performs “better” with this built-in racism, because socio-economic parameters such as poverty, which disproportionately affects Black people in the United States, correlate with recidivism. An AI has no ethics; it only strives for performance, without differentiating between correlation and causality. A human decision based on AI recommendations, such as a judge’s sentence, could therefore result in detrimental discrimination.

Today, many firms use such algorithms to make decisions: Amazon, for example, was strongly criticised in 2018 because its recruitment algorithm suffered from gender bias [7]. Another example is Microsoft’s artificial intelligence ‘Tay’, which taught itself to speak from Twitter data and made racist comments in less than 24 hours [8]. AIs thus present the significant risk of amplifying social divides and generating discrimination. The AI era is pushing us more than ever to rethink our human values and ethics, as we cannot outsource our responsibilities to machines.

Friedman’s shareholders 2.0

Nowadays, the major AI players, such as GAFA, give little consideration to the social impact their algorithms can have. Facebook, for example, deliberately serves content that is pleasing or shocking to users, even when it is misinformation, leading to large-scale propagation of fake news. These companies seek to maximise performance, user retention and engagement by any means. Moreover, their algorithms create “filter bubbles” [9]: they isolate you from information and perspectives you haven’t already expressed an interest in, which in turn threatens critical thinking.
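The filter-bubble mechanism can be sketched in a few lines. In this hypothetical simulation (topics and click counts invented), a recommender that greedily maximises past engagement locks onto a single topic almost immediately:

```python
# Toy filter-bubble simulation: the recommender always serves the topic
# with the highest past engagement; the simulated user clicks whatever
# is shown, which the recommender then reinforces, so recommendations
# collapse onto one topic. Topics and counts are invented.
clicks = {"politics": 1, "science": 1, "sport": 1}  # start with a neutral history

def recommend():
    # Greedy engagement maximisation: the highest click count wins.
    return max(clicks, key=clicks.get)

shown = []
for _ in range(10):
    topic = recommend()
    shown.append(topic)
    clicks[topic] += 1  # the user's click feeds back into the model

print(shown)  # all ten recommendations end up being the same topic
```

The first tie-break becomes a self-reinforcing loop: once one topic edges ahead, nothing else is ever shown — the bubble in miniature.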

In 2020 and 2021, Google also exemplified this disregard for negative social impacts when it ousted Timnit Gebru and Margaret Mitchell, the two co-leads of its ethical AI team, within the space of a few months [10]. Google, the company that designs the most sophisticated AI and controls the most information flows, literally dismantled its own ethics team.

All this shows that the AI giants can be considered shareholders 2.0, in the sense in which Friedman conventionally describes shareholders: seeking only profitability and monetary gain, at the expense of a healthy social fabric.

Stakeholders to the rescue


The challenge now is to identify the solutions that can limit the negative impacts of AI and the shareholder-2.0 mentality driving it. We need to change gears and move to a stakeholder system in which each actor has the capacity to act responsibly.

First of all, the academic sphere is mindful of the tensions between ethics and AI and has put numerous initiatives in place. Some involve taking a step back, such as the Ethics and Society Review at Stanford University, which requires AI researchers seeking funding to assess any potential negative impact of their work on society before being green-lighted [11]. Researchers are also tackling ethical dilemmas head-on, as with the WeBuildAI initiative [12], a collective and participatory framework that enables people to build algorithmic policy: by compiling the views of different stakeholders on ethical dilemmas, researchers construct a computational model that enables an AI to make recommendations that are both efficient and ethical. A similar project, Tournesol [13], aims to create an algorithm that recommends videos by assessing their ethical and social impact. You can participate right now by rating YouTube videos on how recommendable, actionable, educational or entertaining you think they are.
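The aggregation idea behind such participatory projects can be illustrated very simply. In this sketch (videos, contributors, criteria and scores are all made up, and this is not Tournesol’s actual model), content is ranked by averaging several contributors’ per-criterion ratings:

```python
# Toy sketch of participatory scoring: average contributors' ratings
# across criteria, then across contributors, and rank videos by the
# combined score. All names and numbers are invented for illustration.

ratings = {
    "video_a": {"alice": {"reliable": 4, "educational": 5},
                "bob":   {"reliable": 5, "educational": 4}},
    "video_b": {"alice": {"reliable": 2, "educational": 3},
                "bob":   {"reliable": 3, "educational": 2}},
}

def score(video):
    per_user = [
        sum(marks.values()) / len(marks)   # mean over this user's criteria
        for marks in ratings[video].values()
    ]
    return sum(per_user) / len(per_user)   # mean over contributors

ranked = sorted(ratings, key=score, reverse=True)
print(ranked)  # "video_a" ranks first: both contributors rate it higher
```

Real systems weight raters and criteria far more carefully, but the principle is the same: the recommendation reflects a compiled human judgement rather than raw engagement.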

Secondly, projects like WeBuildAI or Tournesol also require individual commitment: we can all help limit the negative social impact of algorithms. More generally, it is important that we use AI-related tools responsibly. This requires an effort to think critically, to get out of our “filter bubble”, and to step out of our matrix. We thus become better informed while teaching the algorithms, in turn, that the content they send our way is not always the content we want.

Additionally, firms and their employees must learn to assess the negative social impact of their algorithms rather than focusing only on profitability and performance. They must build ethics into their processes and become accountable through transparency. A common will to address these issues is emerging among the tech giants, for example with the creation of the Partnership on AI coalition by Amazon, Apple, Facebook, Google, IBM and Microsoft, which focuses on developing benchmarks and best practices for AI. This is a good starting point, but there is still a long way to go [14].

Finally, it is essential that governments impose more regulation on GAFA. The European Union is taking the lead, notably with the GDPR, a regulation aimed at controlling the use of data in general, and with the European Commission’s White Paper [15], which proposes a framework for managing these new tools responsibly. States must remain at the forefront of AI issues while consulting specialised experts, so that they can play an active role in regulating and monitoring the use of AI.

The importance of education

The engagement of all stakeholders will only be possible with an awareness of what is at stake in AI. Still a source of fantasies and preconceptions, AI remains a very opaque tool that few people understand. Vague jargon such as “neural network” or “deep learning” only adds to the confusion about how to approach AI and understand its dangers.

An employer like Amazon that refused to hire someone because they are Black would immediately be breaking the law and cause a scandal. Yet the same standard is not applied to AI, which remains poorly understood today. NGOs are therefore emerging, such as the AI Impact Alliance [16], which organises workshops and conferences on AI strategies and solutions in order to increase AI’s positive impact on social welfare and to raise public awareness of the tensions between ethics and AI.

In conclusion, there is a genuine effort to educate and raise awareness on the issues posed by AI that we all need to embrace. And the good news is that by reading this article, you have already taken a step in the right direction.

For a full list of footnotes and references used in this article, click here.


