AI and its Problems – An unavoidable reality or a dystopia?

David Santos, IE Business School BBA student finalist in the CoBS 2022 student CSR article competition, clears the mist on AI and contends that legislation, education and setting society’s values have a vital role to play.

From Alan Turing’s 1950 paper “Computing Machinery and Intelligence”, which opened with the question “Can machines think?”, to the present day, AI has experienced an exponential and vertiginous evolution. It went from a computer programmed to play chess in 1957 to online assistants like Siri or Alexa integrated into our phones. And the truth is, AI has not hit its ceiling yet; we will keep witnessing the countless disruptive possibilities still to come.

Interacting with AI in our everyday devices is more common than ever, even if we do not notice it. Have you ever thought about how Netflix recommends the best choices for your taste? Or how social media tailors content to keep you immersed in its platforms for as long as possible? These examples show how AI has been gradually introduced into our lives.
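To make the tailoring idea concrete, here is a minimal sketch of content-based recommendation – the family of techniques such platforms build on. The titles, genre tags and the simple Jaccard score are illustrative assumptions; real recommender systems are far more sophisticated.

```python
# Toy content-based recommender: score unseen titles by how similar
# their genre tags are to what the user has already watched.

def similarity(a, b):
    """Jaccard similarity between two sets of genre tags."""
    return len(a & b) / len(a | b)

watched = {"Inception": {"sci-fi", "thriller"}}          # user's history
catalog = {
    "Interstellar": {"sci-fi", "drama"},
    "The Notebook": {"romance", "drama"},
}

# Each candidate gets the best similarity against any watched title.
scores = {
    title: max(similarity(tags, seen) for seen in watched.values())
    for title, tags in catalog.items()
}
best = max(scores, key=scores.get)
print(best)  # Interstellar – it shares "sci-fi" with the watch history
```

The same principle, scaled to millions of users and far richer signals (viewing time, clicks, ratings), is what keeps a feed tailored to your taste.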

This undeniable rise in its use has provoked ethical and social concerns about the implementation and use of AI. Could AI turn our lives into a dystopian science-fiction movie, or is that just the worst of omens? And if it were the case, how could we tackle it? This article aims to answer these questions.

Right here, right now

Knowing where AI stands at the moment and where it can go is key to identifying the potential negative social impact it could cause. Artificial intelligence has not yet deployed its full potential. To understand where AI is now and where it could be in a (not so distant) future, we need to differentiate among the different types of AI. There are four types, sorted into two groups: the machines we have already created and the machines that have not been built yet.

The first group, the machines we already have, is made up of reactive machines and limited memory machines. Reactive machines execute specific actions – the chess-playing computer, for instance, receives a stimulus from a player and responds with the most suitable move.

Limited memory machines are capable of performing certain actions after collecting data and making predictions that enable them to respond (Hintze, 2016). Self-driving cars are the flagship example of this type of AI; they can already be seen on roads using sensors to collect data on current traffic, road limitations, and other vehicles. All this information is stored for only a limited amount of time. It is transient, as opposed to the human experience of driving, which gradually improves.
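The contrast between the two existing types can be sketched in a few lines of code. This is a hypothetical toy, not a real driving system: the distance thresholds and the short observation window are illustrative assumptions.

```python
# Toy contrast between the two existing types of AI described above.

class ReactiveAgent:
    """Responds only to the current stimulus, like a chess engine
    evaluating the position in front of it."""
    def act(self, obstacle_distance):
        return "brake" if obstacle_distance < 10 else "drive"

class LimitedMemoryAgent:
    """Keeps a short, transient window of recent observations,
    like a self-driving car tracking how fast a gap is closing."""
    def __init__(self, window=3):
        self.window = window
        self.recent = []

    def act(self, obstacle_distance):
        self.recent.append(obstacle_distance)
        self.recent = self.recent[-self.window:]  # memory is transient
        closing = len(self.recent) >= 2 and self.recent[-1] < self.recent[-2]
        return "brake" if obstacle_distance < 10 or closing else "drive"

car = LimitedMemoryAgent()
print([car.act(d) for d in [30, 20, 12]])  # ['drive', 'brake', 'brake']
```

The limited-memory agent brakes while the obstacle is still far away, because its brief history tells it the distance is shrinking – exactly the kind of prediction a purely reactive machine cannot make.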

The second group, composed of machines that do not yet exist, is divided into theory of mind and self-awareness, concepts that have not yet been realized. Theory of mind machines would understand that living things have feelings and emotions; at some point, machines would be able to identify others’ thoughts. This would imply that machines could adjust their behavior and the way they interact with others depending on what they perceive around them.

The last step for artificial intelligence after implementing theory of mind, and perhaps the most potentially dangerous one, is self-awareness: giving a machine human-like awareness, a more advanced consciousness than the one described by theory of mind. A self-aware machine would be conscious of its own processed emotions while also understanding external stimuli and the emotions involved in them (Hintze, 2016).

It is undeniable that AI has gradually contributed to improving our welfare up to the present. For instance, during the recent pandemic the world suffered, AI helped achieve medical goals by reporting accurate information and increasing the efficiency and speed of medical processes and research efforts. Firstly, on a molecular scale, it helped with vaccine discovery and testing. Secondly, on a clinical scale, it contributed to planning hospital capacity, which was clearly exceeded at the time. Finally, on a societal scale, it offered reliable forecasts and predictions of COVID-19 cases while fighting the infodemic – an overload of information at anyone’s fingertips (Bullock et al., 2020).


AI and Technology: Can anything go wrong?

In 2018, Elon University and Pew Research Center published the results of a survey asking technology experts, among others, whether people would be better off in 2030 with the presence of AI. The results were as follows: 63% of respondents answered that people’s lives would be improved thanks to AI, while the remaining 37% answered the opposite (Anderson et al., 2018).

What AI can offer has been proven over recent years, not only during the pandemic but also in contributing to the Sustainable Development Goals (SDGs), among other examples. In this sense, AI has enabled projects that advance a more sustainable model of cities by reducing waste or increasing the efficiency of processes like commuting (Gupta & Degbelo, 2022). Without any doubt, it is among the greatest disruptive technologies of the last few years. Therefore, the question arises: what can go wrong with AI?

Since AI is not completely developed, we cannot safely measure to what extent it could affect our lives. However, at its current stage of development, we can identify the impact it is already having on our society, as well as what could follow. Among the 37% of survey respondents with a negative perspective on AI, experts like John Sniadowski or Erik Brynjolfsson stated that AI, if not used correctly, would contribute to a faster and more intense concentration of wealth and power. This was the most recurring problem raised in the survey, as AI could provoke a massive increase in inequality between the group of people who control this technology and the rest.

As happens with any disruptive technology, it is difficult to identify the long-term hidden negatives in time. We saw this concern after the internet entered our lives, when problems like internet addiction and information overload appeared for the first time. The purpose of adopting new technology is to make our lives easier, and AI is no exception: it aims to make the world a better place by reducing disease, diminishing waste, ending inequality, and improving our quality of life. Unfortunately, certain problems can appear down the road, as has happened in the past.

Since the implementation of AI, many jobs have been replaced by machines, making those roles outdated. Examples include customer support bots and self-driving cars, which replace human assistants and drivers respectively. Due to this technological shift, a fear of losing one’s job to a machine has spread through society.

Plus, AI has raised concerns about data privacy. To what extent can companies know and control our data? Data has been deemed the new currency of the Fourth Industrial Revolution, in which AI and big data play a key role. While you are reading this article, a huge amount of data is being collected by cookies and then processed by multiple companies and servers for all sorts of purposes. Implementing AI in our world means accepting that we will have to share our data, as AI works hand in hand with big data. However, that does not mean that anything goes: there have to be limits that protect users and their personal information.


AI: We are not heading for a dystopia

Even though we cannot draw an accurate picture of what the future will be, we can try to sketch the near future on the basis of what we have so far. Artificial intelligence has been created by humans with the objective of improving our day-to-day lives. Hence, since it is created by humans, we should always retain control over it to ensure that this objective is always met and that it is not used for other purposes. Understanding how far AI could go, and its wide-ranging possibilities, shows that at a certain point it could exceed human intelligence and capabilities.

To avoid this dark scenario and the problems it would entail, we need to ensure that technology matches society’s values in every process where it is used. But it is not only a matter of knowing those values; it is about spreading them throughout society and equipping people with a critical and ethical sense. Bryan Johnson, founder and CEO of Kernel, said in his interview for Pew Research’s survey that the solution lies in prioritizing human quality of life over human enhancement. Failing to do so would lead to human irrelevance, as rapid progress would take precedence over individual welfare.

As said before, to prevent the small group of people who own this technology from concentrating ever more power and wealth, we, as a society, have to establish and clarify our values. AI is a powerful technology that should only be used for fair purposes that legitimize its use. Government then has a key role to play in raising awareness from the beginning, integrating ethics and technology-awareness programs into schools. If people know when technology is being used unfairly, they will be able to identify such conduct and stop it in time.

Regarding the growing fear of job loss, the disappearance of certain jobs is inevitable, but that does not mean the end of the labor market as we know it. According to Gartner Inc., the AI industry will create more jobs than it eliminates, with more than 2 million new positions by 2025 (Loten, 2017). Therefore, the solution is to acknowledge this shift in the labor market and adapt our education systems so that they teach the right skills.

New technologies have meant a very abrupt and fast change for us. For this reason, legislation has not been updated accordingly and we face dangerous legal loopholes. The Fourth Industrial Revolution has to be accompanied by updated and enforced regulation in order to protect users and avoid widening the inequality gap. Companies using big data and artificial intelligence need to be clear about how they are going to use data and for what purposes.

Above all, users need full control of their data, meaning that they can stop sharing it with firms and erase their digital footprint. To ensure this, legislation needs to be prepared to address all types of issues regarding data policies so that users are protected at each step of the process.

AI and Technology: A bright future

As a society, we need to take advantage of all the opportunities that technology offers us. AI, in particular, provides a wide variety of possibilities, some of them still unimaginable. It is also our responsibility to use all these new means for fair purposes to prevent bigger issues. Unfortunately, even with fair use of AI, problems like job loss, wealth concentration, and data abuse could still appear.

For this reason, it is important to raise awareness about the upsides and downsides that new technologies could have. Governments that succeed in doing so will give society a better chance to live together with AI and make the most of it.


– Anderson, J., Rainie, L., & Luchsinger, A. (2018). Artificial intelligence and the future of humans. Pew Research Center, 10, 12.

– Bullock, J., Luccioni, A., Pham, K. H., Lam, C. S. N., & Luengo-Oroz, M. (2020). Mapping the landscape of artificial intelligence applications against COVID-19. Journal of Artificial Intelligence Research, 69, 807-845.

– Gupta, S., & Degbelo, A. (2022). An Empirical Analysis of AI Contributions to Sustainable Cities (SDG11). arXiv preprint arXiv:2202.02879.

– Hintze, A. (2016). Understanding the four types of AI, from reactive robots to self-aware beings. The Conversation.

– Loten, A. (2017). AI to drive job growth by 2020: Gartner. Wall Street Journal.

David Santos, IE University
