The forthcoming digital revolution holds both the key to the door of happiness and the lid of Pandora’s box. Dongdi Chen, Warwick Business School, runner-up in the CoBS 2022 student CSR article competition, opens it all up.
The Inevitable Digital Revolution: Building a harmonious society with AI by Dongdi Chen.
Have you ever been struck by the benevolent portrayals of AI in fantasy films, such as Star Wars’ C-3PO and R2-D2 or WALL-E, which reflect our hopes, dreams, and anticipation of AI? Or have you been surprised that a common purchase on Amazon’s platform could trigger an amazing series of customized services, such as extensive recommendations catering to your preferences? These are just two microcosms of AI-based functionalities in the era of digitalization, one of the defining characteristics of the 21st century. Much like the disruptive impacts of the Industrial Revolution in the 18th century, the digital revolution has exploited the power of computers to substitute, supplement and amplify the routine mental tasks performed by humans (Makridakis, 2017), while seamlessly incorporating AI into our daily lives – from the management of social networks (Gadek et al., 2018) to organizational decision-making (Jarrahi, 2018), and especially the prediction of sophisticated social or natural phenomena (Armstrong, 2014).
Thanks to AI’s data-driven intelligence and automation capabilities, current research has reached an emerging consensus that advanced digital tools may offer substantial economic benefits, particularly boosting productivity and lifting GDP growth through round-the-clock automation and relatively precise, data-based decisions. This frees workers from monotonous jobs and gives them unprecedented opportunities to enjoy a higher quality of life. However, as every coin has two faces, novel social issues also arise with the widespread use of AI and automated robots, and the most prominent problems fall into three categories: unemployment and income inequality; cybersecurity and privacy; and ethical issues represented by bias and discrimination. In the following paragraphs, we analyze these problems in turn to identify their interrelations and causes, and then explore effective methods to address these emerging social problems and ensure a smooth digital transformation from the perspectives of corporate and government governance.
Analysis of socio-technical problems: The conflict of human-machine interactions and mismatch between technologies and social institutions
The first two industrial revolutions in human history, whether the Age of Steam or the Age of Electricity, took at least 200 years to transform human society, during which people gradually formed entirely new ways of working and living with steam engines and electricity (Peters, 2020). In recent years, however, the rapid development of digital technologies, epitomized by highly flexible and disruptive AI, has generated novel definitions of work, organization, and lifestyle. This has created a mismatch between digital technologies and inflexible social institutions that developed in, and remain rooted in, the industrial age – causing negative social problems and conflict in human-machine interactions.
Among the three categories of social impacts, the first and foremost is massive technological unemployment and the accompanying income inequality. As mentioned, a majority of the work required by the economy of the industrial age is fundamentally routine (Ford, 2013). Owing to its repetitiveness and low skill requirements, an increasing number of labour-intensive agricultural and manufacturing industries have gradually adopted automated robots to take over manual work and replace lower-skilled labour in order to reduce labour costs (Goyal and Aneja, 2020). To a considerable extent, this results in higher unemployment and disruptive changes to existing working positions. For instance, the U.S. and Japan have installed one automatic vending machine for every 50 labourers to save human power (DecResearch, 2020). Moreover, accompanying the change in work is a worsening income distribution structure and rising income inequality. As companies continue to displace workers with robots and automatic machines on a large scale, increasing demand and a limited supply of talent significantly raise the income of high-tech workers while widening the income gap between high- and medium-skill workers and low-tech workers (Schang and Almirall, 2021).
Moreover, if the intelligent functions of AI are used unreasonably or illegally, they will exacerbate existing social problems and lead to devastating consequences, especially in cybersecurity and social ethics – areas in which people readily recognize society’s pain points. On the one hand, the social network is a vital feature of the new ecology of work in the digital era, and cybersecurity and core interaction are two interrelated sides of it. With the emergence of digital platforms and cloud technologies with powerful storage capabilities, B2B, B2C, and C2C information and knowledge sharing have grown exponentially (Vuori and Okkonen, 2012). As a result, a large number of potential hackers have attempted to use AI to identify vulnerabilities in networks and then carry out data theft and privacy violations. This not only harms the interests of individuals and organizations but also deepens the instability of the networked society, especially within widely used software systems like Zoom, making AI-enabled cybersecurity a social issue in its own right.
On the other hand, just as technically sound cloning technology cannot pass the test of social ethics, AI likewise raises ethical concerns owing to bias and discrimination in machine learning algorithms. In recent years, gender and racial discrimination have repeatedly surfaced in popular AI application fields such as AI-aided resume screening, facial recognition, and criminal risk assessment in judicial systems. A typical example is Facebook’s news feed algorithm, which distributed deliberately fake news stories and misinformation that unfairly biased voters against Hillary Clinton and influenced the election outcome in favour of Donald Trump, while deepening societal divisions among Americans through “filter bubbles” (Isaac, 2016). Because these algorithms are designed by conscious humans, they will inevitably, and often inadvertently, reflect societal values, biases, and discriminatory behaviour.
Solutions: Building a better world with AI requires the concerted efforts of enterprises and government
As Elon Musk (2021) pointed out, “AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way”. The same holds whether we speak of AI or even nuclear weapons: technology itself is neither good nor evil. Overall, the social problems above are not caused by AI itself but are rooted in its improper use and in the conflict between innovative technologies and the current social system. Therefore, the key to mitigating these negative effects lies in combining two main strategies – formulating norms for AI usage and regulating human-machine relationships – which requires the joint participation of enterprises and government.
Firstly, AI-induced technological unemployment can be categorized as structural unemployment, the solution to which lies in the promotion of knowledge sharing and human-machine collaboration by enterprises, together with government employment-promotion policies. On the one hand, if a firm is considering adopting AI to reduce costs, the leadership team should take an overall view of current working positions within the organization’s strategic framework and seek an effective human-machine collaboration mechanism in which human and digital workers play to each other’s strengths and cover each other’s weaknesses, instead of crudely replacing human labour with AI. In addition, enterprises should provide employees with the necessary AI skills training and sufficient education programmes, so that employees can build trust in and recognition of AI based on a full understanding of it. On the other hand, for occupations at high risk from automation, such as manufacturing, the government should proactively invest in diverse vocational training programmes, developing a variety of useful courses that help workers adapt easily to new technological job positions. Finally, firms should actively assume corporate social responsibility within the framework of social security policies and offer material aid to groups or communities affected by temporary unemployment, thereby reducing the income gap to a certain extent and supporting social stability during businesses’ digital transformation (Siegrist and Cvetkovich, 2000).
Moreover, in addressing AI-enabled cybersecurity and privacy issues, technical support from enterprises and regulatory guarantees from society are also key. Interestingly, AI is both a facilitator of cyberattacks and a dedicated supporter of effective cyber defence. Thanks to AI’s efficient data-processing capability and its capacity for ongoing learning, firms can not only identify unknown threats and existing vulnerabilities through automatic network tracing and research, but also secure authentication whenever a potential user attempts to log into an account, via AI-powered facial recognition or fingerprint scanning (Daniel, 2021). However, the smooth operation of sustainably effective cybersecurity programmes is inseparable from external institutional guarantees, which necessitates AI and privacy-protection legislation. The government should promptly enact and update laws and regulations that provide external support for network security and meet the needs of data and privacy protection.
Furthermore, as mentioned above, AI ethics are fundamentally derived from the ethics of their creators in the context of organizational and social ethics; like a mirror, the technologies reflect a narrow and biased vision of current society. Therefore, to solve the problem of social prejudice and discrimination caused by AI, eliminating discrimination in the design of machine learning algorithms is a direct technical measure from the enterprise perspective, while eliminating gender and racial discrimination through moral education and regulation is a more critical solution to the root cause. On the one hand, enterprises should strengthen self-regulation to standardize organizational ethics, in particular exercising strict supervision over the design and application of AI technologies so that these digital workers comply with basic business ethics: fairness and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology (Parloff, 2016). On the other hand, anti-discrimination education and legal norms should be actively promoted at government level to contain any conscious or unconscious discriminatory behaviour, and thus enable every individual to thrive equally in society.
How to better embrace changes is the most significant lesson AI will teach us
All in all, the societal issues raised by AI illustrate an important truth: it is vital that the development and application of AI technologies be shaped by a diverse range of voices – social scientists, ethicists, philosophers, economists, lawyers, and policymakers, in addition to engineers and corporations (Knight Foundation, 2017). Embracing disruptive technologies while allowing social systems and mindsets to adapt actively to such change can maximize the dividends brought by transformation. Indeed, this may be the crucial lesson that AI brings us, beyond its technological assistance.
Learn more about the Council on Business & Society
The Council on Business & Society (The CoBS), visionary in its conception and purpose, was created in 2011, and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business and society including sustainability, diversity, ethical leadership and the place responsible business has to play in contributing to the common good.
Member schools are all “Triple Crown” accredited – AACSB, EQUIS and AMBA – and are leaders in their respective countries.
- ESSEC Business School, France-Singapore-Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Keio Business School, Japan
- Stellenbosch Business School, South Africa
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.