
Daria Perevertailo, Winner of the 2024 CoBS Student CSR Article Competition at Trinity Business School, explores the power and influence of algorithms and their role in promoting political polarization
Political Polarisation, Artificial Intelligence, and the Proliferation of Filter Bubbles in the Digital Environment by Daria Perevertailo.
“The tremendous expansion of communications in the United States has given this Nation the world’s most penetrating and effective apparatus for the transmission of ideas […]. Words hammer continually at the eyes and ears of America. The United States has become a small room in which a single whisper is magnified thousands of times.”

Thus begins “The Engineering of Consent”, a 1947 article by the American sociologist and “father of public relations” Edward L. Bernays. Despite the 77-year gap, Bernays’ words are more relevant today than ever, as the meteoric technological advances in online communications and AI inspire concern and awe in equal measure. An immense amount of information is now available to any user, which is both an extraordinary achievement and a source of numerous problems.
For one thing, it can be incredibly difficult to navigate the enormous pool of resources. AI-powered recommendation algorithms can help users find relevant resources online by analysing their actions and providing personalised content. Yet they can also severely limit what information a user will see and subsequently lead to intellectual isolation.
In a personally tailored information environment it can be harder to assess one’s existing beliefs and biases. The perpetual reiteration of users’ opinions fosters an ecosystem of conforming viewpoints and creates an impression of universal agreement on certain topics. The concentration of like-minded individuals may make these opinions more radical and less open to reconsideration. The realm of politics is especially susceptible. A recent controlled laboratory experiment by Cho et al. (2020) suggests that algorithm-recommended content can consolidate previously held ideological beliefs and escalate political separation. Thus, the gap between the opposite sides of the political spectrum, i.e. the right and the left, grows ever wider.
AI-powered recommendation algorithm — a well-intentioned enemy?
Algorithm-based recommendations are ubiquitous on the Internet, as they help users filter a boundless amount of information. They are also designed to make the user experience as engaging as possible in order to keep people online longer. The information netizens encounter therefore tends to cater to their interests, as it is more likely to be interacted with. This is certainly true of TikTok, one of the most popular social media platforms today. Its algorithm has become incredibly accurate at generating a stream of recommended clips on the For You Page (FYP) by utilising information not only about your interactions with other users, but also about your own content, the people you follow, the hashtags you use and the videos you like (Hern, 2022). Soon enough you are surrounded by videos from people who share your interests and opinions. But is that always a positive thing?
The ample degree of control users exercise over their online experience allows them to access any kind of information and therefore subject themselves to a variety of perspectives and opinions that may not align with their own. However, it can also result in netizens exclusively pursuing information that validates and reinforces their preexisting beliefs, thus creating a filter bubble — a situation when one is completely insulated against opposing viewpoints.
By constantly searching only for desirable information, users provide the algorithm with data that will be later used to make personalised recommendations, which, in turn, can make it harder to come across resources with opposing views and interact with them in a sensible manner. If you have been surrounded by hundreds of sources that confirm your opinions, why should you trust and fact-check one that doesn’t?
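The feedback loop described above can be sketched in a few lines of code. This is a toy model, not any platform’s actual algorithm: the “viewpoint” scores, the profile-update rule and every number below are hypothetical, chosen only to illustrate how engagement-driven personalisation narrows what a user sees.

```python
import random

random.seed(0)

# Hypothetical catalogue: each item carries a political "viewpoint" in [-1, 1].
catalog = [random.uniform(-1, 1) for _ in range(500)]

def recommend(profile, catalog, k=10):
    """Rank items by closeness to the user's profile; return the top k."""
    return sorted(catalog, key=lambda view: abs(view - profile))[:k]

profile = 0.0                                # the user starts with no lean
for _ in range(30):                          # repeated browse-and-click cycles
    feed = recommend(profile, catalog)
    clicked = random.choice(feed)            # the user engages with a feed item
    profile = 0.9 * profile + 0.1 * clicked  # the profile drifts toward clicks

feed = recommend(profile, catalog)
catalog_spread = max(catalog) - min(catalog)
feed_spread = max(feed) - min(feed)
print(f"catalog spans {catalog_spread:.2f}, feed spans {feed_spread:.2f}")
```

Even though the catalogue spans the whole spectrum, the personalised feed ends up covering only a narrow slice of it: the filter bubble in miniature.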
Defining political polarisation
The term political polarisation is generally understood to mean the process of the right and the left distancing themselves from the ideological centre and heading towards more extreme, radicalised views. It is usually divided into two categories: ideological polarisation and affective polarisation. The former concerns the rift between opinions on policies, while the latter describes the spread of severely negative views of the opposing group.
A review of relevant research suggests that political polarisation has been on the rise in the United States of America. Updated findings from the Pew Research Center indicate that the Democratic and Republican parties have been steadily drifting apart: the former has become somewhat more liberal, while the latter has adopted more conservative views (DeSilver, 2022). Moreover, another article states that the emotional factor has become much stronger, and the ideological divide has extended beyond strictly political concerns (Doherty, 2014). Additionally, countries all over the globe, including South Korea, the UK and France, are witnessing the same ideological rupture (Silver, 2022).
Alongside more traditional explanations for political polarisation, such as the growing cohesiveness of ideologies and the sharpening distinction between parties, the influence of the media is receiving more and more attention in academic circles. The majority of studies conclude that social media does indeed exacerbate ideological and affective political division (Kubin, von Sikorski, 2021).
Research on the impact of recommendation algorithms specifically is scarcer, and the magnitude of their influence remains much debated. Even so, plenty can be said about the fragile autonomy of netizens. For example, several studies have indicated that YouTube’s recommendation algorithm leaves users little real control over their feed, making it more likely that they only see content similar to what they have previously interacted with (Murthy, 2021). Another example can be found in an article (Little, Richards, 2021) describing how TikTok’s algorithm makes its recommendations more radical and hateful over time.
Why is political polarisation perilous?

While it has been argued that political polarisation has certain benefits, such as a more politically engaged population (Kubin, von Sikorski, 2021), the general consensus is that the ever-increasing ideological distance between the right and the left leads to dichotomous thinking and animosity towards people positioned outside the group. The higher the degree of hatred and distrust towards the opposition, the higher the risk of disregarding any information it provides (Cho et al., 2020). High levels of resentment can also drive the deliberate creation of disinformation with the aim of vilifying and dehumanising people with opposing views (Osmundsen et al., 2021).
Furthermore, politically polarised masses are easier to manipulate. Their ideological views become deeply ingrained in their identities, so any attack on their group can be perceived as a personal offence. They may adopt a black-and-white, all-or-nothing attitude and refuse any compromise with their opposition. Individuals inside the group will strive to justify all actions of their fellow members. They may come to see undemocratic measures against the opposition as reasonable, which can imperil freedom of speech and rationalise voter suppression. They might even be willing to sacrifice their own rights and principles in order to advance their cause. Historically, this has been a symptom of a wobbling democracy and impending totalitarianism (Arbatli, Rosenberg, 2021).
For businesses, political polarisation means the necessity to take a stand on every issue, lest they alienate customers from both groups. Their silence can be interpreted as unspoken support for one side of the conflict. If they choose instead to be vocal about their position, they risk losing revenue from people who disagree with their stance. On top of that, if a company has a significant social presence, its opinion on a controversial topic can spawn even more tension around the issue.
Having now discussed the negative effects of personalised social media algorithms and political polarisation, it would be reasonable to contemplate whether recommendation-based algorithms can also provide a solution. Is it possible to utilise machine learning technology to foster a more politically diverse environment online and amplify political literacy?
AI as a force for good?
Artificial intelligence is a tool with endless capabilities; thus it is not implausible to think that it can help bridge the divide. One way is to make recommendation AI algorithms as transparent as possible, so as to increase users’ awareness of how their actions influence their experience. Furthermore, user interaction with online content is rather nuanced, as netizens do not necessarily deliberately avoid opposing opinions (Cho et al., 2020).
Nevertheless, the unrelenting bombardment with the same information may bring about the notion of false universal consensus regarding certain topics. To pierce the filter bubble, the algorithms will need to focus on providing users with search options that are not confined to their personal views. In other words, it can be helpful to concentrate on the general topics that relate to the interests of the user, but do not necessarily conform with them.
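One way such a “bubble-piercing” recommender could work is to keep recommendations on the user’s topics of interest while reserving a share of slots for distant viewpoints. The sketch below is purely illustrative: the topics, scores and quota are hypothetical assumptions, not a description of any real system.

```python
import random

random.seed(1)

# Hypothetical catalogue: each item is a (topic, viewpoint) pair,
# with viewpoint on a [-1, 1] scale.
catalog = [(random.choice(["economy", "climate", "health"]),
            random.uniform(-1, 1)) for _ in range(300)]

def diversified_feed(topic, profile_view, catalog, k=10, quota=0.3):
    """Stay on the user's topic of interest, but reserve a quota of
    slots for items far from the user's own viewpoint."""
    on_topic = [item for item in catalog if item[0] == topic]
    near = sorted(on_topic, key=lambda it: abs(it[1] - profile_view))
    far = list(reversed(near))            # most distant viewpoints first
    n_far = int(k * quota)
    return near[:k - n_far] + far[:n_far]

# A user interested in climate with a strong lean of +0.8 still gets
# some items from the far side of the spectrum.
feed = diversified_feed("climate", 0.8, catalog)
views = [v for _, v in feed]
print(f"feed viewpoint range: {min(views):.2f} to {max(views):.2f}")
```

The design choice is the quota: the feed stays relevant to the user’s interests, yet a fixed fraction of it deliberately does not conform to them.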
In addition, AI could be mobilised to combat false information more effectively, for instance, by analysing both the search history and the content to detect hate speech. Then it can inform the user about the inaccuracies present in the resources and encourage them to be cautious about particular sources if they had a history of producing harmful content.
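A minimal sketch of that second idea — cautioning users about sources with a history of harmful content — might track a per-source ratio of flagged posts. The keyword check below is only a stand-in for a real trained classifier, and every source name and threshold is invented for illustration.

```python
# Placeholder for a trained hate-speech/misinformation classifier.
FLAG_TERMS = {"hoax", "traitor", "vermin"}   # illustrative only

def looks_harmful(text):
    """Stand-in check: real systems would use a trained model."""
    return any(term in text.lower() for term in FLAG_TERMS)

history = {}   # source -> (flagged_count, total_count)

def record(source, text):
    """Update a source's running tally of flagged vs. total posts."""
    flagged, total = history.get(source, (0, 0))
    history[source] = (flagged + looks_harmful(text), total + 1)

def caution_banner(source, threshold=0.5):
    """Suggest a warning once a source's flagged share crosses a threshold."""
    flagged, total = history.get(source, (0, 0))
    return total > 0 and flagged / total >= threshold

record("site-a", "Officials confirm the new policy.")
record("site-b", "The election was a hoax run by traitors!")
record("site-b", "Our opponents are vermin.")
print("warn about site-b:", caution_banner("site-b"))
```

Crucially, the output is a caution shown to the user, not a removal decision — the aim is to encourage fact-checking rather than to censor.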
Or perhaps, it isn’t
Understandably, AI will continue to evolve and improve, yet the question worth asking is whether the direction it takes will benefit our society. To date, advancements in machine learning technology in the social media environment have been largely aimed at making users feel engaged. If the vast ecosystem of the Internet perpetually validates our thoughts, why does it matter that people in the real world disagree? Isn’t it much nicer to be surrounded by individuals who support you?
Moreover, user interaction is lucrative, and outrage at online content sometimes generates more traction than content users simply agree with (Oremus et al., 2021). That does not make such content more thought-provoking. On the contrary, it can make users more prone to emotional responses rather than logical ones, and it can induce hostility towards opponents. It is therefore vital for the developers of AI algorithms to understand their responsibility and the influence their technology can have on Internet users.
Something worth more than money
AI-powered recommendation algorithms are omnipresent online. While they help us find the information we need, they can also drive more drastic political polarisation and spread animosity among people with different opinions by creating and consolidating informational bubbles. Although no consensus has been reached on the extent to which artificial intelligence fuels this problem, and the topic demands more research, we still need to endeavour to make algorithms more transparent to netizens.
Currently, platforms like YouTube and TikTok are trying to make users stay online longer and actively interact with content, since this generates substantial revenue. Accordingly, we should strive to find incentives for corporations to be more open about their practices, as well as to combat the false information and hate speech being actively spread on their apps.

Useful links:
- Link up with Daria Perevertailo on LinkedIn
- Read a related article: How rogue AI and social media are widening the ideological rift
- Read this student article and others in the special June issue of Global Voice magazine #30
- Discover Trinity Business School and apply for the Trinity MBA.
Learn more about the Council on Business & Society
The Council on Business & Society (The CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business and society, including sustainability, diversity, ethical leadership and the role responsible business plays in contributing to the common good.
Member schools of the Council on Business & Society.
- ESSEC Business School, France, Singapore, Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Keio Business School, Japan
- Monash Business School, Australia, Malaysia, Indonesia
- Olin Business School, USA
- Smith School of Business, Queen’s University, Canada
- Stellenbosch Business School, South Africa
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.