
Artificial intelligence is no longer just about smarter systems; it’s about smarter societies. Atul Gupta, winner of the 2025 CoBS CSR Article Competition at IIM Bangalore, contends that as AI quietly rewires how we work, decide, and govern, it demands more than technical mastery. It calls for moral clarity. The real question isn’t how intelligent our machines become, but whether we can build a world where that intelligence serves justice, dignity, and the common good.
The question of how humans interact with artificial intelligence (AI) to ensure the common good demands a multidisciplinary examination of the ethical, economic, and sociopolitical dimensions of AI integration. At its core, this inquiry investigates whether AI systems—despite their transformative potential—are being designed and governed in ways that prioritize collective welfare over private profit, democratic values over algorithmic control, and equitable outcomes over efficiency-driven optimization.

The common good, in this context, refers to societal conditions that allow all individuals to flourish, including access to justice, healthcare, education, and economic opportunity. Yet, as AI increasingly mediates these domains, its deployment often reflects and exacerbates existing power asymmetries, raising urgent questions about who benefits, who governs, and at what cost.
The Need for a Holistic Framework
The urgent need for a robust analytical framework to examine how humans interact with AI to ensure the common good stems from technology’s growing, yet ambiguous, role in shaping societal outcomes. As AI systems increasingly mediate access to healthcare, education, employment, and justice, their design and deployment often reflect and amplify existing power asymmetries, whether through opaque algorithmic decision-making that erodes human agency, extractive business models that concentrate benefits among tech elites, or surveillance infrastructures that normalize unprecedented social control.
Without systematic tools to dissect these dynamics, we risk perpetuating “solutionism” – the naive assumption that AI inherently advances humanity – while overlooking how its implementation frequently entrenches inequality under the guise of innovation.
The framework developed herein is rooted in the need to address critical blind spots in current AI governance discourses. Traditional approaches tend to focus narrowly on technical optimization and regulatory compliance, often neglecting the deeper, structural forces at play. By interrogating how power and capital manifest in technological systems, this article reveals that AI’s influence extends far beyond efficiency metrics to shape human behaviour, economic relations, and social hierarchies.
This broader perspective necessitates a three-dimensional analysis, one that critically examines the interplay between psychological conditioning (psycho-politics), economic exploitation (Marxian productivity), and institutionalized control (social control).
The Triaxial Framework
Axis 1: Psycho-Politics
The inclusion of psycho-politics as a core dimension for analysing AI’s societal impact is grounded in Byung-Chul Han’s (2017) seminal critique of neoliberalism’s evolution from overt oppression to subtler, technology-mediated forms of control. Building on Foucault’s (1977) framework of disciplinary power, Han argues that contemporary governance operates through “smart” coercion—algorithmic systems that engineer voluntary submission by shaping desires, behaviours, and self-perception.
This psycho-political paradigm is exemplified by AI-driven platforms that deploy hyper-personalized nudges (Yeung, 2017), gamified compliance (e.g., Uber’s behavioural incentives), and affective computing (e.g., emotion-tracking in customer service AI), all of which erode autonomy while masquerading as empowerment.
To ensure AI serves the common good, organizations must conduct a psycho-political analysis, scrutinizing how AI influences power dynamics, worker autonomy, and behavioural control. This involves auditing AI for hidden coercion (e.g., surveillance, algorithmic nudging), assessing employee well-being under data-driven management, and democratizing AI governance to resist exploitative optimization.
By prioritizing transparency, human oversight, and solidarity-driven design—replacing efficiency fetishism with equitable outcomes—organizations can dismantle AI’s neoliberal grip. Such corrective measures not only foster ethical workplaces but also model societal resistance against psycho-political domination, realigning technology with collective welfare over corporate and state control.
Axis 2: Marxian Productivity
The Marxian axis is necessitated by the demonstrable ways AI replicates and intensifies capital’s extractive logic. While mainstream economics (e.g., Brynjolfsson & McAfee [2014]) frames AI as a neutral productivity booster, Marx’s (1867) analysis of alienation—updated by Braverman’s (1974) deskilling thesis and Fuchs’ (2020) digital labour critique—reveals how AI systems dispossess workers of expertise while centralizing value in tech firms. For instance, gig platforms demonstrate that AI-enabled “optimization” often means wage suppression and task fragmentation.
This dimension is indispensable because purely managerial approaches (e.g., Bostrom’s AI risk theory) ignore how AI’s material benefits are distributed—a fatal omission when most AI-generated wealth flows to the top 10% of income earners (WEF, Future of Jobs Report 2020).
To counter AI’s extractive tendencies, organizations must enforce symbiotic design (ensuring AI augments rather than replaces labour), value redistribution (tying productivity gains to worker profit-sharing and wage increases), and worker governance (embedding unions or co-determination in AI deployment). This requires strict audits to prevent deskilling, binding contracts to share AI-derived wealth, and institutionalized labour oversight—as seen in German co-determination models. The goal is not to reject AI but to subordinate its deployment to worker welfare, ensuring productivity gains benefit labour rather than entrench inequality.
Axis 3: Social Control
The social control axis completes our tripartite framework by exposing how AI systems institutionalize structural power through algorithmic governance. Where psycho-politics examines subjective conditioning and Marxian analysis reveals economic extraction, this dimension uncovers how AI embeds discrimination in bureaucratic and social systems—automating bias under the veneer of objectivity.
From predictive policing’s racial profiling (Eubanks, 2018) to workplace surveillance’s erosion of autonomy, AI transforms Weberian bureaucracy into a real-time modulation machine (Deleuze, 1992), privileging efficiency over equity. Without this lens, analyses miss how AI materializes power: not just through individual manipulation or class domination, but through systemic rules that codify hierarchy.
To counter AI’s undemocratic governance, three key interventions are essential: mandatory algorithmic audits for high-stakes systems (modelled on NYC’s accountability law), enforced human oversight for consequential decisions (like parole or welfare approvals), and decentralized data sovereignty models (such as Barcelona’s DECODE project) that transfer power from corporations to communities – collectively ensuring AI systems remain transparent, accountable, and aligned with public interest rather than institutional or commercial control.
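To make the audit intervention concrete, the sketch below shows one metric such an algorithmic audit might compute: the disparate impact ratio (the selection rate of a protected group divided by that of a reference group). The data, group labels, and the 0.8 review threshold are illustrative assumptions for this article, not requirements of any specific law or deployed system.

```python
# Illustrative sketch of a single audit metric for a high-stakes AI system:
# the disparate impact ratio. All data and thresholds here are hypothetical.

def selection_rate(decisions):
    """Fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; a value of 1.0 indicates parity."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical audit sample: 1 = approved, 0 = denied.
protected_group = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # selection rate 0.3
reference_group = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]   # selection rate 0.6

ratio = disparate_impact(protected_group, reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common rule of thumb (used here only as an illustrative audit policy)
# flags ratios below 0.8 for mandatory human review.
if ratio < 0.8:
    print("Flag: decision system referred for human review.")
```

A real audit regime would of course go further—intersectional group analysis, confidence intervals, and periodic re-testing—but even this minimal check makes the article’s point operational: the disparity is measurable, so accountability can be mandated.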

The triaxial framework’s dialectical power lies in exposing how AI’s governance logic operates holistically: psycho-politics manufactures consent (via behavioural nudges), Marxian dynamics extract capital (through labour alienation), and social control enforces compliance (via surveillance) – with each dimension revealing critical blind spots when examined in isolation.
Only their integrated analysis captures AI’s totalizing nature: ethical guidelines alone miss exploitation, labour critiques overlook internalized control, and surveillance studies ignore economic drivers. All three perspectives are therefore indispensable for understanding and redirecting AI’s societal impact toward the common good.
Humanizing AI in Healthcare: A Triaxial Framework for the Common Good

The integration of AI into healthcare provides a powerful case study for applying the Triaxial Framework—examining how psycho-politics, Marxian productivity, and social control shape human-AI interaction and determining how these forces can be harnessed to serve the common good.
Axis 1: Psycho-Politics in AI-Assisted Diagnosis
Neoliberal healthcare models drive AI adoption through efficiency narratives, using algorithmic nudges to position tools like IBM Watson as objective authorities, subtly conditioning physicians to defer to machine judgment. This creates a threat of eroded autonomy, where clinicians feel compelled to comply with AI outputs even against their expertise, reducing medicine to algorithmic compliance.
The counterforce emerges through participatory design, exemplified by Mayo Clinic’s explainable AI, which restores agency by providing transparent reasoning paths—allowing doctors to interrogate suggestions rather than passively accept them, thus reframing AI as consultative rather than authoritative. Together, these elements reveal the psycho-political dynamics of AI in healthcare.
Axis 2: Marxian Productivity in Hospital Operations
Capital accumulation drives AI investment in healthcare, prioritizing cost-cutting measures like AI-powered staffing algorithms that reduce nurses to mere data-entry operators, exacerbating labour alienation. This creates a threat of widening wealth inequality, where productivity gains disproportionately benefit administrators and insurers while frontline workers suffer deskilling and deteriorating working conditions through AI-imposed care plans.
The counterforce of labour sovereignty, demonstrated by Kaiser Permanente’s partnership model, counters these effects by redistributing efficiency gains through improved wages and staffing ratios while involving clinicians in AI design to ensure tools enhance rather than replace human expertise. This triad reveals the Marxian tensions in healthcare’s AI transformation.
Axis 3: Social Control in Predictive Patient Management
Surveillance-enabled care drives AI systems like Epic’s predictive analytics to categorize patients by risk levels, leveraging data extraction to rationalize invasive interventions that disproportionately target marginalized groups through tools like opioid monitoring algorithms. This creates a threat of entrenched structural bias, where opaque algorithms institutionalize racial and socioeconomic disparities—evident in skewed readmission predictions for low-income patients—while normalizing perpetual health surveillance that erodes privacy.
The counterforce of collective governance, exemplified by the VA’s patient advocacy review boards, implements bias audits and mandates human oversight, establishing accountability through “algorithmic due process” that empowers patients to contest AI-mediated decisions. This triad exposes the social control mechanisms embedded in healthcare AI systems.
Triaxial Tensions and Interventions
The framework reveals interdependent dynamics: psycho-political conditioning (Axis 1) converges with social control (Axis 3) when AI steers clinicians toward surveillance-driven care, while Marxian alienation (Axis 2) intensifies as privatized productivity gains deepen workforce precarity.
Effective interventions must simultaneously uphold the subsidiarity principle (ensuring AI augments but never overrides human judgment), enforce value redistribution (directing AI-derived efficiency gains toward worker benefits and patient subsidies), and implement algorithmic justice (through independent bias audits and patient-worker co-governance structures). This tripartite approach realigns AI systems with collective welfare rather than neoliberal or extractive logics.
The triaxial framework, however, extends well beyond healthcare, systematically diagnosing AI’s societal impacts through three constitutive forces: psycho-political conditioning (reshaping professional authority), Marxian productivity (extracting commodified value), and social control (institutionalizing bias).
Its analytical power lies in revealing these interconnected dynamics across sectors—from education (adaptive learning’s pedagogical influence) to criminal justice (predictive policing’s discriminatory outputs)—while providing transferable governance strategies. By exposing AI’s embedded power structures, the framework transforms fragmented ethical concerns into actionable interventions for equitable technological governance.
A critical lens to measure AI’s social impact
This article has demonstrated that the triaxial framework—through its integrated analysis of psycho-political conditioning, Marxian productivity, and social control—provides a critical lens for interrogating AI’s societal impact, ensuring its alignment with the common good.
By exposing how AI systems erode autonomy, entrench inequality, and institutionalize bias across healthcare, education, and criminal justice, the framework transcends sector-specific critiques to offer a unified approach for governance. Its dialectical strength lies in revealing the interplay between hidden coercion, extractive capital, and systemic control—transforming abstract ethical concerns into actionable interventions that prioritize equity, democratic oversight, and collective welfare.
As AI’s influence expands, this structured analysis becomes indispensable for redirecting technological development away from neoliberal exploitation and toward a future where human agency, distributive justice, and public accountability define AI’s role in society.

Useful links:
- Link up with Atul Gupta on LinkedIn
- Read a related article: Code Green: How AI is Reshaping Sustainable Finance
- Discover Indian Institute of Management Bangalore
- Apply for the IIM Bangalore 1-year or 2-year MBA.
Learn more about the Council on Business & Society
The Council on Business & Society (CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business, society, and planet, including the dimensions of sustainability, diversity, social impact, social enterprise, employee wellbeing, ethical finance, ethical leadership, and the place responsible business has to play in contributing to the common good.
- Follow the CoBS on LinkedIn
- Download magazines and learning content from the CoBS website downloads page.
Member schools of the Council on Business & Society.
- ESSEC Business School, France, Singapore, Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Indian Institute of Management Bangalore, India
- Keio Business School, Japan
- Monash Business School, Australia, Malaysia, Indonesia
- Olin Business School, USA
- Smith School of Business, Queen’s University, Canada
- Stellenbosch Business School, South Africa
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.

