
AI holds vast promise for societal good, but its true impact hinges on how we shape its purpose. As Biswayan Das, MBA participant at IIM Bangalore, contends, the challenge lies not in expanding AI’s capabilities but in anchoring them to conscience and control. Without ethical direction, even the most advanced systems risk becoming tools of inequity and overreach rather than engines of collective progress.
AI for Common Good: The tug-of-war for control and conscience by Biswayan Das.

Picture dedicating more than 50 years of life to mastering an art form, investing countless hours in a labour of love. It’s the work that gives the work life – the alchemy of humanness. And then March 2025 rolls around. GPT-4o arrives and, bang, it can do the same thing in four seconds. That’s what’s happening with the latest Ghibli fad – AI-generated images imitating the legendary style of Hayao Miyazaki, flooding every corner of social media.
AI-created images simply cannot equal the essence of the original work. But I can’t help feeling it’s only a matter of time before even more advanced AI models come along, ironing out the flaws and creating copies that are uncomfortably close to the original. So, what’s actually happening here? Is this simply a tribute to the genius of Miyazaki, a celebration of his work? Or is it a low-cost copy, satisfying today’s never-ending hunger for instant gratification and over-consumption? Are we enjoying art, or converting it into another throwaway fashion? It poses a deeper question about AI’s role in our lives.
It’s easy to see the convenience AI presents – automating processes, streamlining workflows, helping students grasp tough concepts, and much more. From disease diagnosis to stock market forecasting, AI has become a silent companion in our daily lives. But where do we draw the line? At what point does AI stop being a tool for efficiency and start thinking for us, giving us the illusion of thought while doing the thinking itself? And when that time arrives, will we even realize what we’ve lost?
The Bright Side of AI
Artificial Intelligence (AI) has been a groundbreaking development for major businesses, enabling innovation while improving efficiency, productivity, and decision-making. Companies are incorporating AI to simplify their processes and react rapidly to evolving market dynamics. AI-based solutions have proven helpful across industries such as manufacturing, finance, analytics, and healthcare. In manufacturing, AI-driven robotics is deployed on production lines, increasing speed and accuracy. In finance, AI algorithms process large datasets to automate fraud detection, model investment strategies, and provide predictive analytics for decision-making. Likewise, in healthcare, AI diagnostics help physicians detect diseases early and suggest individualized treatment regimens. These are just a few of the applications AI offers.
The Big Four accounting firms are also taking important strides in automation through AI. Deloitte’s Zora AI, an autonomous AI agent that can automate tasks, process data, and make strategic recommendations, is a prime example of this new direction: it provides actionable information and streamlines sophisticated processes. EY has likewise announced the launch of the EY.ai Agentic Platform, which incorporates generative AI, machine learning, and automation, and is intended to enhance productivity, risk management, and decision-making across audit, tax, and advisory services (The Big Four Bet on AI Agents: Deloitte and EY Lead the Way, n.d.).
By deploying such AI agents, many firms are tackling consulting problems and addressing client challenges more quickly. AI is also driving innovation beyond traditional industries. Amazon and Walmart, among many others, leverage AI to manage inventory, forecast customer demand, and tailor customer experiences. In the automobile sector, AI systems are being used to develop self-driving cars with real-time data analysis and decision-making. Even in the creative sphere, AI is leveraged to compose music, design advertisements, and generate visual content.
As AI continues to improve and new developments emerge, so do questions around control, accountability, and ethics. Concerns about data privacy, transparency, and the role of human oversight of AI are becoming increasingly prevalent.
Are We in Control, or Just Along for the Ride?

While the potential of AI is irrefutable, AI solutions are only as capable as the data they learn from, and when that data is faulty, biased, or partial, the implications can be severe. From facial recognition software biased against certain racial groups to the black-box nature of decision-making in financial services, AI’s shortcomings have already led to many mishaps. And yet large corporations, and even militaries, keep pushing AI towards greater autonomy.
A study by Lanne, Nieminen, and Leikas, titled “Organisational tensions in introducing socially sustainable AI”, brings to the fore how AI practitioners struggle with values-related, implementation, and impact-related challenges when adopting AI. Organisations routinely confront tensions between the desire for technological progress and the need for ethical oversight (Lanne et al., 2025b).
Value conflicts arise when corporations are unable to reconcile profit goals with ethical AI implementation. For example, though AI can optimize business decision-making, it can also entrench biases against underrepresented groups. Likewise, in medicine, AI can boost diagnostic accuracy but jeopardize patient privacy if information is handled carelessly.
Businesses need to manage these trade-offs by integrating ethical considerations into AI design and release processes. Implementation issues further complicate AI adoption: too much regulation could stifle AI’s potential, while inadequate regulation could cause unforeseen harm. In addition, insufficient interdisciplinary collaboration can create siloed decision-making, where technical specialists focus on functionality and overlook ethical issues. This gap can only be bridged by strong governance structures that combine multiple viewpoints.
AI systems also raise a significant accountability challenge. Autonomous AI applications introduce a further range of ethical concerns. If a self-driving car is involved in a deadly crash, or an AI medical aid misdiagnoses a patient, the question of blame is complicated. Are the programmers responsible, the company that deployed the technology, or the AI system itself?
Existing legal frameworks are not clear on how to apportion liability in such situations, leaving serious gaps in accountability and justice. Beyond that, AI’s societal influence poses perhaps the most consequential dilemma. The polarization-versus-unification challenge raises the question of whether AI will narrow inequalities or widen them. From discriminatory recruitment algorithms to inscrutable lending approval processes, AI technologies can entrench systemic injustices. Organizations must prioritize transparency and explainability so that AI systems remain interpretable and accountable.
The black-box nature of these algorithms further exacerbates the problem. When algorithms operate without clear explanations of the logic behind their decisions, it becomes difficult to challenge unfair or unethical outcomes. In critical sectors like law, healthcare, and financial services, this can cause serious harm. Privacy is another growing concern (The Ethics of AI Agents: Can We Trust Autonomous Decision-Making?, n.d.). AI-powered surveillance systems monitor human behaviour, online activity, and personal information, frequently with insufficient oversight.
This bulk data collection raises serious concerns about privacy infringement, government intrusion, and abuse of information. Last but not least, military AI raises profound ethical issues. Autonomous weapons capable of selecting and striking targets without human intervention blur the line between technological advancement and ethical responsibility.
For the Common Good: Transformative Force or Just a High-Tech Illusion?
For all these concerns, AI still offers an unprecedented opportunity to solve global issues and drive development forward. Balancing competing interests such as privacy, intellectual property, and transparency is one of the significant challenges in implementing effective AI governance (Cheong, 2024).
AI-assisted medical systems are diagnosing diseases with high accuracy, enabling early treatment and significantly improved outcomes. In areas with limited access to doctors, AI solutions can prove crucial, making healthcare more accessible across all sections of society. AI also accelerates drug discovery, shortening development time and cost. For example, systems like BenevolentAI search extensive datasets for possible treatments, speeding the release of life-saving medicines.
The education sector also benefits from AI’s transformative capabilities. AI-personalized learning platforms tailor content to students at every level, closing learning gaps and making education more accessible. With climate change and natural disasters on the rise, AI is a powerful asset in forecasting and responding to disasters. Humanitarian groups and NGOs use AI to scan satellite imagery and meteorological data to forecast floods, earthquakes, and wildfires. AI communication platforms offer real-time information in times of crisis, helping coordinate faster relief efforts. In farming, AI programs are used to track soil health, forecast harvests, and optimize water and fertilizer use under varying conditions.
While these innovations hold great promise, AI deployment must be handled with utmost care. Confronting algorithmic bias, protecting data confidentiality, and ensuring transparency are all key to upholding public trust. Programs like UNICEF’s Responsible Data for Children initiative promote responsible data handling and accountability in the use of AI (International Telecommunication Union, 2025).
Likewise, partnerships between humanitarian actors and the private sector provide valuable resources for AI design and research, but they must be grounded in ethical considerations so that individuals are not exploited or denied their fair share of the benefits. AI’s environmental impact also warrants careful consideration. The processing power needed to train massive AI models generates substantial carbon emissions. Businesses and scientists are looking for ways to reduce this impact through energy-efficient algorithms and data centers powered by renewable energy. Sustainable AI development is a priority if the full potential of technological advancement is to be achieved without harming the planet.
AI for the Common Good: The Way Forward
To ensure that AI serves the common good, a balanced solution is necessary. Policymakers, corporations, technologists, and society must collaborate to establish regulations and safe practices that promote transparency, fairness, and accountability.
Investment in AI literacy will empower people and organizations to make informed decisions about the uses and limits of AI. Encouraging open research and cross-sector collaboration can also promote responsible AI innovation. Initiatives such as UNICEF’s Responsible Data for Children set a benchmark for ethical data management and the fair sharing of AI’s benefits.
Given environmental concerns, prioritizing energy-efficient models, sustainable data management, and renewable energy sources is equally necessary to limit AI’s carbon footprint. Human-centered design must be the backbone of AI development, where technology serves social purposes without violating individual rights.
The AI future is not a zero-sum game. Through ethical innovation and participatory governance, AI can be an extraordinary engine for progress. With careful treatment of its challenges, AI can be a collective good, catalysing positive change across sectors and societies.

Useful links:
- Link up with Biswayan Das on LinkedIn
- Read a related article: A Leader’s Blueprint for Navigating AI and the Future of Work
- Discover IIM Bangalore
- Apply for the IIMB MBA.
Learn more about the Council on Business & Society
The Council on Business & Society (CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business, society, and planet, including the dimensions of sustainability, diversity, social impact, social enterprise, employee wellbeing, ethical finance, ethical leadership, and the role responsible business plays in contributing to the common good.
- Follow the CoBS on LinkedIn
- Download magazines and learning content from the CoBS website downloads page.
Member schools of the Council on Business & Society.
- ESSEC Business School, France, Singapore, Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Indian Institute of Management Bangalore, India
- Keio Business School, Japan
- Monash Business School, Australia, Malaysia, Indonesia
- Olin Business School, USA
- Smith School of Business, Queen’s University, Canada
- Stellenbosch Business School, South Africa
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.

