
As AI grows smarter, the real test is not keeping up with machines but staying fully human. Mich Herbst, MBA participant at Stellenbosch Business School, argues that in a world shaped by algorithms, we must adapt not at the speed of change but at its rhythm. With discernment, emotional intelligence and ethical action, we can build a future that protects the common good by leading with what makes us human.
HUMAN FIRST: NAVIGATING AI WITH SOUL, SACRED, AND “GRAICE”, by Mich Herbst.

Imagine an AI that knows your mood swings better than your best friend, tracks your focus levels better than your boss, and predicts burnout long before your therapist. Now imagine it’s owned by a profit-maximising tech organisation. As artificial intelligence grows in capability and infiltrates every corner of our lives, the question is not only how we interact with AI for the common good, but how we adapt in order to do so.
I am a futurist
That does not mean I predict flying cars and robot wars. It means I study change to help humans best align with what could be on the horizon. I use models and tools to map the possible futures, but I always come back to humans and how we relate to our environment.
When I think about how humans could interact with AI to ensure the common good, I don’t go straight to algorithms or policy papers. I go back to the early 2000s, to my favourite show at the time: Grey’s Anatomy. In Season 2, Cristina says to Meredith, “You’re my person.” The phrase was an instant viral hit. It spoke to a general need in people for a powerful, not-necessarily-romantic belonging that said: “Here, I am safe. Here, I am seen. Here, I matter”.
How We Begin
In the past, our brains knew what to expect from ‘our person’. This intuitive trust developed from millennia of evolution and social wiring. Today, the world is changing fast, with AI taking over mundane tasks and operating quietly in the background. Tomorrow it could take over the social roles we assign to ourselves and to those we trust the most.
If we are not neurologically, morally and ethically ready to hand over that much influence, how do we adapt to ensure the common good of humans remains central to our actions?
Sometimes, when we connect with ‘our person’, our Heart Rate Variability (HRV) can unconsciously and spontaneously synchronise. It doesn’t mean our hearts are beating at the same speed, but that they share the same rhythm between beats. HRV is a biological marker of deep emotional resonance and empathic attunement. In the same way, humans could start matching the rhythm of change. Not its speed, because it need not be about running faster, but its rhythm: becoming more aware, attuned, and connected.
In a time of AI we don’t need to become like machines. We need to become better humans. This article proposes that to find that rhythm we need to be able to analyse with SOUL, build a SACRED capability, and act with GRAICE (pronounced “grace”).
How We Discern: In the age of intelligence, we need a new learning model
Not that long ago, learning was restricted to humans and books and classrooms. The amount of information available to us was a drop in the ocean compared to today, and what we were taught was mostly beyond reproach. The system worked well, until incremental change made the world faster than the classroom, more volatile than a curriculum and more complex than black-and-white thinking.
More than fifty years ago, futurist Alvin Toffler foresaw that the illiterate of the 21st century “will not be those who cannot read or write, but those who cannot learn, unlearn, and relearn”. In a changed world we have to unlearn both ‘what’ and ‘how’ we learned in the past. The quantity, source, validity and intention of information today have become overwhelming, meaning an even more urgent skill is also required: that of discernment.
Learn. Unlearn. Discern. Relearn.
Discernment is the capacity to think critically about what we learn, who we learn it from, and why it matters. It means paying attention to the SOUL of the information we engage with:
- Source: Where does it come from? Is the source truly credible?
- Outcome: What impact might it have?
- Underlying Agenda: What is the intention? Who benefits if I believe this information?
- Logic or Lens: Does it make sense? What worldview does it reflect? Is it biased?
Learning shapes our neural reality – what we believe to be true, ethical and moral. Without discernment, we run not only the risk of passive absorption, but of living according to whatever we absorb. Discernment prevents algorithms from defining our beliefs for us, and allows us to remain our ‘own person’ in the noise.
Who We Become: We need a different skillset for the future

Not only does the quality of the information matter, but so does what we (re)learn. Global organisations such as the United Nations and the World Economic Forum highlight a core set of skills for thriving in a technologically changing world. For me, these are the SACRED skills that are fundamental to our humanity in the age of AI. They are:
- Systems thinking: In a complex world, systems thinking helps us see the bigger picture. We see beyond symptoms and consider root causes, and move from reaction to response.
- Adaptability: Adaptability is the foundational human operating system for responding to constant change. It helps us bend without breaking.
- Creativity: Our human fingerprint that allows us to imagine, play, and innovate.
- Resilience: Constant disruption is our reality. Resilience is what keeps us bouncing back, time after time.
- Emotional intelligence: Machines are getting smarter. We need to become kinder, grounded in empathy, self-awareness and intentional integrity.
- Digital literacy: This skill allows us to critically, effectively and ethically engage with technology.
Once labelled soft skills, they now represent the holy grail of human capability. In an artificial world, building a SACRED skillset means we are ready when called upon to be ‘someone’s person’.
How We Act: We need to return with GRAICE
Discernment and relearning shape our internal world, but how we return to the world and act shapes our shared human reality.
Learn. Unlearn. Discern. Relearn. Return.
The lines between creator and user, between human and machine, are blurring. Regardless of whether we are interacting with systems or building them, we have a responsibility towards how the future plays out. For this, we need a north star.
Inspired by UNICEF and Lego’s RITEC-8 framework (designed to protect children’s well-being in digital gaming environments), I suggest a broader, human-centred evolution of its core principles. I call it GRAICE: a set of six interconnected pillars for responsible interaction in a world of accelerating intelligence. GRAICE means:
- Growth: AI should not replace humans, but support their growth. We should prioritise the creation and use of systems that support lifelong learning and meaningful upskilling. Humans learn best through trial and error. Let’s build systems that honour this rhythm.
- Relationships: Technology and AI have the potential to isolate and polarise. Interactions with intelligent systems should drive human connection and collaboration, and foster belonging and empathy.
- Agency: People should have autonomy to influence their own lives and the societies they live in. Digital literacy plays a role here, but real control in systems, not just the illusion of choice, is what empowers agency.
- Integrity: In a digital space integrity refers to the alignment between values and actions. When we value humans first, it means protecting the physical, psychological, and emotional safety of people. Personal information should be protected, data use should be ethical and transparent, and systems should not be designed to manipulate or covertly influence people.
- Creativity: The risk of generative AI replacing human creativity is real. Protecting and supporting this dimension means supporting human meaning, joy, and innovation. Aim to design and use systems that augment human capability rather than automate it. Creativity values the journey; efficiency only chases the destination.
- Equity: AI systems show a marked possibility for bias. A truly human approach means designing and supporting systems that are fair, accessible, and sensitive to all users. Be aware of algorithmic exclusion, and actively involve diverse views as a sanity check.
Being: We discern. We relearn. We return. And then we show up. Fully human.
AI is steadily infiltrating the world. It can talk for us, write for us, and create for us. It can even comfort us. AI brings speed, scale, memory, pattern recognition, and instant information.
Being ‘someone’s person’ means being present, involved, trustworthy and deeply attuned to other humans. It means bringing context, emotion and awareness into every action.
Combining these dimensions offers us a chance at the best-case scenario: progress for the common good. We choose to adapt to the rhythm of change. We show up, all the time. We choose to look for the SOUL in information. We develop a SACRED skillset, and apply it with GRAICE.
In the end, ensuring that we interact with AI for the common good is up to every one of us.
Are you ready to be ‘humanity’s person’?
References:
- Doty, J. R. (2024). Mind magic: The neuroscience of manifestation and how it changes everything. Avery.
- Future Today Strategy Group. (2025). 2025 Tech Trends Report: 18th Edition. https://www.ftsg.com/trends
- Rhimes, S., & Grossman, P. (2005). Raindrops keep falling on my head (Season 2, Episode 1). Grey’s Anatomy. American Broadcasting Company.
- Toffler, A. (1970). Future shock. Random House.
- UNESCO. (2022). Reimagining our futures together: A new social contract for education. https://unesdoc.unesco.org/ark:/48223/pf0000379707
- UNICEF. (2024, November 19). UNICEF unveils design toolkit for digital creators. UNICEF. https://www.unicef.org/innocenti/press-releases/unicef-unveils-design-toolkit-digital-creators
- World Economic Forum. (2023). Future of jobs report 2023. https://www.weforum.org/publications/the-future-of-jobs-report-2023/

Useful links:
- Link up with Mich Herbst on LinkedIn
- Read a related article: How do humans interact with AI to ensure the common good?
- Discover Stellenbosch Business School
- Apply for the Stellenbosch MBA.
Learn more about the Council on Business & Society
The Council on Business & Society (CoBS), visionary in its conception and purpose, was created in 2011. It is dedicated to promoting responsible leadership and tackling issues at the crossroads of business, society, and planet, including the dimensions of sustainability, diversity, social impact, social enterprise, employee wellbeing, ethical finance, ethical leadership, and the role responsible business has to play in contributing to the common good.
- Follow the CoBS on LinkedIn
- Download magazines and learning content from the CoBS website downloads page.
Member schools of the Council on Business & Society.
- ESSEC Business School, France, Singapore, Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Indian Institute of Management Bangalore, India
- Keio Business School, Japan
- Monash Business School, Australia, Malaysia, Indonesia
- Olin Business School, USA
- Smith School of Business, Queen’s University, Canada
- Stellenbosch Business School, South Africa
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.

