
From Plato’s timeless ideals to today’s AI-driven realities, mathematics continues to shape how we perceive perfection and ourselves. Carlos Koreivo, BBA student at FGV EAESP, warns that while AI can bring us closer to flawless execution, it also risks distorting our sense of identity and creativity if left unchecked. As we navigate this accelerating evolution, the challenge is not just to harness AI’s power, but to cultivate wisdom and balance in the human mind.
From Plato to AI: How mathematics can shape human perception by Carlos Koreivo.

Human evolution is far from a linear process. Technological advancement has shown that growth resembles an exponential curve far more than it does a straight line. Given this exponential nature, humans now have less time to adapt to change than ever before, which may lead to unforeseen side effects of progress.
AI sits at the forefront of discussions about unwanted consequences: its ability to aid in the execution of ideas has blinded people to criticism of the step preceding execution, that is, ideation. That said, what is AI’s relationship with ideas? How does it affect the population’s worldview and self-perception, from the broadest of lenses down to the individual?
Furthermore, and just as importantly, what are the factors that allow AI to function as it does, and how can developers and users leverage them to avoid disastrous societal outcomes and instead benefit future generations through its implementation in day-to-day activities?
Plato’s interpretation of how humans perceive the world
Mathematical Platonism is a concept rooted in Plato’s The Republic (Plato. (2024). The Republic. Start Classics.) which, among other ideas, proposes that there are two worlds: the physical, inhabited by all living beings, ever-changing and imperfect in nature; and the abstract, which represents the ideal, the immutable and the perfect. The latter is a world dictated by mathematics, cold and unfeeling towards external influences, impervious to the alteration of space or the passing of time.
This contrast can be exemplified by one’s idea of a sphere. A football is, technically, a sphere; however, due to numerous physical phenomena, it can never fully match, let alone remain, the mathematical concept of a sphere derived from its geometric definition. Thus, the mathematical representation of a sphere is more faithful to the perfect idea of a sphere than any physical representation of that same shape.
Stylistic objectivity through the use of AI
Renowned artists through the ages developed artistic styles of their own, previously thought inimitable, based on their perception of the real world and the events and resources around them. With the recent evolution of AI, however, this inimitability has been challenged. One such example comes from a recent update to ChatGPT and its ability to transform photographs into drawings in the style of famous director Hayao Miyazaki’s Studio Ghibli, difficult for the average eye to distinguish from the original work. Mr. Miyazaki himself, on being shown an animation made with the assistance of AI, said that he strongly felt this type of art was an insult to life itself, and that humans were losing faith in themselves.
The method through which ChatGPT creates these images is unknown even to itself: asked to do so, the AI could not properly explain how it reached the “conclusion” that the image should look the way it does. This raises the possibility that there is an implicit archetype that can be translated into a mathematical structure and that, even though the style was originally conceived in the real world, its perfect abstract equivalent, as theorised by Plato, still exists only in the realm of ideas.
Through this same process, one could find the “mathematical equation”, a standard or pattern, for any style, artistic or otherwise. This follows because a Cubist work, for example, must resemble Cubism in every way, and for everyone, not just a single individual or select group. Thus, there must necessarily exist a single formula for each archetype which, when applied to any input, image or prompt, transforms that input, drawing on Plato’s abstract world of perfect ideas and archetypes, into its closest physical-world equivalent. Similarly, an archetypical “perfect man” exists, built from the multitude of posts, pictures and comments available online, all fed into an AI’s database.
The human relationship with perfection

It is said that perfection is impossible to achieve, a claim corroborated by Plato’s theory. AI, as we have established, allows humans to simulate perfection far more closely than ever before: apps like Grammarly correct spelling mistakes so that one may appear more literate and unfailing, while AI-powered filters can “correct” any imperfections on one’s face, body and even voice in real time, based on the abstract, perfect concept encoded in the mathematical formula the AI has derived from the numerous sources in its database.
This phenomenon will only grow along with AI, its adoption widening as the technology reaches more and more people. It could positively impact education, for example, just as personalised tutoring has proven more effective than a one-size-fits-all approach to teaching. Yet the adverse effects of maintaining a virtual, perfect persona could be catastrophic, especially for young minds, as people use AI to portray themselves as their abstract perfect versions, stripped of their inborn or developmental flaws and wholly characterised by an AI’s archetypical representation of what the perfect human being should be. The cognitive dissonance generated by the mind trying to reconcile this perfect portrayal, produced, inevitably, by a formula based on group-think, with one’s actual self could damage the development of a healthy relationship with oneself.
AI can also become a weapon of sorts by exploiting the concept of memes, coined by Richard Dawkins in The Selfish Gene (Dawkins, R. (2006). The Selfish Gene. Oxford University Press.). An ill-intentioned person, for example, could use AI to generate a meme close to the perfect concept of a meme, that is, one that spreads like wildfire, “infecting” the minds of many with an idea that disrupts society, far more easily than that person could with their own creativity alone, especially in a world where ideation increasingly comes from AIs instead of humans.
The psychological and philosophical outcomes of AI-assisted evolution
Carl Jung is often quoted as saying that “People don’t have ideas. Ideas have people”, suggesting that the conceptual and the abstract are the drivers of action; that is, one is in service of one’s own thoughts, rather than those thoughts being a product of one’s intentions.
This maxim was furthered by Nick Land and the Cybernetic Culture Research Unit (CCRU), a cultural-theory group formed in 1995 at the University of Warwick, England, through their coinage of the term “hyperstition”: the belief that a narrative, when spread and believed en masse, will eventually come to be, realised through a feedback loop of speculation and action that furthers it. The creation of artificial intelligence is, in a way, a product of hyperstition, given that it was discussed and sought after long before it became usable as it is today.
And now AI is again involved in hyperstition, not as the narrative but, this time, as part of the feedback loop that feeds it. In other words, AI enables the narrative of archetypical perfection, which can be both achieved and improved through its use, since its own outputs are inserted back into its database, consolidating the formula that reaches towards Plato’s abstract dimension of ideas.
How can an AI’s potential be leveraged to produce favourable outcomes to humanity?

With the inner workings of AI philosophically dissected, we may more accurately diagnose the current status quo. As it stands, AI is a tool without a will of its own; through its usage, however, it can shape the thoughts and, therefore, the actions of people. In itself, and like any tool, AI is amoral.
Beyond the rules implemented by its creators, such as refusing to answer questions that might facilitate crimes (rules that can, admittedly, be circumvented with certain prompts), AI places no restrictions on what can be asked of it. These rules are nevertheless subject to the political biases of the host country, or even of the corporate creators themselves, which may well shape the formula the AI uses to produce outputs, making it a very powerful indoctrination tool in that regard.
While this provides nigh-unlimited freedom to those who would use it, the adverse effects of untrained or unrestrained usage are apparent. One such effect is ideation stunting, especially in younger people, who come to rely on the technology not only to shape the execution of their activities but also to formulate what should be done in the first place. As discussed previously, this reliance may even extend to their own personalities, seriously hampering their natural development and possibly leading to physical and psychological consequences noticed only when it is too late to alter them.
On the other hand, “healthy” usage of AI would, ideally, expedite bottleneck processes and increase overall productivity, allowing people to spend more of their time on the creative process, formulating their own perceptions of the world and being guided by their own thoughts, in contrast to Nick Land’s description of the drone-like state of a person caught up in hyperstition, furthering an idea for the idea’s own sake rather than for a beneficial end-goal.
Incentivising a healthy relationship with AI inevitably falls on the shoulders of State regulators, who should focus primarily on the younger generations. Soft enforcement has proven largely ineffective: students blatantly ignore professors’ pleas to refrain from abusing AI, forcing the latter either to capitulate and promote a sort of “symbiotic” relationship between lessons and AI usage, or to work around it entirely.
Relying on the companies that develop these AIs would be counterintuitive, as it would go against their interest to block a portion of the population from using their services and feeding the AI’s database with new information that could train it for future interactions, not to mention the financial outlook, as AI developers and investors alike see the product as a key driver for the creation of new markets. Thus, the only viable solution is hard enforcement: using the power of the State to create laws that prevent or disincentivise AI usage in schools. Yet AI has grown able to fool even AI-checking tools, so actual enforcement, that is, applying the law, could prove impossible.
Finally, one last suggestion would be to educate rather than punish. Positive reinforcement may be more effective than the current way of dealing with AI usage. Using an AI is no different from driving a car: both are tools that, when badly used, can lead to serious negative consequences. To that end, it would be in the interest of the State, the companies and the individuals alike for AI usage to be incentivised, with the added requirement that one take classes early in life explaining the benefits and dangers of AI, much like in driving school.
AI is here to stay, and denying its usefulness is tantamount to ignoring technological and, with it, human evolution. Embracing evolution and adapting to it is the path humanity has always taken, and we would be foolish to change now simply because the pace of progress has increased. Rather, we should use technology to increase our ability to keep up with technology itself.

Useful links:
- Link up with Carlos Koreivo on LinkedIn
- Read a related article: Is AI creating incompetent experts?
- Download this and other finalist student articles in the special issue Global Voice magazine #32
- Discover FGV EAESP, Brazil
- Apply for the FGV EAESP OneMBA.
Learn more about the Council on Business & Society
The Council on Business & Society (CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business, society and planet, including the dimensions of sustainability, diversity, social impact, social enterprise, employee wellbeing, ethical finance, ethical leadership and the role responsible business has to play in contributing to the common good.
- Follow the CoBS on LinkedIn
- Download magazines and learning content from the CoBS website downloads page.
Member schools of the Council on Business & Society.
- ESSEC Business School, France, Singapore, Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Indian Institute of Management Bangalore, India
- Keio Business School, Japan
- Monash Business School, Australia, Malaysia, Indonesia
- Olin Business School, USA
- Smith School of Business, Queen’s University, Canada
- Stellenbosch Business School, South Africa
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.

