
The launch of ChatGPT in November 2022 triggered both a massive wave of sign-ups and an equally massive outcry from academia. Aymeric Thiollet, ESSEC Business School, Runner-up in the CoBS 2023 Student CSR Article Competition, explores a win-win scenario for AI, its student users, universities, and instructors.
ChatGPT: A paradigm shift for universities? by Aymeric Thiollet.
Wonderment. This is perhaps the first reaction one has when using ChatGPT for the first time. The response to this artificial intelligence (AI) conversational chatbot, developed by the American company OpenAI and publicly launched in November 2022, was spectacular to say the least: in just five days, ChatGPT surpassed one million users worldwide, a world record for an online service launch (Statista, 2023). This massive adoption was then met with an equally massive outcry. How do we think about and evaluate its impact on society? How will our daily lives be affected?

At our own level, within the student community, the question of ChatGPT’s academic impact has become essential. In this article we will therefore explore several questions: How is ChatGPT impacting higher education today? Should this tool have a place in our universities? What approach could be adopted to frame its use? But first, let’s go back in time to understand the origins and nature of ChatGPT…
An “old” technology in a tool now made conversation-friendly
The term “old” is, of course, relative here, but it points to an interesting reality: the technology behind the first version of ChatGPT, GPT-3 (a family of large language models), has been around since 2020. So why did we only start talking about it now? Seemingly because OpenAI’s developers, through ChatGPT, achieved a particular feat: making this AI-powered tool accessible, credible, and useful to the general public. In essence, ChatGPT is an online conversational agent capable of generating answers to questions, completing sentences, translating texts, writing articles, and even holding conversations with humans. It can also synthesize texts according to a given set of constraints, such as tone, style, and topic.
Based on natural language processing, ChatGPT can understand human prompts and remember past interactions. What makes ChatGPT particularly interesting (and different from its predecessor, InstructGPT) is the conversational data used in its training process. Indeed, the chatbot required fine-tuning in order to meet the demands of human conversation. Here, its secret lies in RLHF (reinforcement learning from human feedback), i.e., a back and forth in which human trainers rank the chatbot’s answers so that the model learns to produce responses that are as human-friendly as possible (MIT Technology Review, 2023).
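For readers curious about what “ranking answers” means in practice, the short Python sketch below illustrates the core idea behind the reward-modelling step of RLHF: human rankings of candidate answers are turned into pairwise preferences, and a small model is trained so that preferred answers receive higher scores. It is a deliberately minimal toy, with made-up feature vectors and a linear “reward model” of my own invention; it bears no relation to OpenAI’s actual code, data, or model architecture.

# Toy illustration of the ranking idea behind RLHF (not OpenAI's code).
# Human labellers prefer one answer over another in each pair; we fit a
# tiny linear "reward model" so that preferred answers score higher,
# using a pairwise (Bradley-Terry) loss and plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-dimensional feature vectors for candidate answers
# (in a real system these would come from the language model itself).
preferred = rng.normal(loc=0.5, size=(50, 4))   # answers humans ranked higher
rejected = rng.normal(loc=-0.5, size=(50, 4))   # answers humans ranked lower

w = np.zeros(4)   # parameters of the toy reward model
lr = 0.1

for _ in range(200):
    # Reward scores for both answers in each preference pair.
    r_pref, r_rej = preferred @ w, rejected @ w
    # Model's probability that the human-preferred answer "wins".
    p = 1.0 / (1.0 + np.exp(-(r_pref - r_rej)))
    # Gradient of the negative log-likelihood of the human rankings.
    grad = -((1.0 - p)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad

# The trained reward model now scores human-preferred answers higher;
# in RLHF proper, the chatbot is then fine-tuned to maximise this score.
print("mean reward (preferred):", float((preferred @ w).mean()))
print("mean reward (rejected): ", float((rejected @ w).mean()))

In the real pipeline, of course, the reward model is itself a large neural network and the final fine-tuning step uses reinforcement learning (PPO in OpenAI’s published InstructGPT work) rather than the direct gradient descent shown here.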
Today, ChatGPT is available to the public in two versions: a free one (powered by GPT-3.5) and a $20-per-month one (ChatGPT Plus, powered by GPT-4). GPT-4 is essentially an upgraded version of GPT-3.5 with better creativity, safety and factuality, and the added ability to accept images as inputs. The most impressive advances of GPT-4 over previous versions are its ability to handle many more languages and to solve more complex problems. As of now, GPT-4 can pass the Bar exam or the Advanced Placement Biology exam (both demanding standardized tests) with remarkable success. All in all, with its grammatical, synthetic and conversational capabilities, but also its ability to code or generate creative content tailored to the user’s instructions, ChatGPT can undoubtedly be very useful in the academic context. Does this outstanding set of skills signal the perfect academic tool?
ChatGPT: A skilled but fallible chatbot

Before crowning ChatGPT as an all-powerful tool, capable of replacing both the student and the teacher, it is necessary to question its capacities. As an AI language model trained on web data, ChatGPT’s first obvious weakness is its cut-off date: the chatbot has no knowledge of events that occurred after September 2021. It also cannot search for references in real time, which makes it unsuitable for many academic uses.
If we want to go deeper and uncover ChatGPT’s hidden flaws, we need to understand exactly what kind of information the chatbot was trained on. As OpenAI remains vague on this point, let us look at an original source: the academic paper that introduced the GPT-3 language model, Language Models are Few-Shot Learners (Brown et al., 2020). It suggests that the training data behind ChatGPT comes from most or all of Wikipedia, from Reddit pages, and from billions of words excerpted from open-access books (others being protected by copyright law). From this, we can see that ChatGPT is based mainly on English texts which are not necessarily factual. This makes the possibility of algorithmic bias very significant, even when accounting for the chatbot’s fine-tuning and moderation by OpenAI.
We can safely assume the presence of social, cultural, and political biases in the vast corpus of texts behind ChatGPT, biases which can in turn expose users to monocultural and questionable points of view on sensitive topics. Another important issue, especially in the academic context, is plagiarism. Where do ChatGPT’s answers actually come from? As the chatbot runs on a probabilistic model with billions of parameters, is it even possible to trace their sources? OpenAI recognizes this inherent difficulty: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers”.
This appearance of truth without concrete evidence of veracity makes the use of ChatGPT genuinely hazardous, especially when it strays into hallucinations, i.e., confident responses from an AI that are not justified by its training data (Ji et al., 2022). In short, as things stand, the reliability of ChatGPT appears questionable and its educational use should be assessed with caution. Before attempting to do so, one question remains: what is the current impact of ChatGPT on students, teachers, and universities?
Between adoption and prohibition: the polarized use of a nascent tool

The arrival of ChatGPT marked the “end of high-school English”, said California high-school teacher and author Daniel Herman in The Atlantic (2022). Given the recent performance of GPT-4, could it also mark the more dizzying end of higher education as a whole? In January 2023, Sciences Po Paris, a prestigious French political studies institute, issued a statement strictly prohibiting students from using ChatGPT to produce written or oral work, under penalty of expulsion from the institution and even from higher education (Sciences Po, 2023). Similar measures have been seen elsewhere: at the University of Hong Kong (Yau and Chan, 2023) and in New York City schools (GovTech, 2023).
Plagiarism, academic integrity, student learning… today many universities are still assessing the negative impacts of ChatGPT and its appropriate use in academic settings. On the one hand, students can now effortlessly accomplish many of the tasks teachers currently use to evaluate them. On the other hand, teachers are mostly left without a clear framework for handling this pressing situation. What can be done? Should we simply ban the use of ChatGPT altogether, everywhere? Italy recently did so at a national level, temporarily banning ChatGPT over data privacy concerns (Financial Times, 2023). For his part, Jean-Noël Barrot, the French Minister for Digital Affairs, recently described such an approach as a bad solution for France (Le Figaro, 2023).
As things stand, Pandora’s box appears to have been opened, and ChatGPT is likely to remain just a click away from any student. Preventing its use at home would be like preventing the use of the internet: an illusory approach. Faced with this unprecedented pedagogical challenge, some might suggest AI-generated text detectors as a solution, but even the classifier offered by OpenAI correctly identifies only 26% of AI-written text (OpenAI, 2023), and students can use online paraphrasers to disguise their output. Given that 57% of the new generation would like to learn with AI in the coming years (GoStudent, 2023), it seems clear that ChatGPT is here to stay. In view of its increasing adoption among students and the difficulty of detecting AI-generated text, what would an appropriate response from higher education institutions and teachers look like? Is a compromise on the use of ChatGPT possible?
Towards a middle ground: The Teacher’s ChatGPT Use Framework

I believe that teachers are best positioned to develop an approach that promotes their students’ learning, so I would like to take their situation as a starting point for thinking about how academia can adapt to ChatGPT. What would make this article truly useful would be to offer them proven methods for effectively managing ChatGPT in their classrooms.
In my own small way, I can only try to offer professors a framework for thinking about the use of ChatGPT in their classes, based on a few significant assumptions (see diagram). The main one is that higher education is about instilling in students analytical reasoning (breaking a problem down into smaller parts to find solutions) and critical thinking (making logical, well-reasoned judgements) for their future lives as citizens and employees. By extension, my second assumption is that every subject taught in higher education can be broken down into tasks or modes of assessment that mobilize these two skills. It is through these two lenses that I propose professors reflect on their courses and allow or disallow the controlled use of ChatGPT by their students.

To better understand this framework, let us consider two fairly different course examples: computer science and philosophy. It seems reasonable to say that these two courses mobilize our two key competences in contrasting ways: from a student’s point of view, the computer science course will draw mostly on analytical reasoning and less on critical thinking, while the opposite is likely in philosophy. My observation is that ChatGPT, with its ability to structure content and provide solutions, can be very useful for developing a student’s analytical reasoning. However, it can only be an assistant and cannot substitute for the student’s critical thinking abilities.
For this reason, in our example, I would encourage its controlled use for analytical tasks (e.g., a debugging exercise within a coding challenge in a computer science class) but not for critical thinking tasks (e.g., the writing of philosophical essays). This distinction between tasks is for each teacher to make, as every class certainly involves both skills in some way: some philosophy teachers may feel that drafting an outline for an essay could benefit from ChatGPT’s help, while some computer science teachers might want their students to step away from their screens and reflect on the best algorithm to pick for a given problem. Of course, this may mean that some assignments should now only be done in class, and not at home, in order to control the use of ChatGPT.
In the end, the aim of this framework is to encourage a profound reconsideration of what it means to teach a class today and of how student learning could, or could not, benefit from the aid of ChatGPT. A rule of thumb could be to remember that ChatGPT should be at most an assistant, and never a substitute for the student’s critical thinking or for the teacher’s instruction.
ChatGPT and Universities: Next steps
In this article we have explored the nature of ChatGPT, its impact on the university system, and an approach to moderating its use in the classroom. ChatGPT is a very powerful tool that could greatly assist the academic world with its ability to provide and structure information efficiently; however, it carries inherent biases and can encourage academic dishonesty and hinder student learning.
With this in mind, it is important for universities to train their lecturers so that they have a clear understanding of how ChatGPT could (or could not) be used in their courses. The technology behind GPT-4 is still in its infancy, and it is safe to assume that future versions of ChatGPT will be ever faster, more creative, more factual, more convincing, and harder to distinguish from human writing. Judicious use of ChatGPT could lead to real pedagogical progress, with students learning in ways better tailored to their learning styles.
However, ChatGPT will remain a tool: it won’t replace the human touch of a teacher, and the nature of its impact will always depend on its creators and users. So let us see the arrival of ChatGPT as an opportunity to open up a wider dialogue about academic ethics and learning with students as they enter university.
Useful links:
- Link up with Aymeric Thiollet on LinkedIn
- Read this article and others in the special summer issue of Global Voice magazine
- Read a related article: AI: Resurgence in the art of rhetoric and composition?
- Discover ESSEC Business School and apply for the Master’s in Strategy & Management of International Business (SMIB).
Learn more about the Council on Business & Society
The Council on Business & Society (The CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business and society, including sustainability, diversity, ethical leadership and the role responsible business has to play in contributing to the common good.
Member schools are all “Triple Crown” accredited (AACSB, EQUIS and AMBA) and leaders in their respective countries.
- ESSEC Business School, France, Singapore, Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Keio Business School, Japan
- Monash Business School, Australia, Malaysia, Indonesia
- Olin Business School, USA
- Smith School of Business, Canada
- Stellenbosch Business School, South Africa
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.
