Imitation is the Highest Form of Profitability: AI versus Art

Imitation is the Highest Form of Profitability: AI versus Art. AI may be redefining the boundaries of artistic creation, but without consent and compensation, it risks eroding the very human expression it seeks to emulate. Alex Rogers, MBA participant at Smith School of Business, argues that the real measure of progress is not how convincingly machines can imitate creativity, but how ethically we choose to wield their power. As we move from brushstrokes to algorithms, it’s not just originality we must preserve but ownership.


On Tuesday, March 25, 2025, OpenAI released its new AI image generator. The tool equips all ChatGPT users with the ability to produce any form of digital image, from headshots to logos, using a simple text prompt or an externally supplied reference photo. Within the week, a trend rapidly emerged: users were excitedly generating and sharing selfies, memes, and more in the signature artistic stylings of Studio Ghibli, the Tokyo-based animation studio behind Spirited Away and Ponyo.

These users are quick to relay their process to friends and followers. One Reddit account shared the prompt she used with the more than 10 million members of the r/ChatGPT subreddit: “restyle image in studio ghibli style, keep all the details”. There is no need for users to include a reference image of Studio Ghibli’s art style – OpenAI scrapes the internet for all publicly available content and, more recently, sources supplied artistic works and data sets to further train its Large Language Models (LLMs).


By March 27th, just two days after the launch, OpenAI CEO Sam Altman took to X (formerly Twitter) to share that “[their] GPUs are melting” from the volume of demand for image generation. He made sure to note that ChatGPT’s free tier would continue allowing users to generate three images per day, with paid subscriptions extending usage limits. Meanwhile, not a cent of this revenue is being returned to the artists originating this viral demand – artists who never approved OpenAI’s use of their life’s work as LLM training material in the first place.

There is a growing chorus of voices raising serious concern over the ethical implications involved. How are artists to continue earning a livelihood from their art form when they can be imitated by the masses overnight? What protections do they have to prevent this, if any? Most dauntingly, what does this practice signal for the future of all-consuming generative AI and our access to these tools?

It is not only OpenAI bringing these new GenAI capabilities to mass markets, nor is this the first notable controversy in LLM training. In July 2023, a group of US authors, including Sarah Silverman, George R. R. Martin, and Richard Kadrey, filed a lawsuit accusing Meta of copyright infringement for using their written works to train its AI models without consent or compensation. The group claims that CEO Mark Zuckerberg approved the use of pirated literary works from online “shadow libraries” like Library Genesis. Meta has since been sued for the same practice by author groups in the United Kingdom, Australia, and France – yet the company does not deny these accusations. Rather, its legal teams are well-versed in the legislative loopholes that allow the company to inform its algorithms with any digital content, so long as the output does not replicate it. Meta and its peers are unapologetic; they see this as a natural progression and a necessity for improving their products’ output to the benefit of their users (and bottom lines).

Earlier versions of marketed GenAI products, circa 2020, could be described as having been “trained on the internet”. These LLMs were limited to scraping public digital content to inform their responses to user prompts, which were consequently riddled with accuracy issues and restricted in scope. In the years since, developers have expanded training sources to improve output quality. Reinforcement Learning from Human Feedback (RLHF) uses human interaction to train LLMs through text-based conversation, encouraging adherence to user instruction and appropriate language. In line with this strategy, developers aggregate the data collected from their users’ engagement to continuously refine their algorithms, with ChatGPT reportedly generating over 10 billion user data points per day.

Most pressingly, developers have identified non-public digital content as the most valuable input for future model training. Whether directly or through contractors, these companies commonly make agreements with third parties to feed their LLMs massive volumes of emails, proprietary manuals, chat logs, phone recordings, and internal reports – anything it takes to expand the product’s pool of reference points.

With this track record, it is not surprising that OpenAI, Meta, and their peers find no issue with accessing pirated literary and other artistic works to improve the effectiveness and monetary value of their marketed GenAI products. The surprise is reserved for the artists, who discover their property is being used as free training material either through public scandal or through visual evidence of their work being replicated across the internet. They are left alone to wonder how this could ever hold up in court.


Meta’s legal defense against accusations of copyright infringement relies firmly on the “fair use” doctrine under U.S. law, arguing that its use of published literary works in training its LLMs is transformative of the original content rather than replicative. The Kadrey-fronted lawsuit counters that Meta uses these books, which company representatives have described as “vital” to their training models, to improve “their [product’s] expressive output – the very subject matter copyright law protects.” While these lawsuits are ongoing, many have questioned whether our legal systems and current definitions of intellectual property rights are equipped to defend and protect our common good in the age of AI.

One glaring example of our lagging legal system is that AI-generated works are not eligible for any form of copyright protection, on the basis of this output “lacking meaningful, human authorship”. These blind spots in our laws effectively allow a creative’s body of work to be used for profit by a third party without consent, credit, or compensation. To contextualize the profitability of this product-development strategy: OpenAI more than doubled its revenue in 2024 to over USD 4Bn following the global launch of its retrained GPT-4o, and it expects revenue to triple to USD 12.7Bn in 2025, driven by improvements to its expressive and visual generative capabilities.

Carys Craig explores this ethical and legal crisis in The AI-Copyright Trap. She discusses whether AI systems deserve copyright protection over generated work and, if so, who should own those rights. She affirms that granting AI systems legal authorship over their generated works would diminish the value of human creativity, and that creatives whose works are used to train these systems should be compensated through licensing and shared legal ownership of referenced LLM output. This proposed model is a far cry from the current landscape that AI developers are looking to preserve, though progress is underway.

It is likely we will see new legislation and revised legal interpretations introduced to combat this threat of fair use ambiguity, as we have seen through past technological revolutions. Napster, the controversial pioneer of audio streaming, infamously allowed its users to download unlimited MP3s of their favourite songs with zero compensation to the artists or their labels. The landmark lawsuits filed against Napster helped shape the enforcement of the Digital Millennium Copyright Act, which had been introduced to address evolving copyright issues in the digital age. While in retrospect this was a clear-cut case of unlicensed distribution, there was still significant conflict in applying copyright law in that new context.

The many lawsuits filed by global coalitions of news organizations, authors, and artists against OpenAI and its peers could act as a similar catalyst for the continued evolution of copyright law as it applies to LLM training and output – though we are in the earliest stages of proceedings. Napster and its contemporaries cost the US music industry alone an estimated USD 12Bn in revenue; the damage of unchecked GenAI models to creative livelihoods is already happening.

The common good refers to the collective well-being and shared benefits that all members of a society can enjoy. To some, the common good is best served through expanding public access to information and technology. In 2023, Microsoft stated that it aims “to democratize artificial intelligence, to take it from the ivory towers and make it accessible for all,” while Stability AI affirmed that its product “empower[s] billions of people to create stunning art in seconds”. These goals are founded on altruistic principle, though in practice they threaten to jeopardize the benefits that all human beings derive from creatives owning the rights to their art.

GenAI may be here to stay, but uncredited, costless leverage of otherwise protected property must not be. It is difficult to quantify the impact of this technology discouraging artists and authors from engaging creatively. It is not that artists will cease to exist, but that corporate demand for creative works will accept “good enough” output from AI at lower cost, or that an oversaturated content market will make it difficult for any artist to gain recognition.

This argument is not made to futilely oppose technological advancements that improve efficiency. It instead pushes hard against the growing notion that creations belong to everybody once they are shared digitally, or that any technology could ever replace the creativity of a human mind – perhaps our most important asset to protect in defense of the common good.

If we accept this shameless theft of creative works as precedent, what does that mean for a nearby future in which our visual likeness is used to create new faces for a stranger’s GenAI video output without our consent? This is no far-off dystopia; these products are being developed today – and they’re using your subscription dollars to fund them, and this article to train them.

To protect our shared interests and creative communities, we must take collective action to address the antiquated laws and moral grey areas enabling LLM developers’ open season on artistic works. Each of us can make choices today to align our humanity with our interactions with AI systems.

AI-enablement is on track to touch every facet of our lives and will likely continue outpacing protective legislation. Taking an active stand on data privacy infringement is not reserved for famous authors and prolific artists. If you are a ChatGPT user, OpenAI offers a setting that excludes your own AI engagements and shared files from being used in LLM training – though the transparency of this process ends there. Next, be more thoughtful than ever about what you post on the internet; the risk involved has surpassed the possibility of your boss finding your Instagram. Finally, consider using opt-out tools such as HaveIBeenTrained.com and including language such as “Do Not Train” in website footers or digital asset metadata as pre-emptive legal defense.
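For site owners, such notices can be paired with crawler directives. As one illustrative sketch – OpenAI and Common Crawl publicly document their crawler user agents (GPTBot and CCBot, respectively), though each company’s compliance terms should be checked in its current documentation – a robots.txt file can ask training crawlers not to ingest a site:

```text
# robots.txt — request that known AI training crawlers skip this site
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

This is a request rather than an enforcement mechanism: it covers only crawlers that identify themselves and choose to honour the exclusion, which is precisely why the article’s call for stronger legal protections matters.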

Media literacy continues to fall as consumption grows – and the algorithms love us for it. Just as we question the intent behind our media, we must learn to do the same with AI. Each of us has a responsibility to evaluate whether the texts we read or the visuals we see are products of GenAI, subject to persistent algorithmic and user biases.

Most importantly, the digitally native must vocalize what they identify. Mass-market LLMs are only getting better at masking their output’s origins, which will prove an effective toolkit for those looking to craft messaging that targets vulnerable populations. Calling this out requires patience with those who have yet to consider the ramifications and realities of the forced commoditization of creative works for public manipulation.

Today’s political climate makes it easy to lose sight of our governing bodies’ primary responsibility: to act as a voice of the people. Your local representative may not be able to champion the necessary legislation alone – but they are not your only avenue.

Each of us has the capacity and the right to address our federal governments, copyright offices, and key decision makers through direct personal statements calling for revisions of our intellectual property and data privacy protections against GenAI. Familiarize yourself with your country’s relevant legislative systems and articulate your concerns over the unmitigated violation of artists’ rights and our creative common good.



The Council on Business & Society (CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business, society, and planet, including the dimensions of sustainability, diversity, social impact, social enterprise, employee wellbeing, ethical finance, ethical leadership, and the place responsible business has to play in contributing to the common good.
