Hello and welcome to yet another episode of Technocracy. I am Heena Goswami, editorial consultant with IGPP - Institute for Governance, Policies and Politics, New Delhi.

In March this year, OpenAI launched a new GPT-4 feature that allows users to re-create their pictures in an animated style closely resembling the art of Studio Ghibli. The feature thrilled users, and social media was soon flooded with Ghibli-style pictures. However, it also reignited the debate about copyright violation, creativity, and whether AI will spell doom for artists.



Before we go into the larger debate, let us understand what triggered it all. Studio Ghibli is a famed Japanese animation studio, known for films that, even in this day and age, are hand-drawn and feature richly detailed backgrounds. With GPT-4’s new feature, art that took long hours, years of training, and rare talent to create can now be replicated in seconds. Further, the studio’s co-founder, Hayao Miyazaki, is no fan of AI-generated art. Of AI art, he is reported to have said, “I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.”

Now, as the founder had clearly expressed his distaste for incorporating AI into his work, the unveiling of the new feature sparked debate over whether OpenAI violated copyright law, since the image generator closely mimics the hallmark animation style of the studio and its animators. Nor is this the first copyright controversy large tech companies have faced: there have been several disputes over the use of copyrighted material as training data for AI models.



In the USA, comedian Sarah Silverman and fellow authors sued ChatGPT maker OpenAI and Meta, alleging that their copyrights were infringed in the training of the firms' AI systems. The case against OpenAI alleges that without the authors' consent "their copyrighted materials were ingested and used to train ChatGPT". Similarly, The New York Times sued OpenAI and Microsoft for the unpermitted use of Times articles to train GPT large language models. Actress Scarlett Johansson accused OpenAI of giving its AI assistant a voice similar to hers, despite her declining an offer to voice the assistant herself. Closer to home, the news agency ANI filed a lawsuit against OpenAI for alleged copyright violations, saying OpenAI used its news content to train ChatGPT without permission.

The U.S. Copyright Act and the Indian Copyright Act, along with international treaties such as the Berne Convention, grant creators exclusive rights over their original works: creators must consent before their works are reproduced or adapted. However, certain principles of copyright law place AI-generated work in a grey area.

For example, should AI-generated content be considered derivative or transformative? A derivative work is one that is “recast, transformed, or adapted” from an original work; a transformative work, by contrast, adds something new to the original, altering it with new expression, meaning, or message. Creating a derivative work requires permission from the copyright owner of the original; creating a transformative work does not.



Further, copyright safeguards the expression of an idea, not the idea itself, a principle known as the idea-expression dichotomy. This means that while a specific artwork (such as a painting or illustration) is copyrightable, the broader artistic style or technique used to create it is not. There is also the fair use doctrine, a legal principle that allows limited use of copyrighted material without the copyright holder's permission in certain circumstances, such as criticism, comment, news reporting, teaching, scholarship, or research.

At the moment, the originality and legality of AI-generated art remain a grey area: none of the world's copyright laws were originally designed to address AI-created works, and most of the cases discussed above have yet to be resolved. But the controversy surrounding Ghibli-style art also raises some important ethical questions.

If AI can, in seconds, reproduce the fruit of generations of training and hard work, what is the incentive for artists to create art? In this world of fast consumption, traditional art forms, whether painting, craft, or décor, were already under pressure, and as generative AI becomes more commonplace, artists stand to lose more than ever. Already, design studios and marketing agencies are doing away with designers and illustrators and simply prompting AI to create art that suits their needs. What will happen to those who are being displaced, or who won't be hired now?



OpenAI CEO Sam Altman has defended AI-generated art, saying it is not meant to replace artists but to help people be more creative. Yet would we tell our children to become painters, singers, or designers, and to put years of training into it, if AI is there to create and replicate everything? No matter how unique one's art is, AI will mimic it in seconds.

Furthermore, was consent taken from the artists whose work was used to train AI? Were they aware of the ways in which their art would be turned into data and fed into an application that would creatively replicate it for the use of others? Would they have said yes to such a proposition if they were?

Lastly, the environmental cost of generating AI art is not insignificant. The frenzy over Ghibli-style photos was such that OpenAI struggled to keep up with demand; as its GPUs overheated, rate limits were introduced to restrict Ghibli-style image generation. The question is: just because AI can create art, should it?

To answer such questions and understand the legal and ethical implications surrounding AI generated art, today we are joined by Mr. Matt Blaszczyk. Matt Blaszczyk is a Research Fellow in Law and Mobility at the University of Michigan Law School and Managing Editor of the Journal of Law and Mobility. Matt's scholarship focuses on intellectual property law, antitrust, contracts, and the intersections of law and technology.


So to my first question. The recent update to ChatGPT, which allows users to create animated versions of their pictures, has ignited a debate on artistic rights. Some say that if AI is going to produce art, it might stifle human creativity and innovation in the long run. What is your take?


Well, that's the big question, right? I don't know. I don't think there's an easy answer. There are commentators who argue that AI will bring an era of semiotic democracy, where basically everyone can participate in the cultural discourse. Some scholars think that it would be a fallacy to think that it's AI that generates stuff. No, it's actually we, the people, who are making cultural outputs with the aid of AI.

That said, obviously, with limited prompts, there is a danger of AI doing most of the work. So, in copyright scholarship and in the guidance that the Copyright Office has provided, there is this difference between AI-assisted and AI-generated works.

One of the key considerations is how much creative involvement, how much creative control, a human being has versus how much the machine actually dictates, generates—how much of it is unpredictable. And I think that, well, at least so far, the U.S. Copyright Office's perspective is as follows: if a human being has creative control over the work, the work is protectable, at least in principle.

Whereas, if the work results from simple prompts, then it will not be protectable, because the AI will break this causal creative chain between the human author and the output, and the work will be AI-generated. So, the law is supposed to promote human creativity, human progress, and whether it will—and whether that will be sufficient—I guess we'll see.


How about copyright violation? From what we know about Ghibli art and the pictures that are taking over the internet, the founder of Studio Ghibli was no fan of AI art. So does this raise valid concerns about copyright infringement? Because if he did not care for AI, then how is OpenAI or GPT-4 able to create pictures in a style that so closely mimics his?


Right. He famously said that looking at AI, it seems like we humans are giving up on ourselves. Maybe we are. That goes back to your first question. Maybe it is really true that in the AI era, human creativity as such will diminish, or at least as traditionally conceived. But with Ghibli art, it's quite interesting because there are several copyright questions here.

One is about the protectability of artistic style. Style is generally not protectable. In copyright, we have this foundational concept of the idea-expression dichotomy, and generally, artistic style is thought of as more of an idea, which is unprotectable, than an expression, which is protectable. I think that many of these outputs that people generate would be borderline cases. They would be possibly infringing, but with claims that are quite difficult to make out.

So I think we can see here that there is a limit to cultural safeguards that copyright law can provide. I think that we are actually entering a new era of creativity, of memetic culture, where people make this stuff. These are new phenomena, and we may differ in how we assess them, how we react to them, but I'm not sure that the law will ever step in to stop it.

There are currently, I think, 40 lawsuits in the US related to AI model training and infringement. The claims that pertain to artistic style would be some of those that are most difficult to make out, I think.

As for the more orthodox claims, it's still very much an open debate, and we'll see what happens. I don't think there are settled questions. Scholars disagree. The courts so far aren't giving clear answers. The first orders that we've received basically leave it open still. Right?


So as I understand it, none of the original laws were meant to deal with AI-generated work, and the law has a lot of catching up to do. Can you briefly explain to our audience what existing regulations deal with copyright infringement? Although I know it's still a grey area, laws are evolving, and we don't have a clear answer from the courts either. And also, another speculative question: in the case of the Ghibli art that is being generated, would that, in your opinion, fall under AI-generated art or AI-assisted art?


Right. Well, so here I think we should distinguish between the approach that the US has taken so far, which—you know—no new regulations have been introduced, only Copyright Office's guidelines, which—you know—they aren’t law as such. The Copyright Act has not been amended. And this we can differentiate from, say, the European approach, the one that Japan or the UK have taken, where several jurisdictions have introduced explicit text and data mining exceptions.

In Europe, there is the framework in the directive which allows for text and data mining for research purposes to proliferate and introduces this opt-out mechanism for authors to be able to reserve their rights in the commercial context. The UK is currently debating this, and there is a lot of discussion both among politicians and in the Parliament and among the wider public about what should be done. And in the US, the discussion is also on, but the rather interesting thing is that apart from stakeholders who are pushing for a more permissive legislation and regulatory framework, I think that for now at least, the future direction will be actually battled out in courts.

And that means that we go back to copyright principles—the principles of infringement analysis and of fair use, which is the primary exception, primary defense in US copyright law. And basically, when one makes out a prima facie case for infringement, then there's a question of whether the defense of fair use applies. And there, according to the 1976 Act, the courts are supposed to look at four factors:


- the purpose of the use of a copyrighted work,

- the nature of the work,

- the amount and substantiality of what was taken,

- and a more economic consideration: the effect of the use upon the potential market for, or value of, the copyrighted work.


So basically, whether the use will bring economic harm to the work that was infringed, whether they will compete on the same market, and whether there's a substitution going on.

And there are—it's, you know, four factors—it's not very clear. There isn’t a very clear way how to apply them in every single case. There are different theories of how they should be applied to the generative AI context. So, some scholars would like to focus on transformativeness, claiming that, say, the Ghibli art is transformative. It takes some unprotectable elements of that Studio Ghibli—that are associated with Studio Ghibli's work—and it applies it to whatever you want. Right? It can be whatever you put into it.

There are scholars who claim that non-expressive use is the theory to follow—so again, focusing on those foundational principles—what is expressive, what is non-expressive, what is protectable, what isn’t, and conducting analysis along those lines. And at the same time, factor four—the more economic one—will also be important. It already has been discussed quite a lot in all of the cases, but also in the order that was rendered in Thomson Reuters, the case that focused on headnotes of cases. Right? So, slightly different context than generative AI, but still—how much should the company be able to take, what actually is transformative, and whether there is economic competition going on.

And I think that there's also the element of political reality that comes into play. I think that it would be difficult to expect a turn in the courts which would just stop the technology from being developed. In a lot of the policy discussions in the US, the question of being the leading jurisdiction for development of generative AI is definitely a factor. I think that with the emergence of DeepSeek and of models in other jurisdictions that are able to compete with the ones being developed here, a lot of people are focusing on the geopolitical issue—on the question of not allowing for technology to be developed offshore rather than in the country.

And I think that these are the considerations that will be applicable in every jurisdiction, whether it's the US, or Europe, or India. So, the orthodox doctrine—however difficult to understand—it also interplays with those more realist considerations. And, you know, there are forty lawsuits out there. There will not be one answer to the question of what's allowed. Obviously, they depend on the facts of each particular case. And I expect the general direction to be rather permissive than otherwise.


Taking into account the political realities, we know that there are litigations which have been filed to address the question of whether it was right for companies to use the copyrighted material for training their models. But we do know that there are only a few large companies which are dominating the gen AI market. So can litigation effectively address the fundamental issue of monopolistic business practices? And we know that many regulatory frameworks are not equipped to handle data-driven monopolies. What can be the alternative mechanism?


Well, so right, this is the big question. The previous administration in the US, and also in the UK and in the EU, were quite vocal about the fact that there is no AI exception from antitrust law—that regulators should scrutinize monopolistic practices, they should scrutinize concentration in the market, and they should scrutinize unfair practices, whether collusive or on the consumer protection side. The FTC was very vocal about this. But then the elections happened, and I think that some of these questions are being re-evaluated. There are currently congressional hearings happening.

And I think that it's, again, difficult to imagine that antitrust regulation will stop AI technologies from being developed. I think that especially with the global competition angle coming in, the more orthodox principles may be followed, but a structural analysis of the market—one that would be more interventionist—will not.

So I think that it's not that antitrust will save us, because copyright cannot. I don't think there's anyone who will save us from AI taking over the world. I think it will just happen, and maybe that's the scary part. But I think that the only way forward is to, you know, embrace it.


I'll come to the scary part later, but my... another question remains to me is that tech companies such as Stability AI have proposed an opt-out method for excluding copyrighted work from training datasets. And to me, that defies common logic — that if I'm an artist, why do I have to opt out of your training datasets rather than you taking permission to use my work, right? So such a mechanism, you know, persuaded by the political realities of the day — would such an opt-out mechanism crumble the existing copyright mechanism as we know it? And would my work be free for all AI techs to use, and I can do nothing about it?


Well, so this is the subject of debate in the UK, especially because the government has first considered introducing the opt-out mechanism, then backed out of it, then reintroduced that into consideration, then kind of backed out, but then is considering it again. There is a lot of debate, and I think that there is an argument — an argument to be made — that the opt-out mechanism does actually conflict with settled copyright principles. As you said, I think that it introduces formalities where there shouldn't be any, and that conflicts with the international framework, with the Berne Convention. It conflicts with the set of principles quite simply.

But I think that—and, you know, the same applies to any jurisdiction that considers it—the EU introduced the opt-out mechanism for text and data mining in the commercial context. And there are technical difficulties with implementing it. There are also practical difficulties with people being able to litigate, affording litigation, being able to win any damages that are satisfactory.

And I think that the opt-out mechanism is an option, but we are slowly entering a climate where I think the biggest stakeholders will be pushing for eradication of copyright altogether. Last week — or this week, maybe — Jack Dorsey and Elon Musk were chatting on Twitter about getting rid of IP completely. And I think the opt-out option is kind of in that direction.

Maybe the big tech companies will decide soon enough that they don't need copyright at all. That's probably not realistic. I think the opt-out mechanism would be more probable. But yes, it definitely isn't an orthodox way of doing things.

The question is whether that's justified — whether copyright should strive at coherence or whether it should try to adapt to new technological scenarios. And I think that a lot of people think of it as just responding to the market, just responding to the technology, and trying to be flexible. Then we may think that an opt-out mechanism is better.

Is it fair to the artists and copyright stakeholders? Well, probably not.


I think the opt-out mechanism also fixes responsibility in the wrong place. It should be the responsibility of the AI companies to seek informed permission, rather than for artists to go around checking who is using their work for what purpose.

Now let's come to the sad part of generative AI. If AI can paint, if it can crack jokes, if it can write news, novels, you know, make illustrations, wouldn't human creativity be stifled and jobs be lost? In a country like the US, you can make the argument that we need to embrace the reality. But in a country like India, about 5 million people are added to the workforce every year who don't find a job. If AI is able to do the jobs of illustrators, designers, and writers, even more people would be unemployed every year.

So are we sure that this is the direction that we need to take AI in? Do we really need AI in newsrooms, museums, law firms?


Yeah, well, so I think the problem is that you'd need someone to be willing to stop this technological progress from happening and this economic change from happening. And I don't think there is anyone who will intervene.

So this is the scary part of the reality — that, you know, the courts will not stop the technology from progressing. Regulators will not do it. And I think that automation is happening, it's going to happen, and I don't think that realistically there's a way to stop it.

I think that, you know, it's simple laws of economics — that if one company can generate 10 times more output or, you know, the same amount but much more cheaply than the other one, then they will win, right? And obviously, it's cheaper to pay for GPT than for a team of human beings. So I think that's unstoppable, realistically, right, taking into account the political reality.

The difficult part is that I think it will come with unemployment, it will come with job displacement, and I don't think there's anyone who has a good answer for what to do about it.

There are people who try to minimize the danger and say, "Oh, we've always thought the same when, you know, the Industrial Revolution happened or when computers came about." But I think, realistically, AI is slightly more powerful as a tool than, you know, the steam engine, right? It automates more. It actually is able to produce a lot of outputs that are as good as the ones made by human beings.

And you know, there are people who think that if AI can do it, then, you know, it just should — and maybe the humans who would have done it otherwise aren't that good at their jobs. And I think that's one way to look at it.

The other is that, you know, soon enough, most of us will not be good enough to compete. And maybe the way forward would be to consider universal basic income or one of those solutions.

Again, I think much more difficult to implement in developing countries than in developed ones. I think quite difficult to implement in the US, given its political reality. So I don't think there is an easy answer.

I think that what Gen AI will bring is a fundamental reshaping of our political regimes. I think that a concept of liberal democratic culture, you know, designed for a capitalist society where we kind of are all equal because we all participate in the economy and we all have economic value as workers — I think that's put into question more and more.

And again, you're like, what can we do about it? It's just happening.


I think politically we are so polarized that we cannot sit at the same table and think of solutions at a global level, in a way that benefits humanity as a whole. And I think we are going in a direction where there is benefit or profit for tech corporates while the vast majority suffers. And the discourse on immigration and migration, and the cultural divides, have become such that we don't think it useful to think of other people. It makes more sense for us to think well of tech corporates than of the people who really need help, or who really need to be thought about.

Thank you so much for joining us today, and thank you for agreeing to be part of this show.


Of course. My pleasure. Thank you so much for having me.
