This is going to be a long one, so strap in.
I sincerely believe AI is here to stay and that it has come too soon. It was unleashed upon a world in which the only considerations were being first and making all the money, and no one stopped to think that perhaps it’s not a good idea to set loose something so powerful without fully analyzing the detrimental effects. Maybe one company will play by a code of ethics and restrict what its AI can do, but there’s no guarantee every individual will think the same. I can almost guarantee you that they won’t.
That’s why I feel that complaining about AI has big James Chapman Energy.
It’s a new arms race on every front. Strap in.
I also think we (as a society) are in the infancy of using AI, still exploring potential applications to see where it fits. The current applications are driven by hype-beast tech bros who haven’t a creative bone in their bodies but seek to apply the newest iteration of late-stage capitalism to every aspect of the creative arts. AI will never be a sufficient replacement for a human artist because a human can be creative and an AI cannot. Right now, an AI is just a giant remix machine. I’ve been using Copilot extensively since Microsoft forced it upon every Windows 11 user, and it’s barely passable as a reverse search engine. It’s even worse at tackling music theory questions, often getting basic concepts completely wrong and trying to bullshit its way through the prompt. Even image generators are not creative; they’ve been trained on a set of data and can only generate new images within those parameters. They cannot create a new style or do anything genuinely new. And as we can see from the Dir en grey “The Devil in Me” PV, AI isn’t going to replace VFX artists anytime soon.
As long as AI hallucinates to the degree that it does, y’all are good. I can only hope that we get to a place where AI finds its place in more subtle ways, and that AI tools augment the artist’s capabilities instead of replacing the artist outright.
But this leads me to my next point:
OpenAI is employing a very deceptive argument. They scraped a bunch of data from the internet for use in their training models without considering whether that work was copyrighted, and without compensating anyone. OpenAI also stated in that submission that, without access to copyrighted works, its tools would cease to function. They cannot simultaneously claim that the data has no value (so they can scrape it with impunity) and that the data does have value (because they need it to function). There are far too many precedents to go through here establishing that data has value and that copyright can extend to it. If I were an artist impacted by these tools, I’d definitely demand payment for my non-consensual contributions to these training models, because if humans had never created the data, there would be nothing to train AI on!
However, I can’t say whether it’s fair use or not, because the law has not caught up to AI. I’ve said it before and I’ll say it again, and I’ll even quote it so it stands out:
Technology is taking a wrecking ball to the axioms of society faster than we can put it back together
Is AI transformative because it’s creating new information or insights from the data? Or does it infringe upon the rights of copyright holders because AI systems can reproduce copyrighted works as a part of their output? Fair use is too complicated to call at the moment. It’s definitely not an absolute.
If you really care about this issue, you just have to wait for someone to push the envelope too far, too fast. Like I said earlier, the law lags behind, but it has an astounding ability to catch up when it affects the Right People. If you need an example, look no further than the Taylor Swift deepfakes. Deepfakes have been around for a while - reddit banned the r/deepfakes subreddit six years ago - and celebs have been dealing with Photoshop jobs since even before that. But convincing Taylor Swift deepfakes spreading across Twitter like wildfire changed the tone of the conversation from silence to “this needs to be regulated”. There are more examples like this yet to come.
My main concerns are the increasing number of scams involving deepfakes, and the flood of AI-generated and AI-regurgitated content that is swamping the internet and taking us many steps closer to a Dead Internet. I’m no longer on Facebook, but I’ve heard it’s getting quite bad.
If you can no longer trust what you observe on the internet, does it begin to lose its usefulness?
Having completed some solid exposition, I will wrap up by saying the AI use in the Dir en grey PV is objectively shit. I don’t like it, it looks weird, and trying to come up with an explanation to cover for it doesn’t fly for me. It’s about as bad as the various explanations for why the film version of Ghost in the Shell has a white woman playing an Asian woman. Even when I completely acquiesce to the argument and buy into the whole “race is fluid” construct in that hypothetical future, there is no denying that the casting is so awkwardly forced into the narrative that it distracts from the philosophical nature of the source material.
I feel the same way about this PV. Just because you can explain why it looks bad doesn’t make it look any less bad, and there’s no question a team of VFX artists would have done a much more convincing job, and maybe even delivered it on time!