[Image: Themis figurine at a lawyer's office]

Is AI My Competition?

In the summer of 2023, I stopped writing for HowStuffWorks after more than a decade of freelancing for the company. After I left, I learned from my editor (who has also since left the company) that article updates would be performed by artificial intelligence. I posted about this change in several places, including on Mastodon. It was a fairly popular post for my little account, and it still makes the rounds every once in a while. A new person found my post six months after I shared it and asked:

As a writer, does it offend your sense of justice that you are competing for column inches and market share against other writers who use AI assistance?

I found the way the question was worded interesting. It’s not the usual way that creative people are asked about AI. It doesn’t ask, “Are you afraid of AI?” or “Do you use AI in your creative practice?” Instead, it uses some very specific terms, so I’m going to answer by examining those terms.

Column Inches and Market Share

Let’s start with the easy ones. Thanks to the internet being basically infinite and computers making print and online layout far easier to adjust, writing assignments are usually given as word counts, not column inches. And market share is something companies have, not individual freelancers. The outlets that hire me have market share, whatever their markets might be, but I don’t. So I’m not really competing for either of these. I think the asker may have meant to ask about assignments or gigs instead, but even then I’m not sure I’m competing.

Competing

I’ve been a freelance writer and editor since 2006, and I’ve never considered other writers to be my competition. There are so many of us covering so many topics and with so many different strengths and weaknesses that it’s tough to even compare most of our work, let alone compete against each other.

There have surely been times when I was competing for scarce resources, whether it was the dollars in a publication’s freelance budget or the available space in a print publication, and I think this is the kind of competition the question asker was getting at. I still don’t view this kind of competition as adversarial; I’ve found my freelance colleagues to be very generous. If we’re too busy for an assignment, we often pass it along to someone we know will be great at it. If we get gigs at new publications with room for more freelancers, we share that information.

So it looks like another no when it comes to competition.

AI Assistance

I’m guessing that the question asker means large language models (LLMs) like ChatGPT when they write “AI assistance.” I’m sure there are other types of artificial intelligence assistants and LLMs out there, and I don’t pretend to know much about any of them. I’m sure there are AI applications and use cases that make perfect sense. I’ve read that AI is being used in some scientific arenas to produce significant breakthroughs. And I’ve read that many people with disabilities find LLMs helpful for organizing thoughts and creating initial scripts and drafts to help them communicate. Hard to find fault with those kinds of uses. But I’m also definitely not competing against them.

The kind of situation I think the question asker means is when someone uses LLMs to create articles that I otherwise might have been assigned for a publication. But even in that case, I don’t think I’m competing against the AI so much as I’m competing against the writer.

Sense of Justice

This is where things get very tricky, and I will tell you now that I do not emerge with any kind of clarity after considering what a sense of justice might mean in the case of writers using AI assistance. I did some reading that helped me ask better questions, though. I think this is what some people call “learning in public.”

I found the Stanford Encyclopedia of Philosophy entry on justice written by David Miller to be very helpful for getting my feet under me with this topic. The essay begins with a definition of justice that comes from Rome in the sixth century CE. Miller parses it this way:

Justice has to do with how individual people are treated…. Issues of justice arise in circumstances in which people can advance claims—to freedom, opportunities, resources, and so forth—that are potentially conflicting, and we appeal to justice to resolve such conflicts by determining what each person is properly entitled to have.

So justice assumes a state of affairs that’s been brought about by an agent. Who or what is the agent in the Mastodon question? Is it the AI itself? The writer who is using it as an assistant? The editor or publisher who pays for quantity of words or posts over quality of consideration and craft? Is it the creators of ChatGPT? I’m not sure which of these could be called the agent that is causing an injustice to me, so I’m not sure there’s an injustice being committed.

I wonder if the situation inches closer to injustice when we consider that LLMs are trained on copyrighted works. Bear in mind that a work does not have to be registered with the US Copyright Office to be copyrighted—the mere fact of its being committed to paper or screen or recording device is enough to claim a copyright. In the United States, we currently live in a culture where everything is owned, from blades of grass on plots of land to the words on a screen. In the case of a blog post, the author or publisher owns those words. In the case of social media posts, things get trickier. The author of most posts owns the copyright, but the terms of service usually give the platform the right to copy and share your words (that’s how it becomes social). But none of these things are owned by the companies that built ChatGPT and used that data to train the LLM. Copyright was built on shifting sands, and the first lawsuits have been filed to try to shift—or not shift—the sands again now that this new technology exists.

Do the creators of ChatGPT (and other LLMs like it) count as agents when it comes to copyright? Are their claims to resources and opportunities in conflict with mine? That’s possible, since the LLM could perform tasks that I would otherwise be paid to do. Justice may in the near future be called on to determine what I, or the New York Times, or any number of other entities, are properly entitled to have.

Which brings me to two flavors of justice worth thinking about in this circumstance: conservative justice, which adheres to existing laws even when new circumstances crop up, and ideal justice, which acknowledges that new circumstances require new laws to address them adequately. In the context of this question, should I rely on the existing laws to deliver justice when a writer using AI gets an assignment that I don’t get, or does the situation mean we need new laws that address the very different circumstances of a world with human writers and LLMs vying for the same work?

As Miller wrote: “Do those whose prior entitlements or expectations are no longer met have a claim to be compensated for their loss?” My prior entitlements might be to writing a particular assignment, and I might reasonably have expected to get some gigs based on my years of experience in particular fields of writing. If those entitlements and expectations are no longer being met thanks to writers who use LLMs, should I be compensated for my loss of work? We will almost certainly need new laws to address this situation; under those new laws, should human writers like me be compensated for such losses?

Daniel Sznycer and Carlton Patrick have done extensive research on our sense of justice and have determined that it’s an innate feature of the human brain. In one of their studies, they presented modern people with offenses committed 3,800 years ago in Mesopotamia, and the study subjects came to basically the same conclusions about justice as the laws written millennia before. One of the situations outlined in the Laws of Eshnunna used in the study described “failing to keep one’s aggressive ox in check, resulting in a slave being killed by the ox.” Are the creators of AI and LLMs failing to keep their aggressive systems in check, resulting in jobs being killed by technology? Is this the same kind of conflict that would produce the same sense of injustice, and therefore the same right to restitution or to being made whole in some way? When I ask the questions this way, I think maybe it is an injustice.

The few works I consulted on justice (and its opposite, injustice) require that there be a conflict. Without conflict, there is no call for justice or restoration. And conflict requires an agent. Which brings me back to asking who the agent is in this potential conflict: the creators of ChatGPT, the writer using ChatGPT, or ChatGPT itself. Can humans feel a sense of injustice when the opposing agent causing the conflict is not human? Do we feel a sense of injustice toward the ox or toward the person who allowed the ox to run free? Or toward neither, because the circumstances are so novel that new laws are required? Are we dealing with conservative justice or ideal justice?

I warned you that I have more questions than answers, but I’m very happy to keep considering this. I’m glad that someone on Mastodon asked me this question. There’s far more than even these questions to ponder as the situation changes. The morning I sat down to refine this essay, I came across two additional and probably relevant essays published by 404 Media and UnHerd that I have yet to digest. My thinking on AI and LLMs has also been influenced by Chuck Wendig’s recent art barf robot post and Nick Cave’s thoughtful take in the New Yorker.

Order KHG’s latest translation, Memoirs of a French Courtesan Volume 1: Rebellion, available now as a paperback or ebook.