
I Was Replaced by AI

For more than a decade, I wrote for How Stuff Works. I started as an automotive writer, and after a few years, the editors figured out that I could take on almost any topic they threw at me. So I was farmed out to other departments (except health; they were very picky about those writers having experience and credentials). I wrote about language, and cats in boxes, and science, and more automotive things.

But it started to be less fun, even though my editors did their best to come up with interesting topics. And the pay was so stagnant that it lagged behind every other outlet I wrote for. Again, my editors did everything they could to advocate for the freelance writers and did get us a little bump in pay, but it was not enough to make it worth the time. So I had to tell my editor I was going to stop writing for the website, and she understood.

Then I learned why things had gotten so bad: How Stuff Works was planning to use ChatGPT to update the thousands of articles on the site. I learned that my byline would remain at the top of the article with a new date for the update. Then, at the very bottom of the page, there would be a note saying:

This article was created in conjunction with AI technology, then fact-checked and edited by a HowStuffWorks editor.

I did not create any articles with artificial intelligence (AI) technology, and neither did the vast majority of my fellow freelance writers for the site. ChatGPT did not exist for the twenty-plus years that How Stuff Works has been publishing, so only updates published this month, July 2023, or later could have used AI technology.

I posted about this on Mastodon, where people were kindly outraged or disappointed on my behalf. I’m over my shock now and mostly just irritated. If any future editors search for my name online (and search engines using AI is a whole other concern), they’re going to turn up results from How Stuff Works where it looks like I used AI to create articles.

A few people wondered about payment from How Stuff Works for these updates, so I’ll explain why that’s not an issue. Almost all of my work as a freelance writer is work for hire, which means I do the work and I get paid per piece, per word, per hour—whatever we agree on in a contract. The company owns the copyright, not me. For How Stuff Works, I’d write a few articles a month and then send an invoice. They would pay me a couple of weeks later (they were prompt about that). If there were pieces I liked a lot and was proud of writing, I’d save a PDF as a clip, so I’m glad I have those before any AI updates occur. But I’m only paid once for each article.

Other people wondered how updates are usually done. I wrote a lot about cars and automotive technology, and there have been massive changes in the past fifteen years in that category. So my work was updated by others, and I updated articles by other writers. The original writer would have the first byline, and the updater would get the second. I’d get paid for an update, but the original writer had already been paid whenever they wrote the article. That’s pretty standard at most sites with evergreen-ish content. I might not even be so irritated by the AI updates if it were stated at the top that the article was updated by AI, like a human-written update would be, rather than having a note at the bottom saying it was created by AI.

The final piece of that note at the bottom of the updated page says that the articles are checked by How Stuff Works editors, but I know my usual editor was laid off, along with a couple of others. I can only take the company at its word that trained editors are checking what ChatGPT spits out for accuracy.

The farther-reaching worry is that websites like How Stuff Works are often part of the data sets that large language models (LLMs) like ChatGPT are trained on. So if the training data is created by AI and contains believable errors, then the AI that scrapes it for answers will repeat those errors and possibly add a few new believable ones. The information will become entirely unreliable. It will be informational inbreeding, an AI Habsburg jaw situation. (A Mastodon friend preferred an ancient Egyptian royalty metaphor. You can choose whichever you like.)

So I guess this all seems shortsighted, like corporate FOMO that’s not very well thought out. Kind of like the ending to this essay.