Oof. Yet Another Worldcon Controversy
Good afterevenmorn, Readers!
I have been out of the writing-world loop for a bit, wrapped up in my own personal stuff (it’s a terrible combination of absolutely no time whatsoever, “out of sight, out of mind,” and having my head in the clouds as I’m neck-deep in the first draft of a book), so I’m quite late to the party. Apparently, Worldcon has once again landed itself in some controversy.
Let me fill you in if you are like I was just two days ago: utterly clueless about it all.

It appears that Worldcon 2025, which will be held in August in the fine city of Seattle, used an LLM (Large Language Model; specifically ChatGPT) to vet panelists for its programming. This created quite the furor. You can read more about it in the File 770 article that covers most of it. Gizmodo also had an article about it. It’s understandable, really. There’s a lot of bad blood between creatives and the “tech bros” who stole their creations in order to train their LLMs.
I understand the impulse to use AI for this, especially for a convention as huge as Worldcon. I do not volunteer for any conventions, and I have no idea of the amount of work involved in putting one together, but I can imagine. I expect that a good portion of the manpower is devoted to vetting possible panelists and then matching them to the panels where they would be the best fit for each particular topic.
AI would likely be a godsend in cutting down the hours required to do it all. Or it would be, if there weren’t so many issues with it.

It’s not even about stolen creations or jobs this time. What was taken in this case was volunteer hours. Is that better or worse? I’m not sure. There was one mention of why it’s not so great: while vetting might be a lot of work, it is good work, and it can be a lot of fun. I cannot speak to that. But as an intention, saving volunteers’ time doesn’t seem bad to me.
At the most basic, practical level, without regard to ethics at all, LLMs are not great for vetting things. Examples of AI hallucinating abound; sometimes these tools flat out lie, making things up and presenting them as fact. Wasn’t there some recent furor over an LLM citing supporting case law… by referencing cases that simply did not exist? The machine just… made them up. AI is constantly ascribing nonsensical things to people who had nothing to do with them, or making up answers to clearly nonsensical questions (specifically designed to prove just how unreliable the application is).
There is also the issue of the inherent bias in the dataset that is the internet. In short, the internet is a horribly bigoted place, and any LLM that gleaned its training data from the internet has proven to be racist as all get-out.

Practically, then, it doesn’t seem great, given all the problems with AI at present (granted, as the technology improves, that will become less and less of an issue). Ethically, it’s an absolute stinker.
The environmental toll of using AI is absolutely horrific. The energy and water requirements for keeping these things running are appalling. Anyone who cares even remotely about the environment should have serious concerns about using it on that basis alone. If all you care about is the environment, then any AI use is an absolute no-go.
Then there’s the issue of principle. The attendees of this convention are the very kinds of people who had their creativity stolen in order to teach these LLMs. Using the exact application that thieved from the very people in attendance was probably not a great move. It’s quite a slap in the face, if you think about it. Many of those very people, on these grounds alone, feel very strongly that there is absolutely no ethical argument for using AI.
It will not surprise anyone that I’m kinda on their side, on both the personal and the environmental issues. I don’t think there can be any ethical reason to use AI. The time it might save doesn’t offset the other considerations here.
Easy for me to say, I know. I’m not trying to organize one of the biggest SFF conventions in the world. I just think that using AI was a stumble and can’t really be justified. At least, not to me.

It’s become such an issue for Worldcon 2025 that three people have resigned from the board, and one author has withdrawn their book from award consideration (Yoon Ha Lee was in the running for the Lodestar Award, and withdrew following this mess). This, despite assurances that AI went nowhere near the Hugo Awards. Thankfully. I can’t imagine the mess if it had.
Honestly, in terms of controversies attached to Worldcon, this is the least aggravating for me personally. I am not in the running for a Hugo (could you imagine?), and I’m not attending Worldcon this year… or any convention in the US for the next few years. This is not the Sad Puppies, or Sad Puppies-adjacent. The awards themselves appear to have maintained their integrity this year. My absolute dislike of LLMs on principle makes me dislike this situation intensely, but it’s not the worst thing that has happened to and with Worldcon.
Thank goodness. I don’t think my blood pressure could handle anything more egregious.
When S.M. Carrière isn’t brutally killing your favorite characters, she spends her time teaching martial arts, live streaming video games, and cuddling her cat. In other words, she spends her time teaching others to kill, streaming her digital kills, and cuddling a furry murderer. Her most recent titles include Daughters of Britain, Skylark and Human. Her serial The New Haven Incident is free and goes up every Friday on her blog.