Oof. Yet Another WorldCon Controversy

Good afterevenmorn, Readers!

I have been out of the writing-world loop for a bit, wrapped up in my own personal stuff (a terrible combination of absolutely no time whatsoever, “out of sight, out of mind,” and having my head in the clouds as I’m neck deep in the first draft of a book), so I’m quite late to the party. Apparently, Worldcon has once again landed itself in some controversy.

Let me fill you in if you are like I was just two days ago: utterly clueless about it all.

Image by Robert Fotograf from Pixabay

It appears that Worldcon 2025, which will be held in August in the fine city of Seattle, used an LLM (Large Language Model; specifically ChatGPT) to vet panelists for its programming. This created quite the furor. You can read more about it in the File 770 article that covers most of it. Gizmodo also ran an article about it. It’s understandable, really. There’s a lot of bad blood between creatives and the “tech bros” who stole their creations in order to train their LLMs.

I understand the impulse to use AI for this, especially for a convention as huge as Worldcon. I do not volunteer for any conventions, and I have no idea how much work is involved in putting one together, but I can imagine. I expect that a good portion of the manpower is devoted to vetting possible panelists and then matching each one to the panel where they would be the best fit for that particular topic.

AI would likely be a godsend in cutting down the hours required to do it all. Or it would be, if there weren’t so many issues with it.

Image by Dmitriy from Pixabay

It’s not even about stolen creations or jobs here. What was taken in this case was volunteer hours. Is that better or worse? I’m not sure. One person did note why it’s not so great: while the vetting might be a lot of work, it is good work, and can be a lot of fun. I cannot speak to that. But as an intention, saving volunteers’ time doesn’t seem bad to me.

At the most basic, practical level, setting ethics aside entirely, LLMs are not great for vetting things. Examples of AI hallucinating abound; sometimes these models flat out lie, making things up and presenting them as fact. Wasn’t there a recent furor over an LLM citing supporting case law… by referencing cases that simply did not exist? The machine just… made them up. AI is constantly ascribing nonsensical things to people who had nothing to do with them, or confidently answering clearly nonsensical questions (questions specifically designed to prove just how unreliable the application is).

There is also the issue of the inherent bias in the data set that is the internet. In short, the internet is a horribly bigoted place, and any LLM that gleaned its training data from the internet has proven to be racist as all get-out.


A real firestorm

Practically, it doesn’t seem great, given all the problems with AI at present (granted, as the technology improves, that will become less and less of an issue). Ethically, it’s an absolute stinker.

The environmental toll of using AI is absolutely horrific. The energy and water required to keep these things running are appalling. Anyone who cares even remotely about the environment should have serious concerns on that basis alone; if the environment is your priority, any AI use is an absolute no-go.

Then there’s the matter of principle. The attendees of this convention are the very kinds of people who had their creativity stolen in order to train these LLMs. Using the exact application that thieved from the very people in attendance was probably not a great move. It’s quite a slap in the face, if you think about it. Many of those very people, on these grounds alone, feel very strongly that there is absolutely no ethical argument for using AI.

It will not surprise anyone that I’m kinda on their side, on both the personal and environmental issues. I don’t think there can be any ethical reason to use AI. The time it might save doesn’t offset the other considerations here.

Easy for me to say, I know. I’m not trying to organize one of the biggest SFF conventions in the world. I just think that using AI was a stumble and can’t really be justified. At least, not to me.

Thankfully unaffected this time.

It’s become such an issue for Worldcon 2025 that three people have resigned from the board, and one author has withdrawn their books from award consideration (Yoon Ha Lee was in the running for the Lodestar Award, and withdrew following this mess). This, despite assurances that AI went nowhere near the Hugo Awards. Thankfully. I can’t imagine the mess if it had.

Honestly, in terms of controversies attached to Worldcon, this is the least aggravating for me personally. I am not in the running for a Hugo (could you imagine?), and I’m not attending Worldcon this year… or any convention in the US for the next few years. This is not the Sad Puppies, or Sad Puppies adjacent. The awards themselves appear to have maintained their integrity this year. My absolute dislike of LLMs on principle makes me dislike this situation intensely, but it’s not the worst thing that has happened to and with Worldcon.

Thank goodness. I don’t think my blood pressure could handle anything more egregious.


When S.M. Carrière isn’t brutally killing your favorite characters, she spends her time teaching martial arts, live streaming video games, and cuddling her cat. In other words, she spends her time teaching others to kill, streaming her digital kills, and cuddling a furry murderer. Her most recent titles include Daughters of Britain, Skylark, and Human. Her serial The New Haven Incident is free and goes up every Friday on her blog.


K. Jespersen

What? Why? By “LLM,” I’m assuming you’re referring to a generative AI, like ChatGPT. But why use a generative AI for sorting out which panelists go where, when a reactive AI is so much better for that? I mean, reactive AI is practically BUILT for sorting, and that’s why it so often drives personalized recommendations lists.

There’s something wrong with this story, or there’s more to it. It makes no sense. Did someone think a generative AI was just a search engine with synthesis capabilities? In that circumstance, the person or people may not have known it was AI– no harm no foul (WorldCon is not a landmark legal case, and panel membership alone is not going to make or break a person’s life). If the person or people knew it was AI, was it unawareness of the creative controversies around AI? That seems unlikely, unless the people organizing this thing are, as PacReach executive Rupert Innes put it, “rapidly losing touch with the pulse of modern tech.” If they are, and so are relying on their nieces’ and nephews’ recommendations for how to make their jobs easier, they deserve a bit of grace, rather than a kerfuffle. So if this person or these people did know it was AI, either they did know enough to understand the types of AI, or they are the-future-is-now technology adopters. The response to technology adopters is a small slap on the hand and an adjustment of the Sci Fi-to-Fantasy ratio of the blend of organizers. But if the person or people did know it was AI, were aware of the controversies around AI, and were invested enough to be able to understand the different types of AI so as to not offend and still get the job done, why did they not use the right tool for the job???

I’m baffled. The whole thing makes no sense, from top to toe. It smacks of people wanting to take offense. Are we being jerked around by an AI-generated deep fake controversy?

Sarah Avery

I emailed the programming folks to say that, if they had to pitch their current program roster and start from scratch, I would be completely okay with it if I didn’t end up on the new schedule. The person who replied said that they had used AI only after humans had made the first few passes through the applicants and decided who they wanted on program. They used the AI to check the applicants’ presence online and make sure nobody egregious got through. Because of the known hallucination problem, AI seems like the wrong tool for that job. That said, if we assume the guy who replied to me is right, some of the accusations against the program folks would be overblown.

And yet. It’s perplexing to me that I’m (at least for now) on program, but Micaiah Johnson (for example) is not. Did the AI hallucinate some objection to her? Or did a live human decide, somehow, that I would be more interesting on a panel than she would? I’ve been on a panel with Micaiah Johnson, and came away with the impression that she’d be an asset on anyone’s program.
