Clarkesworld founder Neil Clarke has detailed the magazine's struggle to detect plagiarized and AI-generated stories, a problem that began around the start of the pandemic. Through 2020, he consistently detected fewer than 25 such submissions a month, most of them computer-assisted rather than fully machine-written.

A small surge followed OpenAI's release of ChatGPT in late November 2022. Plagiarized or AI-generated submissions rose to 50 a month in December, doubled to over 100 in January, and then more than tripled to over 350 by mid-February. Shortly after Clarke's post announcing the suspension of new submissions, another surge pushed detections past 500 by February 20. Announcing the suspension, Clarke said that detected spam accounted for 38% of total submissions in February.

The flood of machine-written work extends beyond magazines. Mary Rasenberger, executive director of writers' group the Authors Guild, told Reuters that human ghostwriting has a long tradition, but argued there should be transparency from authors and platforms about how such AI-assisted books are created.

Clarkesworld does have a policy on AI-written stories: "We are not considering stories written, co-written, or assisted by AI at this time."

"Our guidelines already state that we don't want 'AI' written or assisted works. They don't care. A checkbox on a form won't stop them. They just lie," Clarke tweeted.

Developer Q&A site Stack Overflow still has a temporary ban on AI-generated submissions after its moderators were overwhelmed by plausible-looking but wrong answers just one week after OpenAI released ChatGPT. The site had detected ChatGPT-generated posts in the "thousands".

Clarke acknowledged there are tools available for detecting plagiarized and machine-written text, but noted they are prone to both false negatives and false positives. OpenAI recently released a free classifier for detecting AI-generated text, while cautioning that it is "imperfect" and that its practical usefulness is unproven. The classifier correctly identifies only 26% of AI-written text as "likely AI-written" (its true positive rate), and it incorrectly flags human-written text as AI-written 9% of the time (its false positive rate). The worked example at the end of this article shows why those rates make the tool hard to rely on.

Clarke outlines a number of approaches publishers could take besides third-party detection tools, which he believes most short fiction markets can't currently afford. These include blocking submissions sent over a VPN, or blocking submissions from regions associated with a higher percentage of fraudulent submissions; a sketch of that kind of screening also appears below.

"It's not just going to go away on its own and I don't have a solution. I'm tinkering with some, but this isn't a game of whack-a-mole that anyone can 'win.' The best we can hope for is to bail enough water to stay afloat," wrote Clarke.

"If the field can't find a way to address this situation, things will begin to break. Response times will get worse and I don't even want to think about what will happen to my colleagues that offer feedback on submissions. No, it's not the death of short fiction (please just stop that nonsense), but it is going to complicate things."
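To see why the classifier's published rates are a problem in practice, here is a quick back-of-the-envelope calculation. The 26% true positive and 9% false positive figures come from OpenAI's announcement, and the 38% spam share is Clarke's own February figure; the batch size of 1,000 submissions is purely an assumption for illustration.

```python
# Illustrative only: TPR and FPR are OpenAI's published figures;
# the batch size is assumed, and the spam share is Clarke's
# reported February number.
TPR = 0.26   # share of AI-written text correctly flagged as "likely AI-written"
FPR = 0.09   # share of human-written text wrongly flagged as AI-written

submissions = 1000           # hypothetical monthly slush pile
spam_rate = 0.38             # Clarke's reported February spam share

spam = submissions * spam_rate
human = submissions - spam

true_positives = spam * TPR        # spam correctly flagged
false_positives = human * FPR      # real authors wrongly flagged
missed_spam = spam - true_positives

precision = true_positives / (true_positives + false_positives)

print(f"Flagged spam:            {true_positives:.0f}")
print(f"Flagged real authors:    {false_positives:.0f}")
print(f"Spam slipping through:   {missed_spam:.0f}")
print(f"Chance a flag is right:  {precision:.0%}")
```

On those assumptions, roughly one in three flags would land on a human author, while nearly three-quarters of the machine-written submissions would slip through unflagged, which is consistent with Clarke's view that such tools aren't yet a workable filter.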
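And here is a minimal sketch of the kind of network-level screening Clarke mentions: turning away submissions that arrive over a VPN or from high-fraud regions. Everything concrete in it is a placeholder. The IP ranges are reserved documentation blocks standing in for a commercial VPN list, the country codes are invented, and the geolocation lookup is stubbed out where a real system would query a GeoIP database (MaxMind's GeoLite2 is one example of such a source). This illustrates the approach, not Clarkesworld's actual implementation.

```python
import ipaddress

# Placeholder data: documentation ranges standing in for a
# maintained VPN/proxy exit-node feed.
KNOWN_VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

HIGH_FRAUD_REGIONS = {"XX", "YY"}  # invented country codes

def country_of(ip: str) -> str:
    """Stub: a real system would query a GeoIP database here."""
    demo_table = {"203.0.113.7": "XX", "192.0.2.10": "US"}
    return demo_table.get(ip, "??")

def accept_submission(ip: str) -> tuple[bool, str]:
    """Screen a submitter's IP before the story enters the slush pile."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in KNOWN_VPN_RANGES):
        return False, "submission over a known VPN range"
    if country_of(ip) in HIGH_FRAUD_REGIONS:
        return False, "region with a high share of fraudulent submissions"
    return True, "accepted for the slush pile"

for ip in ("203.0.113.7", "192.0.2.10"):
    ok, reason = accept_submission(ip)
    print(ip, "->", "accept" if ok else "reject", f"({reason})")
```

The tradeoff is obvious from the code: membership tests against static lists are cheap to run, but the filter is only as good as the VPN feed and region list behind it.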