What can we conclude from the rash of published papers with obvious fingerprints of ChatGPT?
Over the last few weeks, there’s been a small flood of cases in which a published paper turns out to bear clear fingerprints of its authors’ use of ChatGPT (or other so-called “artificial intelligence” tools). By “fingerprints” I don’t mean the kind of odd-but-acceptable phrasing ChatGPT sometimes comes up with. I mean laugh-out-loud ridiculous things like...