I feel the urge to start by saying I consider myself to be generally off the “AI+Society” beat, to the extent that I was ever on it. I used to tell my friends and family not to talk to me about AI outside of business hours because I spent too much of my “day job” keeping up with the increasingly bleak headlines about the “existential risk” hype cycle. Now, my research on equity in clinical risk prediction is taking me in a related but ultimately oblique direction, and I can half-tune out the constant stream of garbage. Or maybe a better way to put it is that “AI+Society” has had to pivot away from questions of statistical learning methodology and toward political and technical questions that are outside my area of expertise—I’ve never formally studied or worked on language modeling, and I don’t intend to start now.
Unfortunately, the “AI+Society” beat has started to bleed into my growing extracurricular interest in literature, because it is becoming increasingly obvious to me that everyone is illiterate now, and it’s at least partly because of the chatbots. I’m not the best person to lay out all the evidence from the education and publishing sectors supporting this claim, but once you notice it, you’ll see it everywhere. “Booktokers” — influencers whose job is, ostensibly, to read — complain about novels that have too many words. Anonymous, unedited Substack posts riddled with typos are circulated by pseudo-intellectual tech bros like the word of God. Don’t get me started on the rising popularity of the “video essay.”
And, if my most recent conference reviewing experiences are indicative of the general state of the field, today’s computer scientists have completely lost the ability to string two sentences together. The few papers I reviewed that were attempting to communicate the result of what seemed like sound and interesting research were shockingly incoherent. If they hadn’t attached code and figures proving that someone on the research team knew how to use a GPU, I’d find myself wondering whether some of these authors were overly ambitious fifth graders. You’d think that a reasonably educated person with underdeveloped language skills would at least think to work in a word processor with a built-in spelling or grammar checker, but alas, I found no evidence of this behavior.
It really just seems like researchers can’t (or won’t) read carefully anymore. Some authors failed to identify major relevant literature from the proceedings of the very conference they were submitting to. Others simply didn’t bother to add citations to work they described directly and referred to by name. Even some of the other conference reviewers spoke in platitudes rather than point out specific parts of the paper that exemplified the points they were making. (The submissions have line numbers! They are in red!!)
Most conferences have a policy against using AI in both submissions and reviews, which is a good thing for several unrelated ethical reasons. But if the authors can’t read or write, and the reviewers aren’t getting paid to proofread and edit these terrible drafts, then who’s flying the plane? Junior scholars are rarely formally trained in scientific writing, so some of this problem can be traced to gaps in access to mentorship and other resources, which is especially unfortunate. Still, I don’t exactly feel bad about gatekeeping publication status from researchers who can’t write — yet I don’t enjoy the process, either. Nobody wants to spend their time trying to parse nonsense cobbled together by strangers.
Maybe I’m being overly harsh. I empathize with everyone struggling with their reading practice in 2025. I took a long hiatus from reading, starting somewhere around my sophomore year of college (I majored in math) and lasting until, well, the summer after I finished my PhD in computer science. Yes, I had to “read” research papers and books to get through my degrees, but any academic will tell you that power-skimming for information on a deadline is a muscle that can be overtrained, ultimately weakening your ability to actually engage with a text — sometimes leaving a negative emotional association with the activity of reading itself. Recovering took time and effort, and I still think I would probably fail a serious college English class, if only because it takes me too long to get through any book that could even marginally be described as “difficult.”
Too many people have thus decided that literacy is just not a personal priority in the age of chatbots and audiobooks. I think this is going to be the first (and possibly only) “existential” threat that AI poses to society. The silver lining for me is that if reading comprehension really does become as rare and coveted a skill as I fear it will, maybe I’ll escape the 2020s with my job security intact. Anybody need a scribe?