Everyone is illiterate now (conference reviewing rant)

I feel the urge to start by saying I consider myself to be generally off the “AI+Society” beat, to the extent that I was ever on it. I used to tell my friends and family not to talk to me about AI outside of business hours because I spent too much of my “day job” keeping up with the increasingly bleak headlines about the “existential risk” hype cycle. Now, my research on equity in clinical risk prediction is taking me in a related but ultimately oblique direction, and I can half-tune out from the constant stream of garbage. Or maybe a better way to put it is that “AI+Society” has had to pivot away from questions of statistical learning methodology and toward political and technical questions that are outside of my area of expertise—I’ve never formally studied or worked on language modeling and don’t intend to start now.

Unfortunately, the “AI+Society” beat has started to bleed into my growing extracurricular interest in literature, because it is becoming increasingly obvious to me that everyone is illiterate now and it’s at least partly because of the chatbots. I’m not really the best person to lay out all the evidence from the education and publishing sectors supporting this claim, but once you realize it, you’ll see it everywhere. “Booktokers” — influencers whose job is, ostensibly, to read — complain about novels that have too many words. Anonymous, unedited Substack posts riddled with typos are circulated by pseudo-intellectual tech bros like the word of God. Don’t get me started on the rising popularity of the “video essay.”

And, if my most recent conference reviewing experiences are indicative of the general state of the field, today’s computer scientists have completely lost the ability to string two sentences together. The few papers I reviewed that were attempting to communicate the results of what seemed like sound and interesting research were shockingly incoherent. If they hadn’t attached code and figures proving that someone on the research team knew how to use a GPU, I’d have found myself wondering whether some of these authors were overly ambitious fifth graders. You’d think that a reasonably educated person with underdeveloped language skills would at least think to work in a word processor with a built-in spelling or grammar checker, but alas, I found no evidence of this behavior.
 
It really just seems like researchers can’t (or won’t) read carefully anymore. Some authors failed to identify major relevant literature from the proceedings of the very conference they were submitting to. Others simply didn’t bother to add citations to work they described directly and referred to by name. Even some of the other conference reviewers spoke in platitudes rather than point out specific parts of the paper that exemplified the points they were making. (The submissions have line numbers! They are in red!!)

Most conferences have a policy against using AI in both submissions and reviews, which is a good thing for several unrelated ethical reasons. But if the authors can’t read or write, and the reviewers aren’t getting paid to proofread and edit these terrible drafts, then who’s flying the plane? Junior scholars are rarely formally trained in scientific writing, so a certain amount of this problem can be traced to gaps in access to mentorship and other resources, which is especially unfortunate. Still, I don’t exactly feel bad for gatekeeping publication status from researchers who can’t write — yet I don’t enjoy the process, either. Nobody wants to spend their time trying to parse nonsense cobbled together by strangers.

Maybe I’m being overly harsh. I empathize with everyone struggling with their reading practice in 2025. I took a long hiatus from reading, starting somewhere around my sophomore year of college (I majored in math) and lasting until, well, the summer after I finished my PhD in computer science. Yes, I had to “read” research papers and books to get through my degrees, but any academic will tell you that power-skimming for information on a deadline is a muscle that can be overtrained, ultimately weakening your ability to actually engage with a text — sometimes resulting in a negative emotional association with the activity of reading itself. This was something that took time and effort to recover from, and I still think I would probably fail a serious college English class, if only because it takes me too much time to get through any book that could even marginally be described as “difficult.”

Too many people have thus decided that literacy just isn’t a personal priority in the age of chatbots and audiobooks. I think this is going to be the first (and possibly only) “existential” threat that AI poses to society. The silver lining for me is that if reading comprehension really does become as rare and coveted a skill as I fear it will, maybe I’ll escape the 2020s with my job security intact. Anybody need a scribe?

AI won’t take over the (art) world

An aged photograph of a snail playing the kazoo. A landscape of a giant robot destroying Los Angeles, painted by Van Gogh. A Minecraft rendering of a guy riding a capybara. If you’ve been spending time on the Internet lately, you may have seen scenes like these depicted in shockingly realistic, algorithmically generated “art”—the handiwork of a multi-billion-parameter AI model aptly named DALL-E. Simply provide the machine with a description, and it will respond with a high-resolution image of whatever you dream up.

Continue reading “AI won’t take over the (art) world”

Please read these if you’re “doing” “explainable” “AI”

A lot has been said about the many recent attempts to make AI explainable, and over the last year or so I have made a good-faith effort to read all of it. Especially now that I’ve started my PhD program at the U (which is going swimmingly, thanks for asking) and get to think and talk about this stuff near-full-time, I have a much more coherent perspective on the issue and a lot to write about. As I work towards generating some takes of my own, however, I thought I would share a few of the highlights of my literature review.

Continue reading “Please read these if you’re “doing” “explainable” “AI””

Raw predictions considered harmful?

Last week I went to a really interesting talk by Bill Howe called “Raw Data Considered Harmful,” which presented a strong case for handing ML researchers and data scientists semisynthetic data in sensitive settings. Some of his work, currently under review, proposes methods for scrubbing raw data of “biases”: signals that are unwanted because they should not, or cannot, legitimately represent the relationships between variables in a dataset.

Continue reading “Raw predictions considered harmful?”

Explaining how to explain our underwriting model at the local data science meetup

I’m going to start posting professional news and writing snippets to this page. Here is an outdated update from October: I gave a meetup talk on how we at MassMutual are using additive feature attribution to make our algorithmic underwriting process more transparent.
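
If “additive feature attribution” means nothing to you, here is a minimal sketch of the idea using the open-source shap library and a toy scikit-learn model. To be clear, the model, features, and data below are made-up stand-ins, not our actual underwriting pipeline.

    # Toy illustration of additive feature attribution (SHAP-style).
    # Everything here is a stand-in, not the real underwriting model.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                  # toy applicant features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy underwriting outcome
    model = GradientBoostingClassifier().fit(X, y)

    # Each applicant's raw score decomposes into a shared base value plus
    # one additive contribution per feature -- that's the "additive" part.
    explainer = shap.TreeExplainer(model)
    contributions = explainer.shap_values(X[:1])[0]

    print("base value:           ", explainer.expected_value)
    print("feature contributions:", contributions)
    print("base + contributions: ", explainer.expected_value + contributions.sum())
    # The last number matches model.decision_function(X[:1]) up to floating point.

Summing exactly to the model’s output is what makes these attributions easy to communicate: every prediction comes with a complete, self-consistent breakdown.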

Continue reading “Explaining how to explain our underwriting model at the local data science meetup”