A lot has been said about the many recent attempts to make AI explainable, and over the last year or so I have made a good-faith effort to read all of it. Now that I get to think and talk about this stuff near-full-time since starting my PhD program at the U (which is going swimmingly, thanks for asking), I have a much more coherent perspective on the issue and a lot to write about. As I work toward generating some takes of my own, however, I thought I would share a few highlights from my literature review. Continue reading “Please read these if you’re “doing” “explainable” “AI””
While it’s virtually impossible to opt out of it at this point, I try to avoid automatic content recommendation whenever possible. This is partly because I have found these systems to be bad at what they do, for me specifically. Continue reading “I kinda hate recommender systems”
Last week I went to a really interesting talk by Bill Howe called “Raw Data Considered Harmful,” which presented a strong case for handing ML researchers and data scientists semisynthetic data in sensitive settings. Some of his work, currently under review, proposes methods for scrubbing raw data of “biases,” or signals that are unwanted because they should not or cannot be legitimate representations of the relationships between variables in a dataset.
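To make the idea of “scrubbing” an unwanted signal concrete, here is a minimal sketch of one common approach (not necessarily the method in the work under review): residualizing a feature against a sensitive attribute, so the scrubbed feature carries no linear trace of it. All variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical sensitive attribute and a feature that "leaks" it.
protected = rng.normal(size=n)
feature = 2.0 * protected + rng.normal(size=n)

# Fit a simple linear regression of the feature on the protected
# attribute, then keep only the residual: the part of the feature
# that the protected attribute cannot linearly explain.
slope, intercept = np.polyfit(protected, feature, 1)
scrubbed = feature - (slope * protected + intercept)

# OLS residuals are orthogonal to the regressor, so the scrubbed
# feature is (linearly) uncorrelated with the protected attribute.
print(np.corrcoef(feature, protected)[0, 1])   # strongly correlated
print(np.corrcoef(scrubbed, protected)[0, 1])  # approximately zero
```

Note this only removes the *linear* component of the signal; nonlinear dependence can survive residualization, which is part of why principled semisynthetic-data methods are an active research topic.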
I’m going to start posting to this page with professional news and writing snippets. Here is an outdated update from October: I gave a meetup talk on how we are using additive feature attribution to make our algorithmic underwriting process more transparent at MassMutual. Continue reading “Explaining how to explain our underwriting model at the local data science meetup”
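The core property of additive feature attribution can be shown with a toy example. For a linear model the attributions are exact: each feature gets a score of weight times its deviation from a baseline, and the scores sum to the model’s output change relative to that baseline. The weights, applicant, and baseline below are made-up illustrations, not anything from the actual underwriting model.

```python
import numpy as np

# A toy linear "underwriting score" (higher is better).
weights = np.array([0.5, -1.2, 2.0])
bias = 0.1

def score(x):
    return float(weights @ x + bias)

applicant = np.array([1.0, 0.0, 3.0])
baseline = np.array([0.5, 0.5, 0.5])  # e.g. an "average applicant" reference

# Additive attribution for a linear model: phi_i = w_i * (x_i - baseline_i).
attributions = weights * (applicant - baseline)

# The attributions account exactly for the gap between this applicant's
# score and the baseline score -- the defining additive property.
gap = score(applicant) - score(baseline)
print(attributions, attributions.sum(), gap)
```

For nonlinear models (gradient boosting, neural nets), methods like SHAP approximate the same additive decomposition rather than computing it exactly.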