Occupy algorithms: Will algorithms serve the 99%?
I recently spent two extremely enjoyable days at a conference called Governing Algorithms at NYU. The rather ambitious goal of the conference was to bring together researchers from more than a dozen fields to talk about algorithms. This count is probably an underestimate: I met anthropologists, historians, engineers, economists, sociologists, mathematicians, data scientists of all sorts, computer scientists, financial analysts, media and culture researchers, legal scholars and philosophers, just to name a few. The conference was extremely well attended, and everyone was united by a seemingly urgent desire to talk about algorithms. The breadth of the spectrum was quite new and challenging for me. I recall spending about an hour trying to learn the meaning of the phrase "socio-technical performativity of algorithms". I really thought I got it, but now I can't seem to remember. Oh well.
So, what was it that we ended up talking about? Due to the heterogeneity of the audience, I think the answer to this question varies greatly from one participant to another. Nevertheless, there were some common themes that emerged. I should say the title "governing algorithms" has two rather opposite meanings. One is that as of today algorithms govern much of our lives. I need not explain why. The other is that we wish to govern algorithms in the sense of being able to control, regulate and understand them. There is an obvious tension here.
The conference revolved around four related panel discussions. I was part of a panel discussion on algorithms in the finance sector together with the two legal scholars Frank Pasquale and Tal Zarsky. Frank started out by discussing the role that algorithms have come to play in the financial sector. To oversimplify his point, he argues that algorithms have been abused by the financial industry to justify arbitrary (often unfair) business practices and to evade regulation. Frank's discussion paper, The Emperor’s New Codes: Reputation and Search Algorithms in the Finance Sector, is a treasure trove of thought-provoking examples. Many of them focus on the evils of the credit score system. Frank traces the failure of the system largely to its lack of transparency. While this sounds like an intuitively compelling argument, Tal provided some excellent counterarguments in his response.
The financial industry claims that opacity is necessary: if its internal workings, such as the algorithms behind credit scoring, were exposed, the system would become vulnerable to manipulation. This point isn't unreasonable. Indeed, a "law" known as Goodhart's law states that:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
Goodhart's law has a natural interpretation in the context of machine learning. If we expose the statistical signal that a classifier (such as the credit score) relies on, the signal disappears and the classifier becomes useless. There's an interesting distinction here that is underappreciated in machine learning: the distinction between classifying humans as opposed to non-strategic objects. It seems fundamental to me. Evidently, Goodhart's law is not true when classifying, say, different types of rock. Statistical signal observed in training data translates to low classification error on new instances, as the theory would predict. For humans the situation is much more subtle. Humans will respond to a classifier strategically by trying to figure out how to achieve the best possible classification outcome within certain constraints, e.g., the amount of time and money they are willing to invest. This process is often called gaming. There are many fun real-world examples. I once read that the number of books in the parents' household is highly predictive of a student's success in high school. Unfortunately, it's a terrible idea to use this signal in a high-school entrance test. Books are cheap, and knowing that this attribute mattered, parents could easily buy lots of books in preparation for the test. The issue here is distinct from the usual game-theoretic notion of truthfulness. The issue of gaming persists even if "lying" or misreporting is just not an option. As a theoretical computer scientist I have to wonder if Goodhart's law is really a law (sort of like a theorem) or if, to the contrary, we can design machine learning algorithms that are provably robust to manipulation. My guess is that there is room for an interesting notion here. I elaborated on the problem a bit in my response "Occupy Algorithms: Will Algorithms Serve the 99%?"
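To make this concrete, here is a toy simulation of the books example (my own sketch; the feature, the numbers and the classifier are all made up for illustration). A classifier trained on the number of books in the household looks good on the original population, but once the rule is public and families cheaply inflate the feature, the signal collapses.

```python
# Toy illustration of Goodhart's law for classifiers: a feature that is
# predictive before deployment stops being predictive once people can
# cheaply manipulate it. All quantities here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Latent "preparedness" drives both high-school success and the number of
# books at home; books are merely a correlate, not a cause.
prep = rng.normal(size=n)
books = np.clip(50 + 30 * prep + rng.normal(0, 10, n), 0, None)
success = (prep + rng.normal(0, 0.5, n) > 0).astype(int)

clf = LogisticRegression().fit(books.reshape(-1, 1), success)
print("accuracy before gaming:", clf.score(books.reshape(-1, 1), success))

# Once the rule "more books => better score" is public, families buy books.
# The feature shifts, but preparedness (and hence success) does not.
gamed_books = books + rng.uniform(100, 200, n)
print("accuracy after gaming:", clf.score(gamed_books.reshape(-1, 1), success))
```

Nothing here settles whether robustness to gaming is achievable; it only illustrates that a correlate of the outcome is not the same as a cause of it, which is exactly why strategic behavior breaks the classifier.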
The general goal of my response was to discuss some of the work computer scientists have done to address the issues that Pasquale raises. This wasn't easy, as there hasn't been a lot of work that I am aware of, unfortunately. If you have any suggestions, I'd love to hear your comments. One paper that readily came to mind, though, is the wonderful result by Arora, Barak, Brunnermeier and Ge on the computational hardness of pricing financial derivatives. It was fun to read the paper again on this occasion. I think it's a classic. There have been many good blog posts on this work, for example, here. David Zuckerman proposed a way to circumvent the negative result of ABBG using ideas from pseudorandomness.
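For readers who haven't seen the paper, here is a toy numerical sketch of the economic effect at its heart (my own illustration, not the ABBG construction, and all parameters are invented). By packing its lemons into a few tranche-like derivatives instead of spreading them out, a seller lowers the total value of the derivatives even though the underlying pool of assets is identical. ABBG's point, roughly, is that a buyer who cannot tell lemons apart faces something like a planted dense subgraph problem when trying to detect this kind of tampering.

```python
# Toy sketch: tranche-style derivatives lose value when lemons are
# concentrated rather than spread out, even though the pool as a whole
# contains exactly the same number of lemons either way.
import numpy as np

rng = np.random.default_rng(1)
n_derivs, assets_per_deriv = 100, 10
threshold = 8        # a derivative pays 1 iff at least 8 of its 10 assets pay
p_good = 0.9         # a good asset pays with probability 0.9; a lemon never pays

def expected_payout(lemons_per_deriv, trials=200_000):
    """Monte Carlo estimate of the total expected payout of all derivatives,
    given how many lemons sit in each one."""
    total = 0.0
    for lemons in lemons_per_deriv:
        goods = assets_per_deriv - lemons
        paying = rng.binomial(goods, p_good, size=trials)
        total += np.mean(paying >= threshold)
    return total

# Honest seller: 100 lemons spread evenly, one per derivative.
honest = np.ones(n_derivs, dtype=int)

# Cheating seller: the same 100 lemons, but packed three apiece into 33
# derivatives (plus one left over); with only 7 good assets, each of those
# 33 derivatives can never reach the payout threshold.
cheating = np.zeros(n_derivs, dtype=int)
cheating[:33] = 3
cheating[33] = 1

print("total expected payout, honest: ", expected_payout(honest))
print("total expected payout, cheating:", expected_payout(cheating))
```

In this toy setup the cheating allocation is worth noticeably less to the buyer, even though both pools contain exactly the same hundred lemons.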
To return to the title of the post, there is growing mistrust about whether algorithms actually serve society. Some suspicion is justified. After all, algorithms are actively used to carry out and, what's worse, even justify questionable government and business practices. It might be prudent for computer scientists to recognize the negative trend and actively counteract it. A first step is to respond to the growing social challenges revolving around the use of algorithms. For me this was a lot of what Governing Algorithms was about.
There were three other exciting panels that I haven't mentioned at all. One panel discussion by Tarleton Gillespie, Kate Crawford and Martha Poon focused on what Tarleton called relevance algorithms. These are algorithms that explicitly or implicitly assign relevance or reputation to content and individuals. A classic example is search algorithms and the rankings they produce. Computer scientists, fortunately, have been studying models and dynamics of reputation and relevance; Jon Kleinberg's slides give several examples. The panel discussion took a somewhat different direction, though. A chief concern is that there really isn't such a thing as an objective notion of relevance. Relevance algorithms are tweaked and optimized by engineers at Google, Facebook, Twitter, and elsewhere. What might appear to be an objective or neutral search result implicitly incorporates hundreds of design choices made by individuals through a social process invisible to the end user. This point reminded me of the visionary paper "Bias in Computer Systems" by Friedman and Nissenbaum (amazingly, written in 1996!). Friedman and Nissenbaum argue that computer systems incorporate the biases of their designers. They suggest a taxonomy of bias and propose that minimizing bias should be a primary objective in the design of computer systems. Alas, the problems outlined in the article have hardly been resolved.
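The design-choices point is easy to demonstrate even on the most textbook of relevance algorithms. Here is a toy PageRank computation (my own sketch on a tiny invented graph): whether page A or page B ends up ranked higher depends entirely on the damping factor, a knob that some engineer has to set.

```python
# Toy PageRank sketch: in this tiny made-up graph, whether page "A" or
# page "B" ranks higher is decided by the damping factor alone.
import numpy as np

# A collects links from three pages directly; B is linked only by a hub H,
# which itself collects five links. A and B have no out-links (dangling).
pages = ["A", "B", "H", "L1", "L2", "L3", "L4", "L5", "M1", "M2", "M3"]
links = {"L1": ["H"], "L2": ["H"], "L3": ["H"], "L4": ["H"], "L5": ["H"],
         "H": ["B"],
         "M1": ["A"], "M2": ["A"], "M3": ["A"],
         "A": [], "B": []}

def pagerank(damping, iters=200):
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - damping) / n)
        for page, outs in links.items():
            if outs:                      # spread rank along out-links
                for out in outs:
                    new[idx[out]] += damping * r[idx[page]] / len(outs)
            else:                         # dangling page: spread uniformly
                new += damping * r[idx[page]] / n
        r = new
    return {p: r[idx[p]] for p in pages}

for damping in (0.3, 0.85):
    ranks = pagerank(damping)
    print(f"damping={damping}: A={ranks['A']:.4f}  B={ranks['B']:.4f}  "
          f"top page = {max(ranks, key=ranks.get)}")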
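```

With damping 0.3, A (many direct links) beats B; with 0.85, B (one link from a well-linked hub) wins. Neither answer is "the" objective ranking, which is precisely the kind of invisible design choice the panel was worried about.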
Yet another panel, by Daniel Neyland, Mike Ananny and Karrie Karahalios, discussed what it means for an algorithm to be ethical. Daniel discussed the question in the context of a large interdisciplinary effort to build a video surveillance system. Daniel was the in-house ethicist on the project. He found himself challenged to give actionable advice on how to build an ethical system. The question touches on several issues. One is the problem of fairness (in the sense of "non-discrimination"). What if the system misclassifies members of racial minorities? Indeed, face detection software is famously known to perform differently on individuals of different skin color. The question also raises the problem of privacy. If an image sequence is flagged for further scrutiny, what parts of the image should be hidden from the observer? Defining any meaningful privacy guarantee in the realm of image data is a difficult open problem (but check this out) well out of my comfort zone. Finally, there is an even more difficult question: who is to be held accountable for the actions performed by the video surveillance system? I have no idea how to approach this problem, but I admire the effort.
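On the fairness point, the most basic diagnostic one could ask of such a system is a comparison of error rates across groups. Here is a minimal sketch of that computation on entirely synthetic data, with a hypothetical detector that is simply less accurate on the smaller group; real audits are of course much harder than this.

```python
# Minimal per-group error audit on synthetic data. The groups, the labels
# and the "detector" are all made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.choice(["group_0", "group_1"], size=n, p=[0.8, 0.2])
label = rng.integers(0, 2, size=n)               # ground truth: face present?

# Hypothetical detector that errs more often on the minority group.
flip = np.where(group == "group_0", 0.05, 0.20)  # per-group error probability
pred = np.where(rng.random(n) < flip, 1 - label, label)

for g in ["group_0", "group_1"]:
    mask = group == g
    fnr = np.mean(pred[mask & (label == 1)] == 0)
    fpr = np.mean(pred[mask & (label == 0)] == 1)
    print(f"{g}: false negative rate = {fnr:.3f}, false positive rate = {fpr:.3f}")
```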
Further pointers
I highly recommend checking out the videos that are available of all talks and panel discussions. The audience was really fun and engaged, so it's well worth waiting for the questions at the end. For an even deeper look, check out the discussion papers. There's also another summary of the conference over here.
And if occupying algorithms isn't enough for you, don't forget Omer Reingold's post: Occupy Database.