By Bradley Murray
The following passage is from Human Compatible by Stuart Russell. For those who haven’t read it, the book focuses on creating AI systems that work in “alignment” with human values (a premise with immediate difficulties, since if there is one thing human beings tend to disagree over, it is what is valuable; but we won’t get into that here).
[C]onsider how content-selection algorithms function on social media. They aren’t particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable. A more predictable user can be fed items that they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. (Possibly there is a category of articles that die-hard centrists are likely to click on, but it’s not easy to imagine what this category consists of.) Like any rational entity, the algorithm learns how to modify the state of its environment—in this case, the user’s mind—in order to maximize its own reward.
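To make the mechanism concrete, here is a toy simulation of the dynamic Russell describes. Everything in it is my own construction for illustration, not anything from the book: a tabular Q-learning agent whose only reward is clicks, acting on a simulated user whose one-dimensional political “position” drifts toward whatever content it is shown, and whose clicks are assumed to be most predictable at the extremes. The click model, drift rule, and all numbers are invented.

```python
import math
import random

# Toy model (my construction, not Russell's). Assumptions:
#  - users click items near their own position;
#  - click probability peaks higher at the extremes ("extreme users
#    are more predictable");
#  - each exposure nudges the user's position toward the item shown.

ITEMS = [-1.0, -0.5, 0.0, 0.5, 1.0]   # item "slant" on a left-right axis
DRIFT = 0.1                            # how much one exposure moves the user
N_BINS = 9                             # discretized user positions (the state)

def click_prob(user, item):
    closeness = math.exp(-4.0 * (user - item) ** 2)
    predictability = 0.4 + 0.5 * abs(user)   # 0.4 at the center, 0.9 at a pole
    return closeness * predictability

def step(user, item, rng):
    reward = 1.0 if rng.random() < click_prob(user, item) else 0.0
    user = user + DRIFT * (item - user)       # exposure shifts the user's mind
    return user, reward

def bin_of(user):
    return min(N_BINS - 1, int((user + 1.0) / 2.0 * N_BINS))

def train(episodes=2000, horizon=200, alpha=0.1, gamma=0.97, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(ITEMS) for _ in range(N_BINS)]
    for _ in range(episodes):
        user = rng.uniform(-0.2, 0.2)         # start with a near-centrist
        for _ in range(horizon):
            s = bin_of(user)
            a = rng.randrange(len(ITEMS)) if rng.random() < eps \
                else max(range(len(ITEMS)), key=lambda i: Q[s][i])
            user, r = step(user, ITEMS[a], rng)
            s2 = bin_of(user)
            # Standard Q-learning update: value includes *future* clicks,
            # so moving the user to a more predictable state pays off.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    return Q

Q = train()
# Greedy rollout: watch where the learned policy pushes a centrist user.
rng = random.Random(1)
user = 0.0
for t in range(60):
    a = max(range(len(ITEMS)), key=lambda i: Q[bin_of(user)][i])
    user, _ = step(user, ITEMS[a], rng)
print(f"user position after 60 greedy steps: {user:+.2f}")  # near an extreme
```

Because the agent’s value estimates account for future clicks, it learns that the highest-paying policy is not to serve the centrist user centrist content but to feed them progressively slanted items until they sit at a pole, where clicks are most predictable. Nothing in the reward function mentions the user’s beliefs; modifying them is simply instrumentally useful, which is exactly Russell’s point.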
It is frightening to think how susceptible we are to having our preferences modified by algorithms. But the deeper question this raises is whether the age of algorithms (or algomodernism) will bring a fundamental disregard for human autonomy and dignity. Under such conditions, Kantian ethics, with its insistence that persons be treated as ends in themselves and never merely as means, may come to seem a relic of the past.