News stories understandably often focus on one instance of a particular ethics issue at a particular company. Here, I want us to step back and consider some of the broader trends and factors that have resulted in the types of issues we are seeing. These include our over-emphasis on metrics, the inherent design of many of the platforms, venture capital’s focus on hypergrowth, and more.
Hacking, I. (2016). Logic of statistical inference (Cambridge Philosophy Classics). Cambridge University Press.
“One of Ian Hacking’s earliest publications, this book showcases his early ideas on the central concepts and questions surrounding statistical reasoning. He explores the basic principles of statistical reasoning and tests them, both at a philosophical level and in terms of their practical consequences for statisticians. Presented in a fresh twenty-first-century series livery, and including a specially commissioned preface written by Jan-Willem Romijn, illuminating its enduring importance and relevance to philosophical enquiry, Hacking’s influential and original work has been revived for a new generation of readers.”
[potentially useful] Daston, L., & Galison, P. (2018). Objectivity.
“As Lorraine Daston and Peter Galison point out in their capacious and engaging study of the concept of scientific objectivity from the 17th century to the present day, the universal form is key to understanding how modern science moved from the study of curiosities, through the representations of perfect, notional specimens, to a concept of objectivity as responsibility for science.”
Pasting the article I mentioned in class from Ole Peters about the democratic domestic product:
On a phone keyboard on the train ride home, so I won’t wax too philosophical, but I do think that the fundamental insight of ergodicity economics is worth considering in the context of Goodhart’s law. Briefly, the idea is that the ensemble average (the average outcome across many parallel universes) is often different from the time average experienced by an individual. The ensemble average is generally much easier to calculate, which leads people to assume ergodicity (essentially, that ensemble and time averages are the same) when they shouldn’t. But if we think of institutional power as a force that concentrates resources from the many to the few, this also means that entrenched powers generally prefer ensemble metrics, because those metrics have a “next time, you could be the winner!” aspect that obscures the fact that most people aren’t winners. Essentially, ensemble metrics treat growth as if it were shared evenly, which flatters whoever actually captures a disproportionate share of it. I think I feel a blog post brewing on this, so I’d love any insights folks have.
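A minimal simulation sketch of that ensemble-vs-time gap, using the standard multiplicative coin-flip gamble from the ergodicity economics literature (the 1.5x/0.6x payoffs and everything else here are illustrative choices of mine, not anything from Peters’ post):

```python
import numpy as np

# Toy gamble: each round, wealth is multiplied by 1.5 on heads or 0.6 on tails.
# Ensemble average multiplier per round: 0.5*1.5 + 0.5*0.6 = 1.05  -> growth.
# Time-average multiplier per round:     sqrt(1.5 * 0.6)  ~= 0.95  -> decay.

rng = np.random.default_rng(0)
n_agents, n_rounds = 100_000, 100

wealth = np.ones(n_agents)
for _ in range(n_rounds):
    heads = rng.random(n_agents) < 0.5
    wealth *= np.where(heads, 1.5, 0.6)

# The mean (ensemble view) is propped up by a handful of lucky trajectories,
# while the typical agent (median) ends up far below the starting wealth of 1.
print(f"mean wealth (ensemble view):    {wealth.mean():.2f}")
print(f"median wealth (typical person): {np.median(wealth):.4f}")
```

Roughly the same contrast sits behind Peters’ “democratic domestic product”: GDP-style growth effectively weights people by their share of the total, whereas averaging each person’s individual growth rate tells you what happened to the typical person.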
A few thoughts on the political artifacts paper, which I found very interesting and am tempted to develop further in blog post form.
Here is one common way that the claim “Artifacts are neutral” tends to be understood:
1. Artifacts are strictly dependent on users.
2. As users [subjects of use], we are able to make judgments about how best to use them.
3. Therefore, artifacts are dependent on our own judgment.
I want to defend the proposition that artifacts are not neutral and that they are not merely dependent on human judgments. The reason is that artifacts possess features that act as built-in functions: they fulfill design-stage intentions and shape specific outcomes, both individual and collective. In other words, artifacts carry their own norms and, in turn, generate social effects independently of human judgments. Example: speed bumps are designed to reduce car speed in certain contexts and purposely function independently of human judgments. Speed bumps are intentional artifacts in their own right.
We also need to distinguish between levels of complexity among artifacts, and to think in particular about automated artifacts. A table is not the same as the YouTube recommendation algorithm, which is designed to be sticky, i.e., to increase the volume of use on the platform.
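To make the “built-in function” point concrete, here is a toy ranking sketch (entirely hypothetical, not YouTube’s actual system; all names and numbers are made up for illustration). The only thing that changes between the two rankers is the design-stage objective, yet what the artifact surfaces changes with it, independently of any individual user’s judgment:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # proxy for "stickiness"
    predicted_satisfaction: float   # an alternative design goal (0-1)

CANDIDATES = [
    Video("outrage compilation", 42.0, 0.3),
    Video("calm explainer", 9.0, 0.9),
    Video("hands-on tutorial", 14.0, 0.8),
]

def rank_for_stickiness(videos):
    # Design-stage intention: maximize time spent on the platform.
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

def rank_for_satisfaction(videos):
    # A different intention behind the same interface: maximize predicted satisfaction.
    return sorted(videos, key=lambda v: v.predicted_satisfaction, reverse=True)

# The user never chooses the objective; it is a property of the artifact itself.
print([v.title for v in rank_for_stickiness(CANDIDATES)])    # outrage compilation first
print([v.title for v in rank_for_satisfaction(CANDIDATES)])  # calm explainer first
```

The point is not the specific numbers but that the norm (what counts as a “good” recommendation) lives inside the artifact, fixed at design time.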