Christian Fröhlich & Robert C. Williamson (2024)
ACM Digital Library, Open access
Abstract. We argue that insurance can act as an analogon for the social situatedness of machine learning systems, hence allowing machine learning scholars to take insights from the rich and interdisciplinary insurance literature. Tracing the interaction of uncertainty, fairness and responsibility in insurance provides a fresh perspective on fairness in machine learning. We link insurance fairness conceptions to their machine learning relatives, and use this bridge to problematize fairness as calibration. In this process, we bring to the forefront two themes that have been largely overlooked in the machine learning literature: responsibility and aggregate-individual tensions.
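For readers unfamiliar with the term, "fairness as calibration" in the abstract refers to the requirement that, among individuals who receive a risk score s, a fraction s of them actually experiences the outcome, within every group. The following minimal Python sketch (our own illustration with synthetic data and hypothetical names, not code from the paper) shows how such group-wise calibration can be checked:

```python
import numpy as np

def groupwise_calibration(scores, outcomes, groups, n_bins=10):
    """Return (mean predicted score, observed positive rate) per
    score bin and group. A scorer is group-wise calibrated when the
    two quantities agree in every bin of every group."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    report = {}
    for g in np.unique(groups):
        s, y = scores[groups == g], outcomes[groups == g]
        idx = np.digitize(s, bins[1:-1])  # bin index in 0..n_bins-1
        report[g] = [(s[idx == b].mean(), y[idx == b].mean())
                     for b in range(n_bins) if (idx == b).any()]
    return report

# Synthetic data: group 1's true risk is 0.15 higher than its score
# claims, so the scorer is miscalibrated for that group.
rng = np.random.default_rng(0)
scores = rng.uniform(size=2000)
groups = rng.integers(0, 2, size=2000)
true_risk = np.clip(scores + 0.15 * (groups == 1), 0.0, 1.0)
outcomes = rng.binomial(1, true_risk)

for g, rates in groupwise_calibration(scores, outcomes, groups).items():
    gap = max(abs(pred - obs) for pred, obs in rates)
    print(f"group {g}: max |predicted - observed| = {gap:.2f}")
```

On the toy data the printed gap is small for group 0 and close to 0.15 for group 1. Note that calibration is checked within bins, i.e. on aggregates of individuals; this group-level character is one entry point into the aggregate-individual tensions the paper foregrounds.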
Extract: “The performativity of insurance and machine learning becomes especially relevant due to ethical implications. Many scholars have argued, providing insightful examples, that insurance is fundamentally a normative technology, depending on causality, control and responsibility. Doing insurance or machine learning involves enacting certain realities and suppressing others, as we have sketched in Section 3.2. For instance, in the process of collecting data, only some features are considered, and others neglected. Expanding on this, a performativity perspective would emphasize that there is no objective data ‘collection’ process, that quantification and categorization require significant and ongoing work; such work may be influenced by implicit normative judgements, which become ingrained and hidden in the ‘representation’. There is now a vibrant, if still nascent, research field on the sociology of quantification (including categorization), owing much to the seminal work of Desrosières; for overviews of this field see […], where the reader finds plenty of evidence for such work. Central in this research field is again performativity, or what has been called the constitutive potential of quantification. As a noteworthy example, it has been demonstrated that the census, through the introduction of statistical categories, can contribute to the establishment of a collective identity among the individuals it aims to describe [20, 86, 110]. Thus, a category that was initially intended to merely represent acquires performativity by actively shaping the formation of this particular group. We propose that insurance can act as a model for the performativity of statistical, “calculative devices” that arise from their social situatedness.”
OpenEdition suggests citing this post as follows:
GdL (15 July 2024). Insights From Insurance for Fair Machine Learning. Économie des conventions. Accessed 10 December 2024 at https://doi.org/10.58079/120l4