The right not to be subjected to AI profiling based on publicly available data
Social media data hold considerable potential for predicting health-related conditions. Recent studies suggest that machine-learning models may accurately predict depression and other mental health-related conditions based on Instagram photos and Tweets. In this article, it is argued that individuals should have a sui generis right not to be subjected to AI profiling based on publicly available data without their explicit informed consent. The article (1) develops three basic arguments for a right to protection of personal data trading on the notions of social control and stigmatization, (2) argues that a number of features of AI profiling make individuals more exposed to social control and stigmatization than other types of data processing (the exceptionalism of AI profiling), (3) considers a series of other reasons for and against protecting individuals against AI profiling based on publicly available data, and finally (4) argues that the EU General Data Protection Regulation does not ensure that individuals have a right not to be AI profiled based on publicly available data.
Decision-makers, including the prime ministerial candidate, may withdraw from social media exchanges with the public, and in the longer run the threat of being AI profiled for all sorts of dispositions may deter people from engaging in politics altogether. Some would object that the provision of informed consent to data processing is highly routinized, i.e., provided as an unreflective, habitual act, and that the right not to be AI profiled may consequently lack any real power to protect individuals (Ploug & Holm, 2013, 2015). Finally, in the course of substantiating a sui generis right not to be AI profiled based on publicly available data, we have suggested several behavioural mechanisms and patterns that underlie social control and stigmatization, and provided relevant evidence to the best of our ability.