The problem of algorithmic bias and military applications of AI.
Dr. Ingvild Bode argues that bias is as much a social as a technical problem; addressing it therefore requires going beyond technical solutions.
These arguments are based on the author’s presentation at the GGE side event “Fixing Gender Glitches in Military AI: Mitigating Unintended Biases and Tackling Risks”, organised by UNIDIR on 6 March 2024. Apart from some noteworthy exceptions, chiefly UNIDIR’s 2021 report “Does Military AI Have Gender?” as well as policy briefs published by the Observer Research Foundation and the Campaign to Stop Killer Robots, issues of bias have not been covered at length. The existing research is mostly technical in nature: it focuses on specific techniques that can be applied to machine learning models and facial recognition systems, such as rebalancing data, regularization, and the design of ‘fair’ algorithms through deeper analysis of risks and harms and (even) more rigorous testing and auditing.
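To make one of these technical mitigations concrete, the sketch below illustrates data rebalancing in its simplest form: oversampling under-represented classes until each class is as numerous as the largest one. This is a minimal illustration in plain Python, not any specific system described in the research above; the function name and toy data are invented for the example, and real pipelines use more careful methods (e.g. stratified sampling or synthetic oversampling).

```python
import random
from collections import Counter

def rebalance_by_oversampling(samples, labels, seed=0):
    """Naive rebalancing: duplicate randomly chosen examples from
    under-represented classes until every class matches the size of
    the largest class. Illustrative only."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    # Group samples by their class label.
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    out_x, out_y = [], []
    for y, xs in by_class.items():
        # Draw extra copies (with replacement) to reach the target size.
        extra = rng.choices(xs, k=target - len(xs))
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Toy dataset skewed 4:1 toward class "a" (hypothetical data).
X = ["s1", "s2", "s3", "s4", "s5"]
y = ["a", "a", "a", "a", "b"]
Xb, yb = rebalance_by_oversampling(X, y)
print(Counter(yb))  # each class now has 4 examples
```

As the article argues, such fixes address only the statistical symptom: a dataset rebalanced this way still inherits whatever social biases shaped its collection in the first place.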