By Roma Dhanani
In an emerging digital society, the perpetuation of racial injustice is something we should be concerned about. Facial recognition systems misidentify people of color more often than white people, which can lead to wrongly identifying suspects in a crime. Ethnic profiling within policing can also influence automated systems to target neighborhoods that are predominantly home to people of color, or to target otherwise innocent people. Many of us have heard about these issues in the United States, but don’t realize that they hit much closer to home and are also an important topic of discussion right here in the Netherlands.
This project investigates digital technologies used in the Netherlands that raise concerns about racially biased outcomes. The technologies identified are predictive policing systems, such as the Crime Anticipation System (CAS), which is already implemented nationwide; automated risk-scoring algorithms such as SyRI; and facial recognition technologies used to target criminal suspects. Based on qualitative research, it was found that this problem mainly stems from systemic racism with deep historical roots, and that awareness and education within academia are an important step toward solving it. In order for Amsterdam (or any city, for that matter) to grow consciously as a smart city, and to be as inclusive and fair as possible, a workshop on racial bias within algorithms and AI is proposed for university students working in the field (future technologists, UX designers, user researchers, creative directors, etc.). The workshop aims to increase their awareness of racial bias, where it stems from and how it happens, and to provide solutions that reduce the risk of these outcomes. The goal, then, is to come one step closer to solving the problem of racial injustice being further perpetuated through new and emerging digital technologies.