5 May 2021 | 2 pm CEST
After two rounds of web conferences and a steady growth of our audience we want to continue building on the work done so far. In the last three sessions we talked about feelings of insecurity at night, crime alerting applications and the use of drones in cities. As was the case for the first series of conferences, these sessions benefited greatly from the research done in the Cutting Crime Impact (CCI) Project, which aims to enable police and relevant local and national authorities to reduce the impact of petty crime and, where possible, prevent it from occurring in the first place. With the project entering its implementation phase, we now want to give the partners the opportunity to explain what they have learned and present the tools they have developed.
At the same time, we continue deepening our understanding and exploring new angles on topics already discussed, such as crime prediction through artificial intelligence and the impact of feelings of insecurity. We look forward to welcoming all of you to these conversations and will continue to provide additional resources for our members on the Efus network.
Session 2: How to ensure a fair and transparent use of AI technologies?
In previous sessions of our web conference series we discussed predictive policing and facial recognition. On the one hand, these artificial intelligence-based technologies offer an array of opportunities in the domain of urban security. Facial recognition software can support the search for missing people as well as the identification and tracking of criminals. Crime prediction software accelerates the processing and analysis of large amounts of data and can help guide security authorities in their daily operations.
On the other hand, we have to weigh the ethical, legal and social implications of the use of these surveillance technologies. Research conducted in the Cutting Crime Impact (CCI) project found that, in terms of prevention, the results of predictive policing software are a matter of debate. Some argue that the approach could lead to decision-making processes that are free from human bias, provided that data selection and quality are rigorous; otherwise, existing biases can be reinforced.* The question of representative databases is also a key issue for facial recognition: studies have found that error rates vary depending on gender and skin colour.** Other concerns are linked to data protection, the right to privacy, and the rights to free movement and association. In addition, cities have to think in terms of costs and benefits and evaluate whether such tools have a real impact on the prevention of crime.
While a number of European cities experiment with predictive policing and facial recognition, other cities and regions use different applications of AI-based technologies. The city of Amsterdam uses an algorithm to recognise keywords in complaints that residents submit to the municipality. In addition, the city is testing the use of cameras to monitor compliance with physical distancing rules in public spaces.
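To give a sense of what keyword recognition in resident complaints can look like in its simplest form, here is a minimal illustrative sketch. It is purely hypothetical and not Amsterdam's actual system: the keyword list, department names and routing logic are all assumptions made for illustration.

```python
# Hypothetical illustration of keyword-based complaint routing.
# The keywords and departments below are invented for this example;
# this is not the municipality's actual system.
KEYWORDS = {
    "noise": "public order",
    "litter": "waste management",
    "streetlight": "infrastructure",
}

def route_complaint(text: str) -> str:
    """Return the department matching the first recognised keyword."""
    lowered = text.lower()
    for keyword, department in KEYWORDS.items():
        if keyword in lowered:
            return department
    # No keyword matched: fall back to a general queue.
    return "general intake"

print(route_complaint("The streetlight on my block is broken"))
```

Real systems typically go well beyond literal string matching (for example, handling Dutch-language text, synonyms and misspellings), but the basic idea of mapping recognised terms to municipal services is the same.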
Whether for surveillance or for other safety and prevention purposes, how can cities ensure a fair and transparent use of artificial intelligence-based technologies? In this session we explore this question by reviewing the risks associated with these technologies and presenting existing safeguards and resources.
We will discuss, among other things, these questions:
→ Registration here
* Cutting Crime Impact factsheets on the state of the art of predictive policing and its ethical, legal and social implications.
** Tom Simonite, "Algorithms Struggle to Recognize Black Faces Equally", Wired, 2019. Available from: https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/