In December 2024, an 80-year-old Black man was wrongfully detained in the East Zone of São Paulo after being misidentified by a facial recognition system. The elderly man, a volunteer at a primary healthcare facility in Cidade Tiradentes, was flagged as the perpetrator of a crime he did not commit; the actual suspect was a white man. The mistake was acknowledged only after he had been held for ten hours, and he left the police station with no assurance that he would not go through the same ordeal again. The case illustrates the risk of an unregulated digital future, one that deepens racial inequality instead of mitigating it.
Technology used without regulation or social control can become yet another source of exclusion and violence. In countries marked by structural racism, such as Brazil, artificial intelligence systems applied to public security have reproduced historical inequalities, with direct impacts on Black people. Unjust arrests, violent police encounters, and public humiliation are only some of the consequences of biased algorithms. In this context, the promise of greater efficiency in public security becomes a sophisticated mechanism of selective criminalization, revealing how racism, far from having been overcome, continues to operate as a social technology that updates and adapts itself to new institutional tools.
During the panel “Artificial intelligence and digital justice for people of African descent,” held at the UN Permanent Forum on People of African Descent, lawyer Caroline Leal, litigation advisor at Conectas Direitos Humanos, stated that “technology perpetuates biases, errors, and frequent failures against Black people, who are subjected to violent police approaches, detentions, and illegal arrests.”
Leal pointed out that, in Brazil, facial recognition has been deployed without any robust regulation, expanding racial inequalities under the pretext of innovation.
In this scenario, 21st-century reparative justice must go beyond repairing historical trauma. It must also secure rights in the digital environment, protecting sensitive data, neutralizing discriminatory algorithms, and building infrastructures of digital equity. In concrete terms, this demands:
1 – Prohibition on contracting artificial intelligence systems for public security until they are duly regulated from a human rights perspective and proven free of racial and gender bias;
2 – Creation of social and judicial oversight mechanisms governing the use of these technologies;
3 – Active participation of affected populations in the formulation of public policies on technological innovation.