News - 01/10/2025

Lack of regulation and inequality exacerbate flaws in facial recognition in Brazil, says Nina da Hora

At the launch of the report De olho nos vigilantes (Eyes on the watchers), the computer scientist and director of the Instituto da Hora warned of the risks of surveillance and algorithmic racism

The Brazilian computer scientist, researcher, and activist Nina da Hora. Photo: Publicity


In a country where more than 90% of arrests carried out on the basis of facial recognition involve Black people, according to data from the Public Security Observatory Network, computer scientist and researcher Nina da Hora issued a warning: “In Brazil, the risks of facial recognition technology are heightened by the lack of regulation, historical racial inequality, and the opaque use of these systems by law enforcement agencies.”

Nina is a Brazilian computer scientist, researcher, and activist. She is the founder and director of the Instituto da Hora, which collaborated with Conectas on the publication of the report De olho nos Vigilantes: Combatendo a Propagação do Reconhecimento Facial na Segurança Pública. The study, produced by the International Network of Civil Liberties Organizations (INCLO), was originally published as “Eyes on the watchers: challenging the rise of police facial recognition”.

Nina is recognized for her work on ethics, social justice, and artificial intelligence. She has become a prominent voice in public debate by exposing the impacts of algorithmic racism and the dangers of surveillance without democratic controls. She believes the safest route in the short term is to adopt a moratorium on the police’s use of facial recognition technology. “Given the current inability to neutralize biases, the policy that would provide the greatest protection is to suspend its use in highly vulnerable environments, such as public transport and large events,” she said. Read the interview below:

Conectas: The INCLO report, now available in Portuguese, indicates that the risks of facial recognition outweigh its benefits. In your opinion, what are the most critical risks in Brazil today?

Nina da Hora: In Brazil, the risks identified in the INCLO report are amplified by three structural factors: lack of regulation, historical racial and social inequality, and the fragmented, opaque use of the technology by law enforcement agencies.

The risk of social and political surveillance: facial recognition technology has already been used to monitor supporters’ groups, protests, and even street carnival parades. This creates a chilling effect on the freedom to protest in a country with a strong tradition of social mobilization that is also marked by violent repression, as seen in 2013 and in marginalized communities.

The risk of criminalizing vulnerable communities: data shows that young Black people and residents of peripheral regions are the main targets of police interventions. Because facial recognition has a higher error rate for Black faces, it reinforces an already violent situation: false positives increase the risk of arbitrary arrest and even police lethality.

The institutional risk of unchecked expansion: states such as Rio de Janeiro and São Paulo have already tested facial recognition systems in subways, bus stations, and during carnival, often in partnership with private companies and without any public debate. The regulatory vacuum allows different security agencies to adopt different systems, with no standardization or oversight.

The democratic risk: in the context of recent political crises, facial recognition could become a tool for mass surveillance of opponents, journalists, and social movements. This is particularly serious in a country with a long history of using surveillance tools for authoritarian ends, from the military dictatorship to recent operations at protests.

Conectas: The principles outlined in the report call for transparency and social participation. How can these guidelines be implemented given the widespread and unregulated use of facial recognition technology in Brazil?

Nina da Hora: The challenge is putting these principles into practice in a context where surveillance has become normalized. Some realistic paths include:

A specific legal framework. Although Brazil’s General Data Protection Law (LGPD) covers personal and biometric data, it does not directly regulate facial recognition technology in policing. A bill is needed to either ban the technology or impose a moratorium on police use until clear technical, social, and legal safeguards are in place.

Legislative and judicial oversight. Congress and the Supreme Federal Court have key roles to play. The Court has previously ruled on issues concerning digital surveillance and could now be called upon to define constitutional limits for facial recognition technology, while the Legislative Branch could create commissions that include civil society participation.

Independent oversight bodies. Brazil already has internal affairs departments, police ombuds offices, and the National Data Protection Authority (ANPD). All of them could be granted the authority to audit facial recognition technology, require public reports, and impose sanctions.

Community participation. Community-led safety councils, Black movements, trade unions, and digital rights collectives should be included in decision-making spaces. Without this, the use of the technology will be controlled by the police and private suppliers alone.

Conectas: Recent cases have revealed errors and racial bias in the use of facial recognition technology. What strategies can be adopted to address algorithmic racism and prevent its normalization in security policies?

Nina da Hora: The Brazilian experience with racial inequality demands robust, targeted solutions:

A moratorium as a starting point. Given the current inability to neutralize biases, the policy that would provide the greatest protection is to suspend use, especially in highly vulnerable environments such as public transport and large events.

Mandatory independent auditing. All facial recognition technology should be tested by independent organizations, with error metrics published and broken down by race, gender, and age.

Strategic litigation and judicial oversight. Wrongful arrests based on facial recognition have already occurred in Brazil. The Public Defender’s Office and civil society organizations can bring these cases to court to build restrictive jurisprudence.

Addressing structural racism. Facial recognition technology intensifies selective law enforcement. Combating algorithmic racism is not only about improving systems but also about reducing discriminatory police practices, such as stopping people for no reason (“stop and frisk”), which could now be automated using facial recognition.

National data production. Universities and research centers should receive funding to map the racial impact of facial recognition in Brazil, producing evidence to inform public policies and judicial rulings.

Practical steps for Brazil. These include municipal and state bills banning facial recognition in public spaces, following examples in US and European cities; National Justice Council (CNJ) resolutions barring evidence based solely on facial recognition in judicial rulings or complaints; police protocols banning the use of real-time facial recognition for stops, under penalty of procedural nullity; and action by the Public Prosecutor’s Office and the Public Defender’s Office to monitor contracts with facial recognition companies and ensure accountability.
