Conectas Human Rights, which has over 20 years of experience in defending human rights, has been closely monitoring the impacts of Artificial Intelligence (AI) on the promotion and protection of rights in contemporary society. Our mission to protect and expand human rights drives us to actively and critically engage in charting a digital and technological future that is fair and equitable, and respects human dignity.
The growing use of AI in several different sectors, and, in Brazil, particularly in public security and the legal system, has been a central focus of our work. In 2025, alongside partner organizations, Conectas increased its international advocacy on this matter by submitting several contributions to UN calls for input. We express our profound concern over the unchecked and non-transparent use of Facial Recognition Technology (FRT), and over the fact that, in the absence of regulation, this technology reinforces discriminatory biases that can lead to wrongful criminalization, especially of Black people and other populations that are already vulnerable.
In 2026, we also submitted contributions to the Inter-American Commission on Human Rights (IACHR) in response to a call for input on the impact of artificial intelligence in the region. On that occasion, together with partner organizations, we also raised concerns about the effects of this technology on the integrity of information and, in particular, on the participation of historically marginalized groups in public and political arenas.
We emphasize the urgent need for regulations governing the use and development of AI that prioritize transparency, public oversight, and respect for fundamental rights.
Alongside partner organizations such as CESeC, LAPIN, Mothers of May, Intervozes and Instituto da Hora, we alerted the Human Rights Council and several Special Rapporteurs to the proliferation of FRT systems in Brazil, with over 400 active projects. Our main concern is that these systems are not neutral. They embed racial bias inherent in the datasets on which they are trained and operate, resulting in disproportionate criminalization and vulnerability for certain groups—particularly those who are already the primary victims of police killings and of mass incarceration in the country, such as Black and transgender people. The lack of data on police stops and arrests based on these technologies only serves to deepen our concern.
In light of these challenges, we call for a ban on the use of facial recognition in public security. However, as its use is already widespread, we urge the legislative branch to, at least, impose basic safeguards for rights in the implementation of these systems, including provisions in Bill No 2338/2023, which is currently under consideration in Congress. “Machine errors” are not merely technical failures; they stem from biases that lead to wrongful arrests and undue harm, as seen in troubling cases of people being wrongly identified in public spaces, such as football stadiums and other cultural events, as well as in health centers.
Our work also extends to analyzing the risks of using AI in the judicial system. We recognize some important advances, such as Resolution No 615 by the National Council of Justice (CNJ), which establishes guidelines and bans the use of AI to build criminal profiles or predict recidivism. However, we draw attention to significant regulatory gaps that still exist, allowing subnational governments and private businesses to adopt this technology and cross-reference data opaquely. The integration of surveillance systems such as “Muralha Paulista” and “Smart Sampa”, without proper impact assessments and public consultations, is a worrying example of the unchecked expansion of a model of technological surveillance.
As an active member of the International Network of Civil Liberties Organizations (INCLO), a global network of 17 human rights and civil liberties organizations, Conectas also contributed to the 2025 position paper of the United Nations Special Rapporteur on the human rights impacts of the use of artificial intelligence (AI) in counter-terrorism, which includes guidelines on good practices. This contribution was based on exhaustive research by members of INCLO, translated into Portuguese by Conectas and Instituto da Hora: “Eyes on the Watchers: Challenging the Rise of Police Facial Recognition”. Together with INCLO, Conectas has been advocating for principles calling for a ban on FRT in public security. The report “Eyes on the Watchers” is a cornerstone of our argument, outlining the systemic failures, biases, and “toxicity” of facial recognition. It stresses that, if a ban is unviable, there should at least be a clear legal basis, mandatory impact assessments, public consultation, and the creation of an independent oversight body. We reiterate that FRT results should never be the sole basis for any police action, and that transparency about how these systems work, their training data, and their error rates is crucial to ensuring justice.
Together with CESeC and Aláfia Lab, we submitted a contribution to the Special Rapporteur for Freedom of Expression of the IACHR, in which—as well as highlighting the challenges posed by the increasing unchecked deployment of FRT systems in Brazilian public security—we also pointed to the threats that generative AI poses to the integrity of information and to democracy, undermining fair elections by facilitating phenomena such as gender- and race-based political violence.
In the material submitted to the Rapporteur, we showed how the spread of these systems could increase the risk of historically vulnerable groups, such as women and children, being revictimized. Moreover, we drew attention to the increasingly alarming impact of artificial intelligence tools on the integrity of information, given their expanding capacity to produce disinformation campaigns, perpetuate stereotypes, reinforce social inequality, and restrict access to resources and opportunities.
This is a brief overview of our actions and contributions to the debate on the use of artificial intelligence. Conectas will continue to be mindful of the challenges and to advocate for a legal framework for artificial intelligence in Brazil that embraces a rights-based approach. Our aim is to ensure that technological innovation does not deepen exclusion, discrimination, and lack of transparency in the pursuit of justice and the full exercise of citizenship.
Only through sustained coordination and strategic advocacy can we continue to protect human rights in the face of rapid technological evolution and its potentially harmful effects. We reaffirm our commitment to a digital future in which technology is a tool to strengthen, rather than undermine, our dignity and our fundamental rights.