On May 19, 2025, a Washington Post investigation revealed that the New Orleans Police Department (NOPD) had been using a real-time facial recognition system in secret for over a year. Through a network of more than 200 surveillance cameras managed by the nonprofit Project NOLA, officers received live alerts identifying individuals suspected of crimes, many of them nonviolent (Washington Post, 2025). This practice violated a 2022 city ordinance, New Orleans City Code § 147-2 (2022), which expressly limited the use of facial recognition to serious violent crimes and required written authorization and case-specific documentation before the technology could be used.
The New Orleans case is not an isolated lapse in policy compliance. It is a powerful example of what happens when advanced artificial intelligence tools outpace legal and ethical oversight. As law enforcement agencies nationwide embrace AI-powered tools like facial recognition in the name of efficiency and public safety, this incident lays bare how deployment without transparency or compliance can undermine civil liberties and erode public trust. For legal professionals, cybersecurity experts, and Chief AI Officers alike, the message is clear: unchecked surveillance systems constitute a governance crisis demanding urgent attention at the intersection of law, AI, privacy, and algorithmic accountability.
A Pattern of Deception: NOPD’s Covert Use of Facial Recognition Before 2022
The 2025 revelations about NOPD’s unlawful use of facial recognition are not an isolated breach—they are part of a documented pattern of deception and policy evasion that stretches back at least five years. As early as 2020, the New Orleans Police Department acknowledged that it was using facial recognition tools obtained through partnerships with state and federal agencies, including the FBI and Louisiana State Police. Yet this admission came only after years of deceptive public statements by city officials, who repeatedly assured the public that no such technology was in use.
On November 12, 2020, an investigation published by The Lens, a New Orleans news outlet, revealed that while the NOPD did not own facial recognition software, it had been quietly leveraging it through external partners, without disclosure, a governing policy, or oversight. According to The Lens, this went on for years and under two separate mayoral administrations. When the ACLU of Louisiana submitted a public records request that same year, the city responded that the police department did “not employ facial recognition software,” a statement later exposed as intentionally misleading. Although the city of New Orleans did not own the technology, the department was using it with the consent of other agencies. Further, NOPD spokespersons claimed that “employ” referred to ownership, not use, a distinction that critics, including the ACLU, rightly characterized as a deliberate attempt to deceive the public and evade scrutiny.
At the time, the NOPD had no records tracking the frequency, purpose, or outcomes of facial recognition use, no policy governing its deployment, and no audit mechanism in place. The Real Time Crime Center, a city-run surveillance hub, had a policy banning facial recognition, but that restriction explicitly did not apply to the NOPD, creating a loophole the department exploited. Even as the City Council debated banning surveillance tools in 2020, high-ranking officials, including the City’s Chief Technology Officer, denied on the record that the city had access to or used facial recognition, statements that were promptly contradicted by NOPD’s internal disclosures (The Lens, 2020).
Undermining Civil Liberties and Public Trust
This history of misuse and concealed operation underscores a critical point: the 2025 incident is not merely the result of lax compliance but reflects a longstanding culture of institutional deception. The NOPD’s pattern of circumventing public accountability, sidestepping oversight, and misleading both the City Council and the public reveals systemic governance failures that continue to undermine legal and democratic norms.
Further, NOPD’s covert use of facial recognition capabilities is a cautionary tale of how AI technologies, when implemented without transparency, legal compliance, or ethical safeguards, can undermine civil liberties and public trust. As lawyers, legal professionals, and CAIOs, we must recognize that unchecked surveillance systems are not just a technical issue; they are an AI governance crisis. This article examines the legal violations, constitutional concerns, and cybersecurity risks associated with this unauthorized use of AI and provides best practices for how organizations can responsibly deploy AI within the framework of the rule of law.
Legal Violations
1. Municipal Authority and Ordinance Violations
New Orleans City Code § 147-2 (2022) limits the use of facial recognition technology to investigations involving specific violent crimes (namely murder, rape, terrorism, and kidnapping) and mandates a written request from an investigating officer, probable cause documentation, supervisory approval, and case-specific justification.
The NOPD’s deployment of a live, automated alert system without any record of written requests or internal review arguably violated both the letter and spirit of the ordinance. Such behavior may constitute ultra vires agency action, opening the city to liability under state administrative law doctrines or to injunctive challenges by affected individuals or civil rights organizations. It also raises broader administrative law questions regarding local agency autonomy and the enforceability of municipal guardrails on emerging technologies.
2. Fourth Amendment Concerns: Warrantless, Persistent Surveillance
In Carpenter v. United States, 138 S. Ct. 2206 (2018), the Supreme Court held that the government’s warrantless collection of historical cell-site location information constituted a search under the Fourth Amendment, emphasizing the intrusive potential of continuous digital surveillance. The reasoning in Carpenter has since been extended to analogous technologies capable of tracking or identifying individuals in public. See, e.g., Leaders of a Beautiful Struggle v. Baltimore Police Dep’t, 2 F.4th 330, 342–43 (4th Cir. 2021) (en banc) (striking down aerial surveillance system for enabling persistent surveillance).
Further, in United States v. Jones, 565 U.S. 400 (2012), Justice Sotomayor’s concurrence warned that pervasive surveillance could erode constitutional protections, suggesting the Court may require a “mosaic theory” approach to assessing searches involving modern technologies. Id. at 416 (Sotomayor, J., concurring). Real-time facial recognition likely implicates these same concerns, as it enables undisclosed, suspicionless searches of individuals’ faces in public—without a warrant, individualized suspicion, or clear limitation in scope.
Courts have not yet definitively ruled on real-time facial recognition, but growing legal scholarship and advocacy point toward its classification as a search requiring heightened justification, especially where the technology is deployed continuously or without limitation. See Andrew Guthrie Ferguson, Facial Recognition and the Fourth Amendment, 105 Minn. L. Rev. 1105 (2021).
3. Section 1983 and Equal Protection Risks
The NOPD’s actions may also give rise to claims under 42 U.S.C. § 1983, which authorizes civil suits against state actors who deprive individuals of constitutional rights. Plaintiffs alleging unlawful arrest or surveillance based on misidentification by facial recognition could argue violations of their Fourth and Fourteenth Amendment rights. If these harms flowed from a policy or custom, municipal liability may attach under Monell v. Department of Social Services, 436 U.S. 658, 694 (1978).
Further, the racial and gender disparities associated with facial recognition software are well-documented. A comprehensive study by the National Institute of Standards and Technology found that the majority of facial recognition algorithms exhibit false positive rates up to 100 times higher for Black and Asian faces compared to white faces. See NIST Interagency Report 8280, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (Dec. 2019).
If plaintiffs can show that these disparities led to discriminatory policing outcomes or surveillance patterns, they may have a plausible Equal Protection Clause claim, particularly if the system was used disproportionately in communities of color. While proof of discriminatory intent remains challenging, disparate impact coupled with deliberate indifference to known risks may suffice under Village of Arlington Heights v. Metro. Hous. Dev. Corp., 429 U.S. 252, 265–66 (1977).
Cybersecurity and Governance Risks
The use of facial recognition technology in the New Orleans case not only raises legal red flags but also reveals serious deficiencies in cybersecurity and AI governance—two critical dimensions often overlooked in public-sector use of AI. Real-time biometric surveillance systems, like the one operated in partnership with Project NOLA, transmit highly sensitive data, such as facial vectors and GPS coordinates, across networks that may lack hardened security protocols or clear data retention policies. Without robust encryption, access controls, and audit logging, these systems are vulnerable to interception, misuse, or compromise by malicious actors.
Moreover, the decision to integrate AI-driven alerts into officers’ personal or department-issued devices introduces a new vector of cybersecurity risk. By pushing live identification data directly to individual law enforcement units without centralized logging or oversight, the NOPD created what is effectively a shadow AI system—a deployment outside formal governance and compliance frameworks. Such architectures often evade standard risk assessments and incident response protocols, creating gaps that adversaries or internal actors could exploit.
From a governance standpoint, this case illustrates a broader institutional failure: the absence of AI lifecycle management. No publicly available evidence indicates that the NOPD conducted a data protection impact assessment, tested for algorithmic bias, or established redress mechanisms for individuals wrongly flagged. These omissions run counter to emerging best practices and frameworks such as the NIST AI Risk Management Framework (2023), which emphasizes continuous monitoring, context-sensitive deployment, and public transparency. In the absence of such controls, even well-intentioned uses of AI can lead to rights violations, mission drift, and reputational harm. This is particularly true in law enforcement, where the stakes are high and public trust is fragile.
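To make the “shadow AI” concern concrete, the following is a minimal sketch, in Python, of how real-time alerts could be forced through a central, logged gateway rather than pushed straight to individual officers’ devices. Every name, field, and threshold here is a hypothetical assumption for illustration; nothing below describes Project NOLA’s or the NOPD’s actual systems.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical central gateway: every facial-recognition alert must pass
# through it before reaching any officer. The "authorized case registry,"
# field names, and thresholds are illustrative assumptions only.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("fr_alert_audit")


@dataclass
class MatchAlert:
    camera_id: str
    candidate_id: str        # internal identifier for the matched person
    confidence: float        # model similarity score, 0.0 to 1.0
    case_number: str | None  # investigation the alert is tied to, if any


AUTHORIZED_CASES = {"2025-HOM-0142"}   # cases with documented written approval
MIN_CONFIDENCE = 0.90                  # below this, alerts are suppressed


def route_alert(alert: MatchAlert) -> bool:
    """Log every alert centrally; forward it only if it is tied to an
    authorized case and meets the confidence threshold."""
    audit_log.info(
        "alert camera=%s candidate=%s conf=%.2f case=%s time=%s",
        alert.camera_id, alert.candidate_id, alert.confidence,
        alert.case_number, datetime.now(timezone.utc).isoformat(),
    )
    if alert.case_number not in AUTHORIZED_CASES:
        audit_log.warning("blocked: no authorized case on file")
        return False
    if alert.confidence < MIN_CONFIDENCE:
        audit_log.warning("blocked: confidence below threshold")
        return False
    # Only here would the alert go to a supervising officer for human
    # review, never pushed directly to field devices without a log entry.
    return True
```

The point is architectural rather than technical: every identification passes through a policy-aware, logged chokepoint, so the deployment cannot quietly drift outside governance and audit frameworks.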
What This Means for Legal Professionals and CAIOs
The unauthorized use of facial recognition by the New Orleans Police Department is a powerful reminder that legal, monitoring, and enforcement frameworks often lag behind technological capabilities. But this lag does not excuse institutions from their obligation to uphold constitutional rights, ensure transparency, and manage emerging technologies responsibly.
For law enforcement agencies, this means recognizing that the public’s trust in AI-enhanced policing depends not just on outcomes, but on process, oversight, and accountability. For lawyers and consultants, it means ensuring that AI tools are deployed within clear legal boundaries and that clients have governance structures robust enough to weather scrutiny from courts, regulators, and the communities they serve.
Safeguarding civil liberties in the age of AI will require enforceable policies, cross-disciplinary collaboration, and the courage to pause when compliance is uncertain. In the emerging AI age, the prevailing model is too often “deploy first, determine the risks later.” There is a premium on speed, but there should be an equal premium on taking the steps necessary to put quality AI governance into place. The peril has rarely been greater. The public is watching. Bad actors are watching. And the next misstep could result not just in litigation, but in a fundamental erosion of trust that may take years to rebuild, affecting law enforcement, government systems, and the broader adoption of beneficial AI technologies.
To support legal professionals and AI leaders in turning these principles into action, we offer the following Best Practices Checklist, grounded in established legal standards, ethical frameworks, and emerging risk management guidance.
Best Practices for Lawyers, Legal Professionals, and Chief AI Officers
1. Conduct Pre-Deployment Legal and Risk Assessments
Identify legal risks under constitutional, statutory, and municipal laws.
Analyze privacy and equity risks through data protection impact assessments (DPIAs).
Ensure AI tools are reviewed by legal counsel before operational use; a minimal sketch of such a pre-deployment gate follows this list.
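As one illustration of how these assessments might be operationalized, here is a minimal sketch of a pre-deployment gate that refuses to clear an AI tool for operational use until the required artifacts exist. The field names and the specific checklist are hypothetical assumptions, not a description of any agency’s actual process.

```python
from dataclasses import dataclass, field

# Hypothetical pre-deployment gate: the tool is cleared only when every
# required artifact is documented. Checklist items are illustrative.


@dataclass
class PreDeploymentAssessment:
    tool_name: str
    legal_review_completed: bool = False    # constitutional/statutory/municipal review by counsel
    dpia_completed: bool = False            # data protection impact assessment on file
    bias_testing_report: str | None = None  # reference to demographic error-rate testing
    approved_use_cases: list[str] = field(default_factory=list)
    redress_procedure: str | None = None    # documented path for wrongly flagged individuals

    def unmet_prerequisites(self) -> list[str]:
        gaps = []
        if not self.legal_review_completed:
            gaps.append("legal counsel review")
        if not self.dpia_completed:
            gaps.append("data protection impact assessment")
        if self.bias_testing_report is None:
            gaps.append("algorithmic bias testing")
        if not self.approved_use_cases:
            gaps.append("enumerated approved use cases")
        if self.redress_procedure is None:
            gaps.append("individual redress procedure")
        return gaps

    def cleared_for_deployment(self) -> bool:
        return not self.unmet_prerequisites()


assessment = PreDeploymentAssessment(tool_name="real-time facial recognition")
print(assessment.cleared_for_deployment())   # False until every gap is closed
print(assessment.unmet_prerequisites())
```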
2. Create and Enforce Clear AI Policies
Define approved use cases, prohibited functions, and procedural requirements.
Require documented justification and supervisory sign-off for high-risk applications.
Emphasize human-in-the-loop decision-making as the default; the sketch below illustrates one way these requirements could be encoded.
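The following sketch uses the categories of the New Orleans ordinance purely for illustration, showing what enforcing such a policy in software might look like. The request fields and function are hypothetical, not drawn from any real system.

```python
from dataclasses import dataclass

# Illustrative policy gate modeled loosely on New Orleans City Code
# sec. 147-2 (2022): enumerated violent crimes only, written justification,
# and supervisory sign-off before any search runs. Hypothetical data model.

APPROVED_OFFENSES = {"murder", "rape", "terrorism", "kidnapping"}


@dataclass
class SearchRequest:
    case_number: str
    offense: str
    written_justification: str
    requesting_officer: str
    approving_supervisor: str | None  # None until a supervisor signs off


def authorize_search(req: SearchRequest) -> tuple[bool, str]:
    """Return (approved, reason). A human analyst still reviews any match;
    this gate only decides whether the search may be run at all."""
    if req.offense.lower() not in APPROVED_OFFENSES:
        return False, f"offense '{req.offense}' is not an approved use case"
    if not req.written_justification.strip():
        return False, "written justification is missing"
    if req.approving_supervisor is None:
        return False, "supervisory sign-off is missing"
    return True, "approved: run search with human review of any candidate match"
```

The design choice worth noting is that the gate fails closed: a request missing any required element is rejected by default, with the reason recorded for later audit.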
3. Implement Strong Cybersecurity and Audit Protocols
Require data encryption, immutable logs, and granular access controls.
Maintain centralized audit trails of AI use, queries, and decision outcomes (one tamper-evident approach is sketched after this list).
Include AI risk in broader cybersecurity governance programs.
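One way to make audit trails tamper-evident is to chain log entries together cryptographically, so that any after-the-fact edit or deletion breaks the chain. The sketch below is a simplified illustration under that assumption, not a hardened production logging design.

```python
import hashlib
import json
from datetime import datetime, timezone

# Simplified tamper-evident audit trail: each entry embeds the hash of the
# previous entry, so retroactive edits or deletions become detectable.


class AuditTrail:
    def __init__(self):
        self.entries: list[dict] = []

    def record(self, actor: str, action: str, detail: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash and confirm the chain is unbroken."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            payload = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True


trail = AuditTrail()
trail.record("officer_123", "fr_query", "case 2025-HOM-0142")
print(trail.verify())  # True; altering any stored field would return False
```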
4. Ensure Transparency and Individual Redress
Publicly disclose AI tools in use, their purposes, and applicable safeguards.
Offer grievance procedures for individuals affected by erroneous or unfair decisions.
Provide meaningful human review and the ability to contest automated outcomes.
5. Govern Through Contracts and Procurement
Include AI-specific risk provisions in vendor contracts (e.g., indemnity, audit rights).
Demand disclosure of training data provenance and model performance metrics.
Require vendors to certify compliance with applicable laws and internal policies.
© 2025 Amy Swaner. All Rights Reserved. May use with attribution and link to article.
Without consequence, there is no motivation to stop.