Civil Rights in the Age of AI: New Jersey’s 2025 Algorithmic Discrimination Guidance
As we observe Black History Month, we honor the progress made in the pursuit of equality and justice under the law. The civil rights movement taught the legal community an enduring lesson: fairness in process is essential to fairness in outcome. While much of the early work of civil rights law focused on eliminating overt discrimination, today the legal system faces a new and subtler challenge: implicit bias embedded in the systems and technologies we increasingly rely upon, including artificial intelligence (AI). While New Jersey does not yet have a comprehensive AI statute in effect, recent laws and official guidance are shaping how bias in algorithmic systems is understood under state law.
In January 2025, the New Jersey Division on Civil Rights issued guidance on algorithmic discrimination under the New Jersey Law Against Discrimination (NJLAD), explaining that the statute prohibits “all forms of discrimination, irrespective of whether discriminatory conduct is facilitated by automated decision-making tools or driven by purely human practices.” The central message of the guidance is straightforward but significant: technology does not excuse discrimination. If an automated tool produces outcomes that discriminate on the basis of protected characteristics – such as race, color, religion, national origin, sex, gender identity, sexual orientation, disability, or pregnancy – liability may arise under the NJLAD just as it would if the decision had been made by a human being.
The guidance emphasizes that the NJLAD’s protections are broad and remedial. Importantly, discriminatory intent is not required. A covered entity may be liable where an algorithm causes either disparate treatment or disparate impact. This means even facially neutral tools – such as resume-screening software, tenant-screening platforms, risk-assessment systems, or credit evaluation programs – can violate the law if they disproportionately exclude or disadvantage protected groups without sufficient justification.
One of the most consequential aspects of the guidance is its position on responsibility. The guidance makes clear that businesses, employers, housing providers, and other covered entities cannot shift blame to third-party vendors. The fact that a tool was designed, marketed, or maintained by an outside technology company does not shield the user from liability. If a company chooses to deploy an automated decision-making system, it retains responsibility for ensuring that the system complies with New Jersey’s anti-discrimination laws.
The guidance reflects a broader civil rights principle: innovation must not come at the expense of equality. The NJLAD has historically been interpreted broadly to eradicate discrimination. By explicitly applying its protections to algorithmic systems, New Jersey reinforces that civil rights law evolves alongside technological change.
As we commemorate Black History Month, it is fitting to reflect on the lessons of civil rights pioneers who insisted that justice must be impartial. Their work challenged the legal community to confront prejudice in its starkest forms. Today, that challenge has evolved to include not only human prejudice but also implicit bias embedded in the tools we use. The 2025 guidance does not create new law; rather, it clarifies that longstanding anti-discrimination principles apply with equal force in the digital age. Under the NJLAD, discriminatory outcomes remain unlawful, whether produced by human judgment or by digital code. Civil rights protections remain constant, even as the tools that shape modern decision-making continue to evolve.
If you have any questions about the information in this post or if you would like to learn more on this topic, you can contact Franceska Osmann at fosmann@hoaglandlongo.com or 732-545-4717.