GW to Co-Lead New $20 Million NSF Artificial Intelligence Institute


May 4, 2023

TRAILS: Trustworthy AI in Law & Society

The George Washington University is co-leading a multi-institutional effort supported by the National Science Foundation (NSF) that will develop new artificial intelligence (AI) technologies designed to promote trust and mitigate risks while simultaneously empowering and educating the public.

The NSF Institute for Trustworthy AI in Law & Society (TRAILS), announced on May 4, unites specialists in AI and machine learning with systems engineers, social scientists, legal scholars, educators, and public policy experts. The multidisciplinary team will work with impacted communities, private industry, and the federal government to determine how to evaluate trust in AI, how to develop technical solutions and processes for AI that can be trusted, and which policy models best create and sustain trust.

“The TRAILS Institute is a leading example of the world-class work GW faculty are doing at the intersection of technology and the social sciences, and it reflects the university’s commitment to innovation that advances knowledge, fuels the economy, and enhances equity and quality of life for all members of our global community,” said GW President Mark S. Wrighton.

Provost and Executive Vice President for Academic Affairs Christopher Alan Bracey echoed the president’s excitement about this new collaboration. “With engineers and computer scientists working side-by-side with legal scholars and social scientists in the policy capital of the world, TRAILS is poised to enhance the opportunities and address the risks of artificial intelligence and how it is being used in our society.”

Funded by a $20 million award from NSF, the new institute is expected to transform AI practice by encouraging innovations that foreground ethics, human rights, and input and feedback from communities whose voices have previously been marginalized.

John Lach, dean of the GW School of Engineering and Applied Science, said, “TRAILS embodies the ‘Engineering And…’ ethos of GW Engineering—working across disciplines and engaging with communities to leverage the power of engineering and computing for societal good.”

In addition to GW, TRAILS will include faculty members from the University of Maryland, including Hal Daumé III, the lead principal investigator for TRAILS; Morgan State University; and Cornell University, with additional support from the National Institute of Standards and Technology (NIST) and private sector organizations such as Arthur AI, Checkstep, FinRegLab, and Techstars.

The new institute recognizes that AI is currently at a crossroads. AI-infused systems have great potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems. Still, today’s systems are developed and deployed in a process that is opaque to the public. As a result, those most affected by the technology have little say in its development.

“I think it’s important to understand that AI can be the source of significant benefits and innovations to society, but can also cause a lot of harm,” said David Broniatowski, an associate professor of engineering management and systems engineering at GW and the lead principal investigator of TRAILS at GW. “Many of those harms are felt by people who are historically underrepresented because their concerns are not reflected in the design process.”

As an example, Broniatowski said, AI systems may be trained on datasets that reflect the values—and biases—of system designers or data labelers. People using the system may not be aware of those biases, instead believing the system’s output to be “objective.” If system output is not easily interpretable, the priorities encoded into the system may reflect values that do not align with the people who are using or otherwise being affected by those systems.

Given these conditions—and the fact that AI is increasingly being deployed in systems that are critical to society, such as mediating online communications, determining health care options, and offering guidelines in the criminal justice system—it has become urgent to ensure that people’s trust in AI systems matches those same systems’ level of trustworthiness.

TRAILS has identified four key research thrusts to promote the development of AI systems that can earn the public’s trust through broader participation in the AI ecosystem.

The first, known as participatory AI, advocates involving human stakeholders in the development, deployment, and use of these systems. It aims to create technology in a way that aligns with the values and interests of diverse groups of people rather than being controlled by a few experts or solely driven by profit.

The second research thrust will focus on developing advanced machine-learning algorithms that reflect the values and interests of the relevant stakeholders.

Broniatowski will lead the institute’s third research thrust: evaluating how people make sense of the AI systems that are developed, and the degree to which a system’s reliability, fairness, transparency, and accountability leads to appropriate levels of trust.

Susan Ariel Aaronson, a research professor of international affairs at GW, will use her expertise in data-driven change and international data governance to lead the institute’s fourth thrust of participatory governance and trust.

“There is no trust without participation and no accountability without participation—hence we believe in a participatory approach to AI at all levels from design to deployment,” said Aaronson.


In addition to engineering and international affairs, GW faculty from multiple schools and across disciplines will be involved in TRAILS.

“What GW brings to TRAILS is a reflection of the contributions a comprehensive university can make and the readiness of our university to partner with others to magnify the impact of our research and tackle some of the most complex issues of our time,” said Pam Norris, vice provost for research.

Morgan State University, Maryland’s preeminent public urban research university, will lead community-driven projects related to the interplay between AI and education, while Cornell University will advance efforts focused on how people interpret their use of AI.

Federal officials at NIST will collaborate with TRAILS in the development of meaningful measures, benchmarks, testbeds, and certification methods—particularly as they apply to important topics essential to trust and trustworthiness, such as safety, fairness, privacy, transparency, explainability, accountability, accuracy, and reliability.

“The ability to measure AI system trustworthiness and its impacts on individuals, communities, and society is limited,” U.S. Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio said. “TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them.”

This announcement is the latest in a series of federal grants establishing a cohort of National Artificial Intelligence Research Institutes. This recent investment in seven new AI institutes, totaling $140 million, follows two previous rounds of awards.

The NSF, in collaboration with government agencies and private sector leaders, has now invested close to half a billion dollars in the AI institutes ecosystem—an investment that expands a collaborative AI research network into almost every U.S. state.

“These institutes are driving breakthrough discoveries to achieve our country’s ambition of being at the forefront of the global AI revolution. This work would not be possible without our longstanding alliances with our academic partners, government agencies, industry leaders, and AI communities at large,” said NSF Director Sethuraman Panchanathan.


This research is supported by the National Science Foundation (Award IIS-2229885). The views expressed in this story do not necessarily reflect those of the NSF.