NCITE Leads Congressional Workshop on “Jailbroken” AI Risks
Lawmakers saw firsthand how uncensored AI models can generate dangerous, real-world attack guidance.
- published: 2026/04/24
- contact: NCITE Communications
- phone: 402.554.2972
- email: ncite@unomaha.edu
- search keywords:
- Congress
- uncensored AI
- abliterated
On April 22, the National Counterterrorism Innovation, Technology, and Education Center (NCITE) at the University of Nebraska at Omaha (UNO) took to Capitol Hill, giving members of Congress a live look at how “jailbroken” AI can be exploited, as reported in Politico. For more details on the science, read NCITE's brief on censored, uncensored, and abliterated large language models.
➡️ What’s new:
NCITE conducted a congressional workshop demonstrating how uncensored large language models (LLMs) can generate harmful outputs when safeguards are removed. The session featured live demonstrations led by NCITE researchers and students.
💡 Why it matters:
As uncensored AI tools rapidly become more accessible, so do the risks. Lawmakers are grappling with how to regulate emerging technologies that can be misused by – and even tailored to – bad actors.
🔎 Zoom in:
- Sam Hunter, NCITE senior scientist and director of academic research, and Joel Elson, director of IS&T research initiatives, led the session.
- NCITE students and Congress members prompted uncensored models with scenarios involving attacks, explosives, and other violent activity.
- The models responded with detailed answers – without the safety refusals built into mainstream AI platforms.
- The demonstration highlighted how easily these tools can be accessed and manipulated.
🎤 What they’re saying:
- Rep. Andrew Garbarino (R-N.Y.), Chair, House Homeland Security Committee, asked one large language model how to kidnap a member of Congress. “It spit out an answer in under three seconds … ways to find them, where to look for them … the best spots to do it,” he said.
- Rep. Andrew Ogles (R-Tenn.), Chair, Cyber Subcommittee: “What’s extraordinary about this presentation is how most of [the AI tools] are readily off-the-shelf and easy to access,” he said. “That just increases the probability that the wrong person gets their hands on this.”
- NCITE researcher Sam Hunter: “In my 20 years as an academic and practitioner, I don’t think I’ve had a more impactful experience than this trip. We had a chance as researchers to put science into practice and came through in that moment. I’m so proud of this team.”
- NCITE researcher Joel Elson: “This is how research that informs national security is supposed to work – government, academia, and industry coming together to address today’s emerging issues and solve complex challenges collaboratively.”
- NCITE students James Heldridge & Casey Witkowski: “Getting to share our research on these emerging threats with some of the highest-ranking officials who help protect our country was a really special experience, and one we feel lucky to have had as graduate students. We are so thankful to our faculty advisors and to the team at NCITE for trusting us to represent the Center and the work we do.”
🌎 The big picture:
This work reflects UNO’s commitment to pragmatic research and public impact – equipping policymakers with real-world insights to address national security challenges. Through NCITE, UNO continues to lead at the intersection of AI, security, and workforce development, advancing solutions that protect communities and inform policy.
⚡ What’s next:
NCITE will continue engaging federal partners and publishing research on emerging AI threats, including a recent brief on uncensored and “abliterated” models.