AI bill would crack down on deepfake distribution and protect whistleblowers

Congressman Ted Lieu (D-CA) speaks during the House Judiciary Committee at the Rayburn House Office Building in Washington, DC on February 11, 2026.
Nathan Posner | Anadolu | Getty Images
A new AI bill, first reported by CNBC, would crack down on deepfakes and non-consensual images and make it easier for whistleblowers to report AI-related concerns.
The bill was introduced by Rep. Ted Lieu, D-Calif., who leads the bipartisan House Task Force on AI along with Rep. Jay Obernolte, R-Calif. It was drafted based on the recommendations in the task force's report.
Lieu called the bill “a step forward” in an interview with CNBC.
“It’s not designed to be controversial,” he said. “This is based on bipartisan legislation that other members have introduced, as well as recommendations from the bipartisan House Artificial Intelligence Task Force. So we’re trying to get something done right now with this bill this term.”
Lieu’s bill avoids some of the thornier issues surrounding AI, including whether a federal standard should be created to override state AI laws and whether testing should be required for AI systems used in areas such as critical infrastructure and education.
The sweeping bill includes provisions that would protect whistleblowers who report AI security risks or violations, require the United States to join international organizations developing technical standards for AI, and establish a prize competition for groundbreaking AI research and development.
While Lieu’s bill has Obernolte’s support, the Republican is working on his own artificial intelligence package, which he hopes to release later this year. Like Lieu’s bill, Obernolte’s would build on the work of the bipartisan task force.