Risk Insights
August 21, 2025

A New Era of U.S. Federal Technology Policy: Best Practices for Boards

By Chris Hetner, Senior Executive, Board Director, and Leader in Cybersecurity, former Senior Cybersecurity Advisor to the SEC Chair; Dominique Shelton Leipzig, Founder & CEO, Global Data Innovation; Steve Roycroft, CEO, RANE; Ali Plucinski, Cyber Analyst, RANE

As the Trump Administration embarks on a new term, sweeping changes to U.S. federal agencies and legislative priorities are underway. The White House has devoted particular attention to the technology space amid rapid advancements in the burgeoning artificial intelligence (AI) industry, advancements that present both opportunities and risks for organizations. Among the most consequential shifts are those affecting cybersecurity and AI, two domains critical to both national security and corporate governance.


Cybersecurity: Regulatory Retrenchment and Strategic Uncertainty

The Administration has signaled a deregulatory approach, targeting workforce reductions and streamlined oversight at agencies such as the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the Federal Bureau of Investigation (FBI).

While no formal revisions have been announced, experts suggest that key regulations could be altered or rolled back, including:

  • The Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), which mandates timely reporting of cyber incidents by critical infrastructure entities.
  • The Cybersecurity Maturity Model Certification (CMMC), a Department of Defense framework that sets cybersecurity requirements for defense contractors.
  • The U.S. Securities and Exchange Commission (SEC) cybersecurity disclosure rules, introduced under the Biden Administration, which require public companies to disclose material cybersecurity incidents and governance practices.

Boards should prepare for increased governance responsibility in the absence of strong federal oversight. Maintaining compliance with current regulations remains essential to avoid penalties and reputational harm.


AI: Deregulation Amid Rapid Innovation

On the AI front, President Trump has similarly moved to dismantle preexisting restrictions on developers and on organizations integrating AI applications. One of the Administration's first actions was rescinding the Biden-era executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which outlined regulatory mechanisms for safety reviews and mandated cybersecurity protocols.

Subsequent steps have further diluted government oversight and compliance requirements for the private sector, including a proposed ten-year moratorium on state-level AI laws in the recently passed "One Big Beautiful Bill," though that provision was ultimately stripped before passage. Looking ahead, there is uncertainty around how, or whether, the Trump Administration will seek to regulate the AI industry.

Meanwhile, innovation continues at a fast pace. In September 2024, OpenAI, the company behind the chatbot service ChatGPT, released o1, a model designed for step-by-step reasoning that enables greater agency and automation in user tasks. Leading competitors, including Anthropic, Alibaba, DeepSeek, and Google, have since released comparable reasoning models, offering organizations tools to accelerate and streamline research, clerical activities, brainstorming, and decision-making, among other capabilities. In April 2025, OpenAI released its o3 and o4-mini models, which can incorporate images directly into their reasoning, further expanding the frontier of AI functionality.


Governance Implications: Five Practices for Boards

As organizations increasingly adopt AI applications and navigate evolving cybersecurity requirements, it is important for boards to lead with strategic foresight. The following practices can help boards navigate legal, regulatory, and reputational risks:

  1. Monitor regulatory signals. Amid ongoing uncertainty around the short-term outlook for U.S. federal policy, stay alert to updates from federal agencies such as CISA, the Department of Defense, and the SEC. These entities will likely provide early indications of policy shifts.
  2. Maintain compliance. Until formal changes are enacted, organizations should stay the course with current federal requirements to avoid penalties for noncompliance. In the AI space, while the new Administration has signaled it is unlikely to introduce onerous requirements, organizations should remain mindful of other risks that can emerge from rapidly integrating AI applications into their operations, including the reputational harm that can follow the deployment of insecure or malfunctioning AI tools.
  3. Strengthen cross-functional oversight. AI and cybersecurity risks are not confined to IT departments. Boards should ensure close coordination across all impacted functions, including the C-suite, human resources, cybersecurity, and legal teams, to assess risks holistically and respond with agility.
  4. Enhance transparency and training. Organizations should consider whether to disseminate public disclosures to partners or consumers regarding how AI tools are used and personal data is handled. Internally, regular employee training is essential to build awareness of AI uses and risks.
  5. Take ownership of risk management. The bottom line: it is incumbent upon boards to proactively oversee AI and cyber threats, recognizing their potential to create business, operational, legal, and financial impacts.


Conclusion: Boardroom Leadership in a Time of Technological Flux

The convergence of cybersecurity and AI policy under a deregulatory agenda presents both opportunities and risks. As federal oversight recedes, the onus shifts to corporate leadership to ensure responsible innovation and risk management. Boards should lead with agility, accountability, and a commitment to safeguarding organizational integrity in this rapidly shifting environment.


To receive exclusive corporate governance insights for board members and leaders, join the Nasdaq Center for Board Excellence.

The views and opinions expressed herein are the views and opinions of the contributors and do not necessarily reflect those of Nasdaq, Inc.