By Chris Hetner, Senior Executive, Board Director, and Leader in Cybersecurity, Former SEC Chair Senior Cybersecurity Advisor; Dominique Shelton Leipzig, Founder & CEO, Global Data Innovation; Steve Roycroft, CEO of RANE; Ali Plucinski, Cyber Analyst, RANE
The White House has devoted particular attention to the technology sector amid rapid advancements in the artificial intelligence (AI) industry, which present both opportunities and risks for organizations. As the Trump Administration embarks on a new term, sweeping changes to U.S. federal agencies and legislative priorities are underway. Among the most consequential shifts are those affecting cybersecurity and AI, two domains critical to national security and corporate governance.
The Administration has signaled a deregulatory approach, targeting workforce reductions and streamlined oversight across agencies like the Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and Federal Bureau of Investigation (FBI).
While no formal revisions have been announced, experts suggest that key cybersecurity regulations could be impacted or altered.
Boards should prepare for increased governance responsibility in the absence of strong federal oversight. Maintaining compliance with current regulations remains essential to avoid penalties and reputational harm.
On the AI front, President Trump has similarly looked to dismantle preexisting restrictions for developers and organizations integrating AI applications. One of the Administration’s first actions was rescinding the Biden-era executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which outlined regulatory mechanisms for safety reviews and mandated cybersecurity protocols.
Subsequent proposals have sought to further dilute government oversight and compliance requirements for the private sector, including a ten-year moratorium on state-level AI laws considered for the recently passed "One Big Beautiful Bill," though that provision was ultimately stripped before passage. Looking ahead, there is uncertainty around how, or whether, the Trump Administration will seek to regulate the AI industry.
Meanwhile, innovation continues at a fast pace. In September 2024, OpenAI, the company behind the chatbot service ChatGPT, released a model with enhanced reasoning capabilities, enabling greater agency and automation in user tasks. Such reasoning and agent models have since been replicated by leading companies including Anthropic, Alibaba, DeepSeek, and Google, offering organizations tools to accelerate and streamline research, clerical activities, brainstorming, and decision-making, among other capabilities. In April 2025, OpenAI released its o3 and o4-mini models, which feature sophisticated image-based reasoning and search capabilities, further expanding the frontier of AI functionality.
As organizations increasingly adopt AI applications and navigate evolving cybersecurity requirements, boards must lead with strategic foresight. Several practices can help boards navigate legal, regulatory, and reputational risks.
The convergence of cybersecurity and AI policy under a deregulatory agenda presents both opportunities and risks. As federal oversight recedes, the onus shifts to corporate leadership to ensure responsible innovation and risk management. Boards should lead with agility, accountability, and a commitment to safeguarding organizational integrity in this rapidly shifting environment.
To receive exclusive corporate governance insights for board members and leaders, join the Nasdaq Center for Board Excellence.
The views and opinions expressed herein are the views and opinions of the contributors and do not necessarily reflect those of Nasdaq, Inc.