Picture this.
You're driving cross-country. In one state, the speed limit is 65. Cross the border and it's 80. A few miles later, there's a toll road with cameras tracking every move. Same car. Same road. Completely different rules.
That's where AI governance is right now.
And if you're building, buying, or using AI inside an organization, whether in the corporate or government sector, this isn't abstract policy talk. This is scope, risk, and stakeholder management at a national scale.
The Patchwork Quilt: A Familiar Problem for Project Leaders
The United States does not have a single, comprehensive federal AI law comparable to the EU AI Act. Instead, states have stepped in to fill the gap, creating a fragmented regulatory environment that varies significantly by jurisdiction.
Consider the current landscape:
- Colorado passed the first comprehensive "high-risk" AI law (SB 24-205), focusing on preventing algorithmic discrimination in consequential decisions.
- California is advancing legislation requiring safety testing and transparency for the largest "frontier" AI models.
- New York and Illinois have enacted regulations specifically addressing how AI affects hiring, employment screening, and labor practices.

From a project management perspective, this fragmentation looks familiar.
When no approved enterprise standard exists, teams create local solutions, not because they want chaos, but because the work must continue. AI is already embedded in hiring systems, insurance underwriting models, benefits determinations, and government workflows. It is not a "future deliverable." It is already in production.
For organizations operating across multiple states, this patchwork creates immediate operational complexity. Companies that cannot geofence users by jurisdiction face practical difficulties complying with multiple regulatory regimes simultaneously.
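To make the operational problem concrete, here is a minimal sketch of a per-jurisdiction requirements lookup. The state codes and requirement flags are illustrative assumptions, not a statement of current law; the point is that a deployment that cannot geofence users must satisfy the union of every jurisdiction's obligations.

```python
# Hypothetical per-state compliance flags (illustrative only, not legal advice).
JURISDICTION_RULES = {
    "CO": {"impact_assessment": True, "consumer_notice": True},   # e.g. SB 24-205-style duties
    "IL": {"impact_assessment": False, "consumer_notice": True},  # e.g. employment-screening notice
    "TX": {"impact_assessment": False, "consumer_notice": False}, # no comparable statute assumed
}

def combined_obligations(states):
    """Strictest-common-denominator: merge obligations across all states
    the system reaches, since users cannot be separated by jurisdiction."""
    merged = {}
    for state in states:
        for rule, required in JURISDICTION_RULES.get(state, {}).items():
            merged[rule] = merged.get(rule, False) or required
    return merged

print(combined_obligations(["CO", "IL", "TX"]))
# With the illustrative table above, both flags come back True
```

The design choice mirrors what multi-state operators do in practice: when segmentation is impossible, the most demanding requirement in any reachable jurisdiction becomes the de facto enterprise standard.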
Federal Governance: The "Soft" Approach
While Congress has not passed formal federal AI legislation, the executive branch has been active in setting direction through guidance and policy mechanisms.
Key federal governance instruments include:
| Instrument | Description |
|---|---|
| Executive Order 14110 | Establishes federal priorities for AI safety, security, and trustworthiness |
| NIST AI Risk Management Framework (AI RMF) | Provides voluntary best practices that are becoming a de facto industry standard |
| OMB Memoranda | Directs federal agencies on AI procurement, use, and oversight requirements |
| White House Blueprint for an AI Bill of Rights | Outlines principles for protecting civil rights in AI deployment (non-binding) |
The executive branch cannot override state laws. Instead, federal agencies are leveraging familiar governance levers:
- Procurement power: Tying federal contracts and funding to compliance with AI governance standards, functioning as an effective "change request" mechanism
- Enforcement authority: The DOJ and FTC are using existing civil rights and consumer protection laws to audit AI systems for bias, discrimination, and harm
- Standard-setting influence: The NIST AI RMF is increasingly referenced in state legislation, procurement requirements, and industry certifications
For project managers and executives who have worked on initiatives with multiple sponsors, this pattern is recognizable: influence without direct authority, escalation without a clean decision tree.

States Are Not Waiting for Federal Alignment
Several states have made their position clear: they are not waiting for a federal "green light."
From their perspective, AI is already influencing critical outcomes: who receives a loan, who is flagged by law enforcement algorithms, who advances in a hiring process. Waiting for federal alignment could take years, and critical-path risks are occurring now.
This creates the conditions every project manager dreads:
- Work in progress across 50 jurisdictions with varying requirements and timelines
- Competing priorities between innovation velocity and safety/compliance obligations
- Overlapping authority between state attorneys general and federal agencies
In project management terms, organizations are executing while the governance plan remains in draft mode.
Why This Is a Portfolio Management Issue
This regulatory fragmentation is not merely a policy concern. It is a portfolio management issue with direct implications for organizational risk, resource allocation, and strategic planning.
For organizations deploying AI systems, the following risks apply:
| Risk Category | Impact |
|---|---|
| Requirements volatility | What was compliant in Q1 may constitute a violation by Q4 as new regulations take effect |
| Assumption fragility | Building on opaque "black box" models today may require expensive rework if transparency requirements expand |
| Legal technical debt | Ignoring state-level nuances does not eliminate risk; it accumulates liability that compounds over time |
| Reputational exposure | Public trust failures in AI systems can damage organizational credibility across all operations |
Organizations that struggle will be those treating AI as a simple tool rollout. Those that succeed will treat AI deployment as a complex system with legal, ethical, and human dimensions requiring sustained governance attention.
That is not a technology failure. That is a governance failure.

The Strategic Advantage: Governance as a Project Discipline
Here is the opportunity most organizations are missing.
Organizations that treat AI governance like a project discipline, with documentation, decision logs, risk reviews, and stakeholder alignment, will not be scrambling when federal legislation eventually reaches final passage.
They will already have:
- Clear assumptions documented and version-controlled
- Traceable decisions with rationale and approval records
- Built-in transparency that satisfies audit and public accountability requirements
- Risk registers that capture regulatory dependencies and trigger points
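The last bullet, a risk register with regulatory trigger points, can be sketched as a simple record type. The field names and dates below are assumptions for illustration, not a prescribed schema or actual statutory deadlines.

```python
# Minimal risk-register entry capturing a regulatory dependency and its
# trigger point. Schema and dates are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegulatoryRisk:
    risk_id: str
    description: str
    jurisdiction: str
    trigger: str                 # the event that activates the obligation
    effective_date: date
    owner: str
    mitigations: list = field(default_factory=list)

    def is_active(self, today: date) -> bool:
        """The risk becomes active once the statute's effective date passes."""
        return today >= self.effective_date

risk = RegulatoryRisk(
    risk_id="REG-007",
    description="High-risk AI duties apply to our underwriting model",
    jurisdiction="CO",
    trigger="Statute effective date reached while model is in production",
    effective_date=date(2026, 6, 30),  # placeholder date, not a real deadline
    owner="AI Governance Board",
    mitigations=["Complete impact assessment", "Stand up appeal workflow"],
)
print(risk.is_active(date(2026, 7, 1)))  # True once the date has passed
```

Tying each register entry to a concrete trigger date is what lets a portfolio review surface "compliant in Q1, violation by Q4" items before they flip.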
For government agencies, this discipline directly supports public trust and accountability obligations. For corporate entities, it reduces legal exposure and positions the organization favorably for federal contract opportunities.
Governance done early is not bureaucracy. It is control.
And every experienced project leader knows this truth: it is always cheaper to plan than to rework.
Recommended Actions for Leaders
Organizations should consider the following governance practices:
- Conduct comprehensive AI risk assessments to identify potential regulatory, ethical, and operational risks across all AI-enabled systems
- Implement robust data governance ensuring accuracy, completeness, security, and traceability of training data and model outputs
- Prioritize transparency and explainability in AI model selection, development, and deployment decisions
- Establish AI ethics review processes integrated into project approval and change management workflows
- Monitor regulatory developments at federal, state, and industry levels through structured environmental scanning
- Engage with policymakers and industry associations to help shape emerging regulations and standards
- Document governance decisions in auditable formats that support future compliance demonstrations
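The final practice, documenting decisions in auditable formats, can be approached as an append-only log. The entry fields below are hypothetical, and hash-chaining is one possible tamper-evidence technique rather than a mandated standard; it simply makes after-the-fact edits detectable during a compliance review.

```python
# Sketch: an append-only governance decision log with a hash chain.
# Entry schema is an assumption for illustration.
import hashlib
import json

def append_decision(log, decision):
    """Append a decision record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {"decision": decision, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for e in log:
        payload = json.dumps(
            {"decision": e["decision"], "prev_hash": prev}, sort_keys=True
        ).encode()
        if hashlib.sha256(payload).hexdigest() != e["entry_hash"]:
            return False
        prev = e["entry_hash"]
    return True

log = []
append_decision(log, {"id": "D-041", "approver": "CIO",
                      "summary": "Approved vendor model with transparency rider"})
append_decision(log, {"id": "D-042", "approver": "AI Ethics Board",
                      "summary": "Deferred CO deployment pending impact assessment"})
print(verify(log))  # True while the log is untampered
```

Even a plain spreadsheet with rationale and approval columns delivers most of the value; the point is that each decision is recorded at the time it is made, not reconstructed when an auditor asks.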
For organizations seeking structured guidance on integrating governance disciplines into project and portfolio management practices, professional training and consulting resources are available.
Coffee's on me next time. Let's keep watching this project unfold.
Next Steps
For project managers and executives seeking to strengthen AI governance capabilities within their organizations, Core Project Management Essentials offers training programs and consulting services designed for corporate and government leaders.
To discuss your organization's specific AI governance challenges, contact our team for a consultation.
Core Project Management Essentials
5757 W. Century Blvd, 7th Floor, Suite 52A
Los Angeles, CA 90045
Phone: (877) 633-2763
Website: www.corepmessentials.com
Training & Courses: www.pmteacher.com