Invitation Only

April 13th

AI Governance. Executive Accountability.

Tentative agenda.

By Invitation Only

Participation is limited to a small group of board directors and senior executives.

3:00pm — 6:00pm

Summary

CLOSED-DOOR EXECUTIVE SESSION
St. Julien Hotel & Spa, Boulder
~25 Senior Leaders  ·  No Press  ·  No Recording
A highly interactive training session — not a one-way lecture. Designed for ~25 board members and senior executives. Space held for real-time questions and candid discussion. This is the first run of a scalable board AI education program that will continue beyond IterateOn as a repeatable, company-by-company engagement.

This afternoon is the first half of a two-part argument. Today we build the case that boards can no longer govern what they don't understand — covering fiduciary duty, legal exposure, and what runtime control actually means at the board level. Tomorrow, the full-day session goes a layer deeper: into the architecture, the memory, the failure modes, and what it actually takes to govern AI at the execution layer. If you're here for both days, that's intentional. Today gives you the why. Tomorrow gives you the what.

3:00pm — 3:05pm

Opening

AI Has Already Crossed the Control Threshold
Most boards still think of AI as a productivity tool. It is now an operational system — running inside enterprises, making decisions, and creating legal exposure before governance structures have caught up. This opening frames the session's core argument: boards that don't understand AI can't govern it, and boards that can't govern it are carrying liability they don't know they have. AI is already operating inside your enterprise — with or without board visibility. Governance structures lag deployment by 12–24 months in most public companies. Accountability and liability are accumulating faster than the earnings impact is visible. Executive ownership must tighten now — not after the first incident.
Speakers: 
Jon Nordmark — CEO & Co-Founder, Iterate.ai
Harry Surden — Professor of Law, University of Colorado Law School · Leading Scholar at the Intersection of AI and Legal Systems

3:05pm — 3:15pm

AI Assessment

An Invitation to Take a Free Assessment: Are You Actually Ready to Deploy AI? Most Companies Aren't.
A structured assessment of what AI readiness actually requires across data, governance, tooling, and operational risk. Not a vendor checklist — a diagnostic built from real enterprise deployments.
Speaker: 
Rob Taylor — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging tech risk

3:15pm — 3:45pm

Backdrop panel

AI-Speed Attacks & Enterprise Defense: The Threat Landscape Boards Were Briefed On No Longer Exists
The threat landscape boards were briefed on two years ago no longer exists. AI has collapsed the timeline between intrusion and damage — what used to take attackers days or weeks now takes minutes.

State-sponsored teams and loosely affiliated criminal networks alike are using frontier AI at every stage of the attack cycle. Your AI agents can now be hijacked through prompt injection — redirected by the content they read to exfiltrate data or take actions you never authorized. Identity is the new perimeter.

And when an AI-enabled attack moves faster than any human analyst can respond, the question isn't just whether you detect the threat. It's whether your systems can respond before the damage is done.

Sibito has been inside this fight — working with the FBI and Interpol to build digital fingerprints that separate customers from threat actors and to disrupt AI-automated attack operations. What he found: most enterprise defenses were built for a slower world.

Navneet's platform at Rigor.ai was built for this one. The core insight is simple and urgent: AI-enabled attacks don't give you time to be reactive. Rigor.ai detects threats in real time and responds in real time — mathematically rigorous, preemptive, and designed to close every known attack vector before exploitation occurs. Not a dashboard that alerts your team. A system that acts before the damage is done.

The board's role here is not technical oversight. It is accountability. Three questions every director should be able to answer after this session: Does our organization detect and respond to threats in real time — or do we find out days later? Is our security architecture built for AI-speed attacks, or for the threat landscape of five years ago? And who is specifically accountable for that answer?
Moderator: 
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging tech risk
Panel: 
Sibito Morley, JD — Co-Founder and President, Veromesh.ai · Former Chief Data Officer, Sinch (~1 trillion Western-world mobile messages flow through its network) · Former C-level at Lumen and CenturyLink; former VP at DaVita · Princeton- and BYU-educated attorney

Navneet Yadav — Co-Founder & Chief Product Officer, Rigor.ai · Former Senior Director of Product Management, Palo Alto Networks · Co-Founder, CloudGenix (acquired by Palo Alto Networks) · IIT Bombay

3:45pm — 4:00pm

Session I

AI Economics & Capital Discipline: What the CFO Knows That the Board Doesn't
Public company credibility and AI cost exposures are at stake. AI spending is often invisible on the balance sheet — buried in cloud costs, fragmented across departments, and rarely surfacing in audit committees until it's already a problem. The real issues: cost opacity and shadow deployments the board doesn't see, margin erosion from inference spend and uncapped cloud consumption, audit scrutiny and reporting gaps emerging in public filings, and the question of capital allocation discipline — what ROI frameworks actually work for AI, and who is accountable when the numbers don't add up. This session frames AI governance as a financial control issue.
Speaker: 
CFO to be announced soon

4:00pm — 4:15pm

Session II

What Public Boards Must Now Require: Oversight, Fiduciary Duty, and the 'Reasonable Controls' Standard
AI oversight is no longer optional for public company boards. Securities regulators, institutional investors, and plaintiffs' attorneys are all beginning to ask the same question: what did the board know, when did they know it, and what did they require? The answer is taking shape in courtrooms and regulatory filings: AI oversight is now an explicit board-level fiduciary responsibility. The 'reasonable controls' standard is being applied by courts and regulators today — not in some future enforcement cycle. Audit Committees need a reporting cadence that defines what appears, how often, and in what form. Disclosure is evolving fast — what companies are now putting in their 10-Ks and proxy statements looks very different from two years ago. And director liability is no longer theoretical: there are specific scenarios where personal exposure becomes real, and this session names them.
Speaker: 
Audit Committee Chair, TBD — Public governance rigor, global automation oversight, serious board credibility

4:15pm — 4:30pm

Break

4:30pm — 4:45pm

Session III

AI Legal Exposure: Where Liability Is Already Emerging
The legal landscape around enterprise AI is not theoretical. IP contamination cases are in court. Privacy regulators are issuing fines. Consumer harm claims are being filed. The exposure spans IP contamination — when training data and model outputs create infringement liability — to data leakage and what GDPR, CCPA, and state laws now explicitly require of AI systems. AI misrepresentation and consumer harm claims are already being litigated, not just threatened. Negligent oversight is the standard boards are being held to — and enforcement trajectory is accelerating into 2026 and 2027. This session grounds the governance conversation in legal reality: where the exposure is today, where it's heading, and what it means for every executive in this room.
Speaker: 
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk

4:45pm — 5:15pm

Session IV

Runtime Control & Enterprise Security: What 'Governing AI' Actually Means at the Execution Layer
Governance on paper doesn't stop an AI agent from taking action at 2 am. This session translates board-level accountability into operational reality — what it actually means to control an AI system, how you know when it's behaving outside its boundaries, and what it takes to stop it mid-action. The operational layer covers identity for AI agents, zero trust for autonomous systems, observability and intervention authority, and live runtime enforcement. Then the conversation moves one layer deeper — to the network infrastructure most boards have never seen. Your AI agents are traveling across network boundaries that were never designed for autonomous systems, at a latency you can't measure, and through infrastructure you didn't architect for this purpose. That absence of visibility is a risk that belongs on every Audit Committee agenda.
Moderator: 
Justen Aguillon — Director, Technology Partner Ecosystem, Equinix · Architect of the Fabric Intelligence vision connecting AI providers and enterprise subscribers across 270+ global data centers
Panel: 
Brian Sathianathan — Co-Founder & CTAIO, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

Stuart Oliver — Principal WW AI GTM & Growth Solutions Leader, NetApp

5:15pm — 5:45pm

Executive Roundtable

The Five Governance Decisions Every Board Must Make in 2026
Each of the five questions below represents a decision that every public company board should be able to answer by the end of 2026.
  1. Where is AI operating inside our organization without board-level visibility?
  2. Who owns runtime authority — the ability to stop an AI system mid-action?
  3. How is AI cost tracked, reported, and governed at the board level?
  4. What is reported to the Audit Committee, and on what cadence?
  5. What must change in the next 90 days — and who is accountable for it?
The goal is not inspiration — it's action commitments. Each participant leaves with a clear view of where their organization stands and what must change.
Moderator: 
Rob Taylor — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk
Panel: 
Diane Randolph — Member, two public company boards: Dollar Tree, Shoe Carnival · Former CIO, Ulta Beauty

Jodi Watson — Current or past member of public company boards: Dakota Supply Group, PetMed Express

Elaine Boltz — Past board member: Brinker International; current board member, AARP · Expert Advisor to BCG


5:30pm — 7:00pm

Cocktails & Networking

St. Julien Hotel & Spa  ·  Boulder, Colorado
Recognizing leaders advancing AI governance, innovation, and enterprise responsibility.
Conclude with drinks and networking, continuing the connections and conversations sparked throughout the afternoon.

"If the rate of change on the outside exceeds the rate of change on the inside, the end is near."

Jack Welch