Spring 2026

AI Symposium III

Made possible by

Spring 2026 Agenda

IterateOn helps executives understand how AI really works—from the ground up.

That understanding spans the full stack, from infrastructure and models to the application layer, where agents do real work: usually with humans in the loop, but increasingly on their own.

That understanding leads to outcomes that matter: higher revenue, lower costs, and faster throughput.

Outcomes are the obvious goal. But without seeing the full system, you can’t optimize it. And when you don’t understand the whole picture, you’re forced to trust those who do.

Invitation Only

April 13th

AI Governance. Executive Accountability.

By Invitation Only

Invitation-only executive session for a small group of board directors and senior executives.

3:00pm — 3:05pm

Opening

AI Has Already Crossed the Control Threshold
Most boards still think of AI as a productivity tool. It is now an operational system—running inside enterprises, shaping decisions, and creating legal exposure before governance structures have caught up. This opening frames the session’s core argument: boards that do not understand AI cannot govern it, and boards that cannot govern it may be carrying liability they do not yet recognize.
Speaker: 
Jon Nordmark — CEO & Co-Founder, Iterate.ai

3:05pm — 3:10pm

Board & Corporate Leadership

How Do Board Members and Corporate Leaders Best Collaborate on AI Governance and Risk?
Boards and corporate leadership are often at cross-purposes when it comes to AI governance and risk. Many board members lack foundational AI knowledge and therefore rely heavily on management. At the same time, management has an obligation to provide the board with meaningful, decision-useful information on AI usage, governance, and risk. This session explores how that dynamic should work in practice.
Speaker: 
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging tech risk

3:10pm — 3:40pm

Backdrop panel

The New Threat Landscape: AI-Speed Attacks and the Board’s Accountability
The threat landscape boards were briefed on two years ago no longer exists. AI has compressed the timeline between intrusion and damage. Prompt injection, agent hijacking, and identity-centric attacks are changing the attack surface and outpacing defenses designed for a slower world. This panel focuses on what directors need to understand about accountability in that environment.
Moderator: 
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging tech risk
Panel: 
Sibito Morley — Co-Founder and President, Veromesh.ai · Former Chief Data Officer, Sinch · Former SVP, Lumen Technologies and CenturyLink

Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

Jai Desai — Co-Founder, Rigor.ai · Former VP Enterprise Sales, Netsil (acquired by Nutanix) · Former Global Head of Sales, AIOps, Nutanix · Former Senior Director, HashiCorp · MS Electrical Engineering, USC

3:40pm — 4:05pm

Session I

AI Governance in Practice: Variable Cost, Control of Memory and Models, and Runtime Security
Most enterprises have limited visibility into what AI is costing them, what data it is exposing, and what systems it is learning from. Spend is fragmented, data may be flowing into third-party model providers, and persistent AI memory raises a new class of governance challenges. This session combines practical governance infrastructure with the deeper issue of memory, model control, and runtime accountability.
Speakers: 
Justen Aguillon — Director, Technology Partner Ecosystem, Equinix · Architect of the Fabric Intelligence vision connecting AI providers and enterprise subscribers across 270+ global data centers

Sibito Morley
— Co-Founder and President, Veromesh.ai · Former Chief Data Officer, Sinch · Former SVP, Lumen Technologies and CenturyLink

Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

4:05pm — 4:15pm

Break

4:15pm — 4:25pm

Session II-A

What Laws Apply? Legal Liability Landscape Under Non-AI Laws and AI-Specific Laws
Many companies focus too narrowly on AI-specific laws and regulations and miss the broader legal landscape that already applies to AI. This session explains how legacy laws and newer AI laws interact, and how boards and leadership should think about the legal themes that should drive governance and risk assessment.
Speaker: 
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk

4:25pm — 4:45pm

Session II-B

Legal Exposure Eye-Opener: Where Liability Is Already Landing
The legal landscape around enterprise AI is not theoretical. IP contamination cases are already in court. Privacy regulators are issuing fines. Consumer harm claims and AI misrepresentation claims are being litigated. Negligent oversight is becoming an increasingly important framing. This session is intended to make the current exposure landscape concrete for directors and senior executives.
Speaker: 
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk

4:45pm — 4:55pm

Session II-C

Legal Blind Spots: Privilege, Intellectual Property, and Insurance
This section brings together three areas where boards and leadership often underestimate AI-related exposure: attorney-client privilege, intellectual property strategy, and insurance coverage. All three are blind spots that can materially affect enterprise risk.
Speakers: 
James R. Gourley, JD — Partner, Carstens Allen & Gourley LLP · Intellectual Property & Technology Law · Chemical Engineer · Denver Office
Vincent J. Allen, JD — Partner, Carstens Allen & Gourley LLP · Intellectual Property & AI Legal Risk · Registered Patent Attorney · Electrical Engineer
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk

4:55pm — 5:55pm

Panel

The Five Governance Decisions Every Board Must Make in 2026 — Followed by Participant Questions
This combined session translates the day into explicit governance decisions and then opens the floor for moderated participant questions. The first part should feel concrete, not aspirational: a board-level discussion of the governance choices organizations need to make now around AI visibility, accountability, reporting, cost control, and oversight. The second part creates space for board directors and senior executives in the room to raise the governance, legal, security, disclosure, and accountability issues they most want addressed.
Moderator: 
Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk
Panel: 
Frank Kollmar — Expert Advisor to Bain & Company · Former Global Deputy Managing Director, L'Oréal Dermatological Beauty Division · Former President & CEO, L'Oréal Canada

Lynda Pak — Global Chief Information Officer, Estée Lauder

Prama Bhatt — Board Director, JD Sports and eHealth · Former Board Director, Hormel Foods · Former Chief Digital Officer, Ulta Beauty

Diane Randolph — Board Director, Dollar Tree (NASDAQ: DLTR) · Board Director, Shoe Carnival (NASDAQ: SCVL) · Former CIO, Ulta Beauty

Deb Hall Lefevre — Former EVP & CTO, Starbucks · Former EVP & CTO, Couche-Tard/Circle K · Former US CIO, McDonald's · Board Member, Wintrust Financial Corporation (WTFC), chairing the Technology Committee and serving on the Nomination & Governance and Executive Committees · Drives global digital transformation at Fortune 300 companies

5:55pm — 6:00pm

Final Wrap-Up

Closing Reflections and Final Charge
Jon closes the session by pulling together the day’s main themes and reinforcing the central message: AI governance is no longer a future concern or a narrow technology issue. It is now a board- and executive-level responsibility. The wrap-up should leave the room with a clear sense of urgency, accountability, and next-step mindset.
Speaker: 
Jon Nordmark — CEO & Co-Founder, Iterate.ai

By Invitation Only

Participation is limited to a small group of board directors and senior executives.

6:00pm — 7:00pm

Cocktails & Networking

St. Julien Hotel & Spa  ·  Boulder, Colorado
Conclude the day with drinks and networking, fostering connections and discussions sparked throughout the afternoon.

"If the rate of change on the outside exceeds the rate of change on the inside, the end is near."

Jack Welch

A Full Day of Events

April 14th

Private AI. Quantum Reality. Governable Systems.

Tentative agenda.

7:30 — 8:00am

Continental Breakfast

Location: Ballroom Pre-Function, St. Julien Hotel & Spa

8:00 — 8:05am

OPENING

The Point of No Return: What Happens When AI Stops Being a Tool and Starts Being a System
Some of you were in the room yesterday afternoon. You heard the governance case — the fiduciary argument, the legal exposure, the question of who owns runtime authority when an AI system acts on its own at 2am. Today is the answer to the question that session was designed to raise: what does any of that actually look like inside a real enterprise system?

For those joining us for the first time today: welcome. You're arriving at the right moment. The day turns on two irreversible shifts — AI autonomy and governance — and a third one most leadership teams aren't tracking yet: quantum sensing and computing, which are closer than the headlines suggest. The choices enterprises make in 2026 will determine whether they govern AI by 2027, or get governed by it.

A rapid-fire P&L reality check follows: ten real, profitable AI use cases already driving measurable results across revenue, cost, fraud, compliance, and operations.
Speaker: 
Jon Nordmark — CEO & Co-Founder, Iterate.ai · Co-Founder & Former CEO, eBags.com — scaled to $165M in annual revenue and acquired by Samsonite

Act 1:
The Autonomous Turn

Theme: AI is no longer waiting for instructions. It's already moving.
Takeaway: We're not choosing tools—we're choosing futures.

8:05 — 8:25am

Act 1:
The Autonomous Turn

When AI Runs Our Institutions, Who Runs AI? Welcome to the Office of Human Purpose and Flourishing
Hans Peter frames AI not as just another software upgrade, but as a societal reconfiguration — a supercharged cognitive revolution unfolding 10× faster and with 10× the impact of the Industrial Revolution. What does it mean when AI increasingly runs society, from governments to corporations to schools and much more? What is our role in shaping that world?
Speaker: 
Hans Peter Brondmo — Former CEO, Everyday Robots (Google X) · Start-up Advisor · Visiting Committee Member, MIT Media Lab

8:25 — 8:35am

Act 1:
The Autonomous Turn

Robotic Cars Didn't Ask Permission: The Real Timeline for Autonomous AI — and What It Takes to Get There Safely
Autonomous vehicles are already operating on public roads. But when will they be everywhere — and what does it actually take to get there? Chris spent three years building the AI stack at Zoox, designing and training the multimodal language models that interpret driving behavior, handle long-tail scenarios, and support validation. He brings a rare, unvarnished view of what happens when AI systems move from prototype to persistent real-world operation — and what governance gaps look like when the cost of getting it wrong isn't a bad quarter, it's a life.
Speaker: 
Chris Heckman, Ph.D. — Professor, University of Colorado · Former AI Stack Engineer, Zoox · Former Postdoctoral Fellow, Naval Research Laboratory

8:35 — 8:45am

Act 1:
The Autonomous Turn

OpenClaw, MoltBook, and the $8 Million Wake-Up Call: What Happens When Agents Go Feral
Autonomous cars are already driving. Waymo in SF. Zoox in Vegas. And Waymo is coming to Denver.

Now autonomous agents are roaming the web, too. In January 2026, OpenClaw went viral. By February: 1.5 million agents created, 770,000 spawned in a single week, 1.49 million database records exposed, an $8 million crypto scam executed, and Cloudflare's stock moved 14%. This is not a hypothetical. A candid teardown of what commercial-scale agent swarms look like when governance is an afterthought.
Speaker: 
Todd Sherman — Former CMO, Amplero · Established Amazon's Third-Party Marketplace

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Theme: Architecture, memory, and risk, plus how AI behaves at scale.
Takeaway: The most dangerous systems aren't the smartest. They're the ones that have the biggest memories. And the ones that you don't control yourself.

8:45 — 9:00am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

AI Will Remember Everything... Except What Makes You Great
Those autonomous agents are becoming smarter and smarter. They want to remember everything.

Despite $30–40 billion in enterprise AI investment last year, 95% of organizations saw no measurable ROI. The problem isn't the technology. It isn't the architecture. It isn't even the governance framework. It's what AI memory CAN'T see. AI can capture what happens, but organizational greatness almost never lives in what happened. It lives in what didn't — the invisible acts of human restraint that no AI system, however sophisticated, can capture. In the rush to keep up, we've built systems that recall everything… and forgotten to ask what's actually worth remembering.
Speaker: 
Josh Allan Dykstra — Optimistic Futurist & Keynote Speaker · Hello Tomorrow · Board Member

9:00 — 9:10am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

The Autonomous Enterprise: Scaling Enterprise with Agentic Governance
Agentic workflows don't operate the way traditional automation does. They execute through autonomous reasoning, tool-use, and multi-step orchestration—each introducing unique security, data, and logic risks. This session explains how agents can become "insider threats," why unmanaged autonomy leads to cascading failures, and how to implement a trifecta of Critic Agents, Human-in-the-Loop triggers, and Role-Based Access to ensure your enterprise intelligence is both high-velocity and highly governed.
Speaker: 
John Selvadurai, Ph.D. — VP R&D, Iterate.ai · Former SAP Architect
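
The session's governance "trifecta" can be sketched in a few lines of code. The sketch below is purely illustrative, not an Iterate.ai API: a critic agent scores each proposed action, role-based access gates what an agent may attempt at all, and a human-in-the-loop trigger intercepts anything the critic flags as high risk.

```python
# Illustrative sketch of the governance trifecta described above.
# All names (Action, critic_review, ROLE_PERMISSIONS) are hypothetical.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "analyst": {"read_data"},
    "operator": {"read_data", "write_data"},
}

@dataclass
class Action:
    agent_role: str
    name: str
    risk: float = 0.0  # 0.0 (benign) .. 1.0 (dangerous), set by the critic

def critic_review(action: Action) -> float:
    # A real critic would be a second model scoring the proposed action;
    # here we simply treat writes as higher risk than reads.
    return 0.9 if action.name == "write_data" else 0.1

def authorize(action: Action, human_approves=lambda a: False) -> bool:
    # 1. Role-based access: the agent's role must permit the action.
    if action.name not in ROLE_PERMISSIONS.get(action.agent_role, set()):
        return False
    # 2. Critic agent scores the action.
    action.risk = critic_review(action)
    # 3. Human-in-the-loop trigger fires for high-risk actions.
    if action.risk > 0.5:
        return human_approves(action)
    return True

print(authorize(Action("analyst", "write_data")))   # False: blocked by RBAC
print(authorize(Action("operator", "read_data")))   # True: auto-approved
print(authorize(Action("operator", "write_data")))  # False: awaits human sign-off
```

The point of the pattern is that no single control is trusted alone: an agent must pass its role gate, its critic, and, when flagged, a human, before an action executes.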

9:10 — 9:25am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

International Crime Investigation Authorities Called. Here's What AI-Enabled Crime Actually Looks Like Now.
This is why companies and people need to be careful about what they allow AI and agents to remember.

Threat actors like Scattered Spider — thought to be roughly 1,000 loosely affiliated actors operating across the US and UK — are among the fastest adopters of AI on the planet. And they're not alone.

State-sponsored AI teams from Russia, China, North Korea, and Iran are now using frontier models at every stage of the attack cycle: reconnaissance, phishing, malware development, and data exfiltration. North Korea uses AI to synthesize intelligence on targets at defense companies. Iran uses it to augment reconnaissance and map business partner networks. China uses it to conduct vulnerability analysis and penetration testing planning against US targets.

These groups are deploying AI across the full attack surface. Social engineering is one method — cloning executive voices, manipulating help desks, automating identity takeover at scale. But it doesn't stop there. The same groups use agentic AI scripts to infiltrate code repositories and accelerate source code theft, run automated reconnaissance that maps internal networks and locates SOPs faster than any human analyst, exploit leaked credentials to move laterally before anyone knows they're inside, and — increasingly — use prompt injection to hijack AI agents operating inside enterprise systems, redirecting them to exfiltrate data or execute actions their owners never authorized. When your AI agent can be told what to do by the content it's reading, the attack surface isn't just your network. It's every document, email, and data feed your agents touch.

Sibito has been inside this fight. His team built the predictive fingerprinting system — developed in collaboration with the International Crime Investigation Authorities — that detected the behavioral signatures left by AI-automated attack operations. What he found, and what he'll discuss this morning: identity is now the perimeter, and AI has made every attack vector faster, cheaper, and harder to detect.

The question isn't whether your organization is a target. It's whether your defenses were built for the threat that exists today.
Speaker: 
Sibito Morley — Co-Founder and President, Veromesh.ai · Former Chief Data Officer, Sinch (~1T of Western-world mobile message traffic flows through this network) · Former CTO, Lumen, CenturyLink, and DaVita

9:25 — 9:40am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Three Rings: Why Private AI Is an Architecture Decision, Not a Vendor Choice
To be as safe as possible, organizations and people need Private AI.

Private AI is not a product category. It is an architectural commitment spanning data, models, and hardware. Brian introduces the Three Rings framework and explains precisely why control, containment, and accountability collapse under public AI architectures — and what it takes to build systems where governance is structural, not performative.
Speaker: 
Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

9:40 — 9:50am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Sovereign AI in Action: How Hybrid Architecture Can Accelerate National Digital Transformation
A practical look at how hybrid AI architecture can help governments and institutions scale AI without becoming fully dependent on cloud-first models. Based on an African government use case, this talk shows how AI-native devices, local edge execution, and selective cloud coordination can strengthen resilience, support public services, and create a more sovereign path to national digital acceleration.
Speaker: 
Miguel Stief — Global CEO, Dominant Success · Executive Advisor and Transformation Leader

9:50 — 10:05am

Break

10:05am — 10:35am

QUANTUM INTERLUDE

When the Physics Change, the Architecture Has to Change With It  ·  30 minutes  ·  3 talks
Boulder is not a coincidence. It has become one of the world's most concentrated clusters of quantum research and development — anchored by CU Boulder's five Nobel Laureates, NIST, and a growing ecosystem of deep-tech companies building the hardware that will define the next era of computing. Thirty percent of the world's quantum sensing companies are in the Boulder area. The next three talks show why that matters right now — and how quantum and AI are converging faster than most leadership teams are tracking.

10:05 — 10:15am

QUANTUM INTERLUDE

Talk 1, 10 minutes
How AI Is Making Quantum Usable
Quantum computing has promised breakthroughs on some of the hardest computational problems for years, but noisy hardware, complex programming, and difficult calibration have kept it largely in the hands of specialists. AI is starting to change that. Across the stack, large language models are helping with algorithm design and code generation, while machine learning is improving circuit optimization, hardware tuning, and noise mitigation. The pattern is becoming clear: AI will not compete with quantum computing. It will make it usable. Rob Wamsley explains where that shift is already happening in practice, where the limits still are, and what this emerging model of human direction, AI assistance, and selective quantum execution means for enterprise teams thinking about next-generation compute.
Speaker: 
Robert Wamsley, Ph.D. — Physicist & Quantum Algorithm Researcher, Quantum Rings · Building tools to simulate quantum futures before the hardware arrives

10:15 — 10:25am

QUANTUM INTERLUDE

Talk 2, 10 minutes

Quantum Is Already Here: Detecting Cancer Through Breath, Powered by AI and Funded by DARPA
Quantum isn't only about computing — it's about sensing. Eva's company Flari uses quantum-grade optical frequency comb technology to detect disease biomarkers in breath with a precision that classical instruments can't match. The catch: the data these sensors generate is so complex that only AI can interpret it at scale. Eva will show how AI and quantum sensing are already converging in healthcare — and how DARPA-funded research is accelerating the path from lab to clinical deployment. This is what it looks like when quantum comes to life in the real world.
Speaker: 
Eva Yao, Ph.D. — Founder & CEO, Flari · DARPA-Funded Quantum Molecular Sensing for Biomedical Research and Disease Detection

10:25 — 10:35am

QUANTUM INTERLUDE

Talk 3, 10 minutes

Why NVIDIA Is Betting on Boulder: Atom Computing, DARPA, and the Race to Build the World's First Useful Quantum Computer
DARPA reviewed 18 of the world's most advanced quantum computing companies for its Quantum Benchmarking Initiative — a rigorous program designed to determine whether fault-tolerant, utility-scale quantum computers can be built by 2033. Only 11 survived to Stage B. Atom Computing, headquartered in Boulder, is one of them — alongside IBM, IonQ, and Quantinuum. In October 2025, Jensen Huang unveiled NVQLink — a new architecture directly connecting NVIDIA's GPU supercomputers to quantum processors — and Atom Computing was named one of its 17 founding hardware partners. NVIDIA's bet is straightforward: GPUs won't be replaced by quantum computers. They'll be the engine that runs them. Justin explains what this convergence means and what the AI-quantum hybrid era will actually look like for enterprise compute.
Speaker: 
Justin Ging — Chief Product Officer, Atom Computing · One of 11 Companies DARPA Selected as a Plausible Path to a Fault-Tolerant Quantum Computer · NVIDIA NVQLink Founding Partner · MIT Leaders for Manufacturing Fellow

Act 2 Continued:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Theme: Architecture, memory, and risk, plus how AI behaves at scale.
Takeaway: The most dangerous systems aren't the smartest. They're the ones that have the biggest memories. And the ones that you don't control yourself.

10:35 — 10:45am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Most AI Security Tools Weren't Built for Enterprises: A Preemptive Cyberdefense Approach
Imagine a scenario where fast-moving cyber attacks breach your state-of-the-art cyber defenses and cause catastrophic damage. By the time an attack is detected, it is already too late. This is not hypothetical; cybercrime is projected to cost up to $10.5 trillion globally in 2025. Addressing this requires a mathematically rigorous solution to ensure that customers’ cyber defenses are preemptively, completely, continuously, and verifiably configured to defend against all known attacks that matter to them. Yet, no solution currently exists that addresses this long-standing industry-wide risk, which is worsening by the day due to GenAI-powered and fast-moving attacks.

A cybersecurity startup still in stealth is building the world's first preemptive, complete, continuous, and verifiable cyber defense platform, combining best-in-class threat intelligence, formal mathematical modeling of your defenses for complete verifiability, and grounded, guardrailed AI-powered remediation recommendations to close gaps in your defenses. Join us to kick off your preemptive cyber defense journey.
Speaker: 
Jai Desai — Head of Business Development and Sales, Rigor.ai · Founding Member

10:45 — 10:55am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Taming the AI Chaos: From LLM Sprawl to a Private, Multi-Cloud AI Fabric
Enterprises are rapidly entering a phase of LLM sprawl—where 60–80% of AI workloads are now distributed across multiple models, clouds, and environments, driving unpredictable cost, security exposure, and network inefficiency. This session explores how leading organizations are re-architecting around private interconnection to reduce network and egress costs (often by 30–70%), enable seamless multi-model (frontier + open-source) adoption, and future-proof AI infrastructure off the public internet. We’ll also unpack the evolving infrastructure stack—from liquid-cooled GPU clusters to emerging deployments with smaller footprints, driving faster ROI and more versatility.
Speaker: 
Justen Aguillon — Director, Technology Partner Ecosystem, Equinix · Architect of the Fabric Intelligence vision connecting AI providers and enterprise subscribers across 270+ global data centers

10:55 — 11:20am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Panel: Agent Proliferation Could Outrun Government and Enterprise Controls
Agents are proliferating inside enterprises faster than leadership can track. The hardest part isn't choosing a model—it's controlling what agents remember, what they're allowed to do with that memory, and how quickly bad state spreads across tools, systems, and people. Where do real-world deployments actually fail? What guardrails work at runtime—not just on paper?
Moderator: 
Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai
Panel: 
Sibito Morley — Co-Founder and President, Veromesh.ai · Former Chief Data Officer, Sinch (~1T of Western-world mobile message traffic flows through this network) · Former SVP, Lumen Technologies and CenturyLink
Rob Taylor, JD — AI Attorney, Carstens Allen & Gourley LLP
Jerry Xu, PhD, FRM — Investment VP, Prudential Financial
Mike Caneja — Director, Product Management, Toast

11:20 — 11:35am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

Beyond the Hype: A Practical Pathway Toward AGI with Small Language Models, Knowledge Graphs, and Multi-Agent Systems
The conversation around Artificial General Intelligence (AGI) is often dominated by scale: larger models, more data, and increasing compute. But in real-world systems, scale alone is not enough. This talk presents a grounded, systems-oriented perspective on building intelligent systems that actually work in production. It explores how smaller, specialized language models, when combined with structured knowledge bases and coordinated multi-agent architectures, can deliver reliable, explainable, and scalable intelligence. Drawing from real-world deployments across domains such as agriculture and enterprise systems, Srikanth will unpack: (a) why smaller, domain-adapted models often outperform general-purpose models in constrained environments; (b) how knowledge graphs anchor reasoning, improve traceability, and reduce hallucination; (c) the role of multi-agent systems in orchestrating complex, real-world decision-making; and (d) what it truly takes to move from proof-of-concept (POC) AI to production-grade systems. Rather than asking "How do we build AGI?", this session reframes the question: "How do we systematically assemble intelligence from modular, reliable components?" For leaders and practitioners, this represents a shift from chasing trends to designing systems that deliver consistent, measurable value.
Speaker: 
Dr. Srikanth Thudumu — Institute of Applied Artificial Intelligence and Robotics (IAAIR)
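
One idea from this abstract — anchoring a model's output in a knowledge graph so every claim is traceable — can be sketched concretely. Everything below is a toy illustration under assumed names (KG, model_answer, grounded_answer): the "model" is a stub that emits a structured claim, and the grounding check refuses any claim not backed by a stored triple instead of passing it through.

```python
# Toy sketch of knowledge-graph grounding: a claim is only returned
# if it matches a (subject, relation, object) triple in the graph.
# All names here are hypothetical, not a real framework API.

KG = {  # tiny agriculture-domain knowledge graph
    ("wheat", "planted_in", "autumn"),
    ("wheat", "harvested_in", "summer"),
    ("maize", "planted_in", "spring"),
}

def model_answer(question: str) -> tuple:
    # Stand-in for a domain-adapted small language model that emits a
    # structured claim rather than free text.
    return ("wheat", "planted_in", "autumn")

def grounded_answer(question: str) -> dict:
    claim = model_answer(question)
    if claim in KG:
        # Traceable: the answer carries its supporting triple.
        return {"claim": claim, "source": "KG", "grounded": True}
    # Unsupported claims are refused rather than hallucinated.
    return {"claim": claim, "grounded": False}

print(grounded_answer("When is wheat planted?"))
```

The design choice is that traceability falls out for free: every grounded answer points back to the exact triple that supports it, which is what makes the system auditable in production.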

11:35 — 11:50am

Act 2:
Real-Time, Big Memory Is the New Control Plane & Why Public AI Breaks in the Enterprise

From AI Hype to AI Execution
As we prepare to head into the demo section of IterateOn, which convenes after lunch, we turn to the author of The Strategy Trap. He'll talk about how leaders create the conditions for AI agents to deliver real value.

Many organizations struggle to turn AI pilots into real operational impact.

In this talk, Kevin Ertell, author of The Strategy Trap, explains why the challenge is rarely the technology itself. It’s the operating conditions around it.

Drawing on his Six Cs system — Co-Creation, Clarity, Capacity, Communication, Coordination, and Coaching — Kevin shows how leaders can design organizations where AI agents actually improve performance instead of becoming another abandoned experiment.

This session provides a practical lens for leaders responsible for turning AI from an idea into consistent execution.
Speaker: 
Kevin Ertell — Author of The Strategy Trap: Why Companies Fail at Execution, CEO Mistere Advisory, former Global VP Retail Operations at Nike.

11:50am — 12:20pm

Lunch Buffet

Location: Ballroom Pre-Function, St. Julien Hotel & Spa

"Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed, processed, broken down, and analyzed for it to have value."

Clive Humby, British mathematician and architect of the Tesco Clubcard, 2006

Act 3:
What Governable AI Actually Looks Like

Theme: Control in practice—live demos.
Takeaway: This is what responsible AI looks like in the real world.

12:20 — 12:25pm

Act 3:
What Governable AI Actually Looks Like

A 17-Year-Old Built a Game in a Day. 18.9 Million People Played It in Three Weeks.
This is what vibe coding looks like at its purest: no budget, no team, no marketing plan — just a teenager, agentic tools, and an idea. Highlandrr built Not Cute Anymore Tower on Roblox in a single day using AI-assisted development, then watched 18.9 million people play it in three weeks. Along the way he built a community of 1.3 million Roblox players in under two months — without a marketing department, a growth team, or a dollar of paid acquisition. No enterprise process. No governance framework. No sprint planning. The question this raises for every organization in the room: if a 17-year-old can build and ship a product used by millions in 24 hours, what does that mean for who builds what next — and what happens when your competitors figure that out before you do?
Speaker: 
Highlandrr — Valor High School Student

12:25 — 12:35pm

Act 3:
What Governable AI Actually Looks Like

Forget Automated Code; Now You Can Build an Agent in Minutes
Highlandrr built a game in a day. Just one person. That's fine for a Roblox game. But what about building agents for a hospital, a financial services firm, or a public company? This session is the enterprise answer to the question the previous demo raised: how do you move fast with AI without losing control of what it builds? Blake shows what responsible AI development actually looks like from the first line — scope, guardrails, and governance defined before a single line of code is written. Most enterprises skip this step entirely. This demo shows exactly what it costs them, and what it looks like when you get it right.
Speaker: 
Blake Stenstrom — AI Engineer, Iterate.ai · Princeton-Trained Algorithms Expert

12:35 — 12:50pm

Act 3:
What Governable AI Actually Looks Like

AI in Agriculture: When the Data Comes From the Ground Up
Agriculture generates enormous amounts of data — soil conditions, weather patterns, yield histories, supply chain timing, equipment telemetry — and almost none of it has been accessible in a form that drives real decisions. This demo shows how private AI is being applied to that problem at the World Food Bank: turning ground-level data into forecasting, resource allocation, and operational decisions that directly affect food security outcomes. The governance requirements are different from enterprise IT — the infrastructure is distributed, the connectivity is unreliable, and the stakes are measured not in revenue but in whether communities eat. Richard brings a perspective that no other presenter today carries: what it looks like to deploy AI in environments where the margin for error is human, not financial.
Speaker: 
Richard Lackey — CEO & Chairman, World Food Bank

12:50 — 1:05pm

Act 3:
What Governable AI Actually Looks Like

No Cloud. No Power. No Connectivity. AI Anyway: Far Edge Intelligence in the Field
Most AI assumes something many real-world environments can't guarantee: a reliable connection. Point-of-sale systems in basements, pop-up locations, outdoor markets, and remote stores can't wait for a signal — and can't afford to fail when one isn't available.

This session demonstrates what AI looks like when it runs fully offline on constrained hardware — no cloud dependency, no round-trip latency, no single point of failure tied to connectivity. The enabling hardware is Qualcomm's Snapdragon 6490, a platform Iterate.ai has built on specifically to bring inference to the edge without sacrificing governance or control.

Three live demos illustrate the range: AI guiding workers through equipment diagnostics inside an underground mine with zero connectivity; voice-driven ordering at a restaurant counter that keeps processing transactions even when the network drops; and real-time device diagnostics that detect, diagnose, and walk through fixes on-site — no help desk call required.
Speaker: 
Blake Stenstrom — AI Engineer, Iterate.ai · Princeton-Trained Algorithms Expert

1:05 — 1:10pm

Act 3:
What Governable AI Actually Looks Like

Super Agents: The General Contractor Model for AI at Scale
A single AI agent can answer a question, draft a document, or monitor a system. That's useful. But the most powerful AI deployments don't rely on one agent working alone — they use a Super Agent: a coordinating intelligence that breaks complex goals into specialized tasks and spins out sub-agents to execute them in parallel.

Think of it like a general contractor. The GC doesn't do the plumbing, the electrical, and the framing — they direct the specialists who do. A Super Agent works the same way: it receives a high-level objective, decomposes it, dispatches the right agents, monitors their output, and synthesizes the result.
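For attendees who want to see the shape of that loop before the live demo, the general-contractor pattern can be sketched in a few lines of Python. This is purely illustrative: `super_agent` and `run_sub_agent` are hypothetical stand-ins, not Generate's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_sub_agent(name, task):
    # Stand-in for a real model call; returns the sub-agent's output.
    return f"{name}: done ({task})"

def super_agent(objective, specialists):
    """Coordinate sub-agents the way a general contractor directs trades."""
    # 1. Decompose the high-level objective into specialized tasks.
    tasks = [(name, f"{name} work for: {objective}") for name in specialists]
    # 2. Dispatch each task to the right sub-agent, in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda t: run_sub_agent(*t), tasks))
    # 3. Monitor the outputs and synthesize a single result.
    return " | ".join(results)

print(super_agent("renovate the kitchen", ["plumbing", "electrical", "framing"]))
```

In a real deployment, each sub-agent call would be a governed model invocation rather than a string template, and the synthesis step would itself be a reasoning pass.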

Brian will use Generate to spin one out live on stage — so you can see exactly what orchestration looks like in motion.
Speaker: 
Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

1:10 — 1:20pm

Act 3:
What Governable AI Actually Looks Like

Securing the Agentic Future: An Introduction to Lifeboat Runtime and Security Capsules
As AI agents become more autonomous, the risk of catastrophic security breaches with a wide blast radius grows exponentially. Standard containerization is not enough to protect against sophisticated agent-to-agent compromises.

This session introduces Iterate.ai's security-first approach to AI infrastructure, centered on its proprietary Lifeboat inference server. Brian details Security Capsule technology, which isolates every AI session in a dedicated, secure boundary — containing breaches without any performance sacrifice. He also covers how the runtime handles token cost controls and KV cache management, keeping agentic workloads efficient and economical at scale.
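To make the token cost control idea concrete, here is a minimal sketch of a per-session budget. It is an assumption-laden illustration only: `SessionBudget` and its methods are hypothetical, and Lifeboat's proprietary runtime is not represented by this code.

```python
class SessionBudget:
    """Per-session token cost control, sketched for illustration.

    Hypothetical example: shows the general idea of capping spend
    inside one isolated session, not Lifeboat's actual mechanism.
    """
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, n: int) -> int:
        """Record n tokens of usage; halt the session if over budget."""
        if self.used + n > self.max_tokens:
            raise RuntimeError("token budget exceeded; session halted")
        self.used += n
        return self.max_tokens - self.used  # tokens remaining

budget = SessionBudget(max_tokens=1000)
print(budget.charge(400))  # 600 remaining
```

The point of a control like this is that a runaway agent exhausts its own session's budget and stops there, rather than running up cost across the whole deployment.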

This infrastructure runs in Equinix — setting up exactly the architecture Kal will address next.
Speaker: 
Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

1:20 — 1:30pm

Act 3:
What Governable AI Actually Looks Like

The Distributed AI Hub: Designing the Future of Inference, Sovereignty, and Scale
As AI moves from centralized training to distributed, real-time inference, the architecture of enterprise AI is fundamentally changing. This session introduces the Distributed AI Hub framework — a blueprint for orchestrating metro-edge inference, sovereign AI deployments, multi-cloud data flows, and emerging requirements like KV cache optimization and agentic workflows — all underpinned by private, secure interconnection. We'll share practical design patterns and recommendations that leading enterprises are adopting today to build low-latency, compliant, and scalable AI systems that will define the industry standard moving forward.
Speaker: 
Kaladhar Voruganti — VP & Senior Technologist, AI, Equinix

1:30 — 1:40pm

Act 3:
What Governable AI Actually Looks Like

When Algorithms Decide How Healthcare Providers Get Paid — Who Wins?
Every year, billions of dollars in healthcare are decided by systems most people never see. Increasingly, those decisions aren’t made by humans; they’re made by AI. Payers are already using AI at scale to deny claims, downgrade severity, and optimize what gets paid. While hospitals are still trying to respond, this imbalance doesn’t just affect providers. It quietly shapes the economics of the entire system—impacting costs, employers, and ultimately everyone who relies on healthcare.

For the first time, that dynamic is starting to shift. Hospitals now have access to the same underlying data and the ability to apply AI to uncover what was previously invisible: patterns in underpayments, denials, and payer behavior happening at scale. This session explores how health systems are moving from reactive appeals to proactive intelligence, and what happens when both sides of a multi-billion-dollar system are powered by AI. When decisions move from people to algorithms, the question isn’t just what happens—it’s who has the advantage.

Takeaways:
- How AI is already influencing billions in healthcare payments behind the scenes.
- Why asymmetry in AI capabilities creates real financial consequences.
- How hospitals can use their own data (837/835) to uncover hidden revenue loss.
- What it means to shift from reacting to payer decisions to anticipating them.
Speakers: 
F.X. Campion, M.D., FACP — Clinical Informaticist, Atrius Health · Clinical Instructor, Harvard Medical School
Jeff Wennberg — Co-Founder and Chief Product Officer, Healthcare Financial Data Science

1:40 — 1:50pm

Act 3:
What Governable AI Actually Looks Like

Decades of Dark Data, Finally Useful: Mass Storage AI That Doesn't Break Governance
Most enterprises are sitting on decades of accumulated data — contracts, clinical records, financial documents, compliance archives — that has been completely inaccessible to the organization that owns it. Not misplaced. Not lost. Stored, paid for, and invisible. IBM and Deloitte both put the figure at roughly 90% of enterprise data falling into this category — unstructured, unanalyzed, and until now, unreachable. NetApp's intelligent storage platform changes that. At approximately $200,000, it brings a generative AI interface directly to on-premises mass storage — purpose-built for hospitals, financial services firms, and SLED organizations where data sovereignty and compliance are non-negotiable. This demo shows what it looks like when an enterprise finally turns its dark data into a working asset: private, governed, auditable, and without a single byte leaving the building.
Speaker: 
Andy Sayare — Senior Director, Global Alliances for AI, NetApp · Former Start-up Co-Founder, Product Marketer, and Marketing VP

1:50 — 2:00pm

Act 3:
What Governable AI Actually Looks Like

AI-Driven Document Management & Process Automation
Most document AI fails the moment it meets the enterprise. Contracts, compliance documents, clinical records, and classified files don't belong in a public model — and in many industries, sending them there isn't just a bad idea, it's a regulatory violation. On top of that, most of an enterprise's data is unstructured, and most organizations store this data rather than use it.

Private AI is the answer, but most implementations get it wrong: slow retrieval, poor accuracy, no audit trail, and architectures that quietly phone home when no one is watching. This demo shows what actually works — intelligent data routing that keeps sensitive documents inside the perimeter, accurate extraction and summarization without cloud dependency, and governance built into every step of the pipeline.

The use cases span regulated industries where air-gapped deployment isn't optional: legal, clinical, financial, and government environments where the data never leaves the building and the system still has to perform.
Speaker: 
David Richard — Director, Digital Strategy & AI Automation, Terralogic · Specialist in Regulated Data and Private AI Deployments

2:00 — 2:10pm

Act 3:
What Governable AI Actually Looks Like

Generate in Action: Supporting Faster, Smarter Insurance Investigations in Brazil
A practical look at how AI can help insurers and mutual protection organizations streamline claims investigations, reduce fraud risk, and improve operational speed. Based on a Brazil insurance investigation use case, this session will show how Generate can support teams by accelerating evidence review, organizing case information, and helping investigators move from fragmented manual processes to a more intelligent, transparent workflow.
Speaker: 
Miguel Stief — Dominant Success, Global CEO, Executive Advisor, and Transformation Leader

2:10 — 2:20pm

Break

2:20 — 2:30pm

Act 3:
What Governable AI Actually Looks Like

The AI PC: Intelligence That Stays With You
A new category of computing is emerging—and it doesn’t live in the cloud. AI PCs powered by AMD Ryzen™ AI processors bring intelligence directly onto the device, combining CPU, GPU, and a dedicated NPU to run AI workloads locally, instantly, and privately. These machines don’t just respond—they anticipate, learning from your behavior, accelerating tasks before you ask, and organizing information in real time.

For enterprises and creators alike, this means faster performance, zero latency, and something even more important: control. Your data stays on your machine. Your workflows run without dependency on external servers. And with platforms like Iterate.ai’s Generate, these devices become fully operational AI hubs—capable of running agentic workflows, analyzing documents, and automating business processes entirely offline.

In a world racing toward cloud AI, AMD is making a different bet: that the most powerful AI may be the one that never leaves your device.
Speaker: 
Rakesh Anigundi — Director of Product, Ryzen AI Product Lead, AMD

2:30 — 2:40pm

Act 3:
What Governable AI Actually Looks Like

OpenAI Wants To Own The Retail Checkout; Here’s How
OpenAI and Google (Gemini) are moving to own the entire commerce loop—from discovery to decision to checkout—and that changes everything.

The moment a purchase happens inside the AI, the platform doesn’t just facilitate a transaction; it captures intent, behavior, and begins to build memory—an evolving record of preferences, habits, and past purchases that compounds over time.

Concepts like UCP/ACP are emerging as shorthand for this new model, where agents don’t just recommend products—they remember what you like, anticipate what you need, and increasingly act on your behalf, often without ever sending you to a brand’s website. That’s a profound shift.

For years, brands fought to escape intermediaries—retailers, marketplaces, search engines—to build direct relationships with customers. Now, the intermediary is becoming the interface itself. If the AI remembers your preferences, makes your decisions, and completes your purchases, it starts to look a lot like the brand. And if that’s true, the real battle isn’t for shelf space or ad placement—it’s for control of the checkout, because whoever owns that moment may own the customer.
Speaker: 
Michelle Pacynski — former VP of Innovation, Ulta Beauty

2:40 — 2:45pm

Act 3:
What Governable AI Actually Looks Like

If OpenAI Remembers Every Shopper—and Owns Checkout—Who Owns the Customer?
The demo you just saw isn’t theoretical—it works. The protocols are real. Checkout inside ChatGPT is already happening, and Michelle will show how surprisingly easy it is to set up.

But before every brand rushes to plug in, there’s a harder question—one without a clean answer: if ChatGPT closes the sale, who owns the customer? Not the transaction—the relationship. The data. The memory of every interaction, every preference, every purchase—and the ability to act on it.

For decades, brands have fought to get closer to their customers, moving from department stores to DTC, from wholesalers to owned channels. Amazon reshaped that dynamic, giving brands access while quietly watching what worked—and then building its own competing products, sometimes nearly identical, like eBags packing cubes.

Agentic commerce may be an even bigger shift. When the interface becomes AI, the brand risks becoming invisible, while the platform owns discovery, decision, and retention—and now, memory. And memory compounds. It gets smarter with every interaction, making the AI more valuable to the consumer than any single brand.

Tim has lived this from the brand side at Jockey. This isn’t a solution—it’s a provocation. Because the brands asking this question now may be the only ones who still have a relationship left to protect.
Speaker: 
Tim McCue — SVP Global Operations, Jockey International

2:45 — 2:55pm

Act 3:
What Governable AI Actually Looks Like

A Semi-Autonomous Social Engagement Engine That Stays Governed and Retains Your Brand Voice at Scale
What happens when brand engagement becomes semi-autonomous — but still brand-safe, measurable, and governed? This demo showcases the AI-driven engagement engine built for large brands, designed to monitor, respond, and amplify engagement across TikTok, Instagram, and Facebook in real time. Unlike simple chatbots or auto-replies, this system ingests high-velocity social signals, understands context and brand voice, generates compliant responses, escalates when human intervention is required, and tracks performance and sentiment impact end to end. This is not AI posting randomly. It is instrumented, observable, constrained engagement at scale — and a live example of what governed autonomy looks like in a consumer-facing environment.
Speaker: 
Magnus Tagtstrom — Corporate VP, Iterate.ai · Former Global VP of Innovation, Circle K - Couche-Tard (16,000 stores)

2:55pm — 3:05pm

Act 3:
What Governable AI Actually Looks Like

AI in a Community Hub: Serving People Who Don't Have Enterprise IT Departments
Every demo today has shown AI working inside well-resourced organizations — hospitals, retailers, security firms, global data centers. This one is different. Community hubs serve people who need help navigating benefits, housing, employment, and services — and they do it with thin budgets, volunteer staff, and no dedicated IT infrastructure. This demo shows what responsible AI deployment looks like when the user isn't a knowledge worker with a corporate laptop — it's a parent trying to find childcare, a veteran navigating benefits, or a senior who needs help understanding a medical bill. The governance requirements here are the most human of any demo in this act: no data leaves the community, no decision is made without a human in the loop, and the system has to work for people who didn't choose to interact with AI. Randy shows what that looks like when you get it right.
Speaker: 
Randy Kohn — President, Adventure Mind

Act 4:
What Can't Wait

Theme: The choices that have no later.
Takeaway: The architecture you choose now is the system you'll live with in 2027.

3:05pm — 3:25pm

Act 4:
What Can't Wait

The Decisions Enterprises Can't Defer: What 2026 Choices Look Like in 2027
This is not a summary. The day has already made the case. This is the moment where six people who have spent their careers making consequential decisions at scale tell you what they are taking out of this room — and what they think you should do before you leave.

Each presenter shares their five takeaways and why they matter. Not talking points. Not consensus. The unfiltered view of executives who have lived the consequences of getting technology strategy wrong — and right.
Speakers: 
Frank Kollmar — Former Global Deputy Managing Director, L'Oréal Dermatological Beauty Division · Former President & CEO, L'Oréal Canada

Chris Smith — EVP International & Chief Customer Officer, Jockey International

Hans Peter Brondmo — Former CEO, Everyday Robots (Google X) · Start-up Advisor · Visiting Committee Member, MIT Media Lab

Lynda Pak — Former SVP Technology, Estée Lauder

Diane Randolph — Board Director, Dollar Tree (NASDAQ: DLTR) · Board Director, Shoe Carnival (NASDAQ: SCVL) · Former CIO, Ulta Beauty

Sibito Morley — Co-Founder and President, Veromesh.ai · Former Chief Data Officer, Sinch (~1T of Western-world mobile message traffic flows through this network) · Former CTO, Lumen, Century Link, Davita

3:25pm — 3:30pm

Closing

Thank You – Open Floor: Questions, Challenges, Comments
Speaker: 
Jon Nordmark — CEO & Co-Founder, Iterate.ai · Co-Founder & Former CEO, eBags.com

3:30pm — 5:00pm

Cocktails & Networking

Jill's Restaurant & Bistro Bar  ·  St. Julien Hotel & Spa

"Every company is going to become an AI company. It’s not a question of if, it’s a question of how quickly."

Thomas Kurian, CEO of Google Cloud

AI Symposium III, Boulder CO

Spring 2026 Participants

St. Julien Hotel & Spa

The Fall 2025 AI Symposium II was held at the St. Julien Hotel & Spa, Boulder’s premier destination for luxury, location, and local charm. Make the most of your experience by staying on-site.

The St Julien is just one block from the iconic Pearl Street Mall, a pedestrian promenade lined with over 200 shops, galleries, and restaurants. Known for its vibrant street performers, public art, and mountain-town energy, Pearl Street offers a walkable, immersive slice of Boulder’s progressive and outdoor-loving culture.

Unbeatable Location: Nature + Culture at Your Door

St Julien sits at the crossroads of everything Boulder. In addition to Pearl Street, you’ll find:

  • Boulder Creek Path and hiking trails just across the street
  • Stunning views of the Flatirons from many rooms
  • Complimentary cruiser bikes to explore the city like a local

Nationally Recognized for Excellence

St Julien Hotel & Spa is one of Colorado’s most celebrated hotels, earning awards that place it in elite company:

  • Forbes Four-Star Award Winner — a distinction held by fewer than 15% of hotels evaluated worldwide
  • AAA Four Diamond Rating — awarded to just 6% of hotels assessed by AAA across North America
  • Frequently featured in Travel + Leisure, Condé Nast Traveler, and U.S. News & World Report as a top Colorado destination