AI Symposium

Private AI for the Enterprise

An invite-only gathering of senior leaders and builders deploying private AI in secure, real-world enterprise environments.

Our focus is on live demonstrations of real private AI deployments—showing how enterprises are operationalizing AI today, from data protection and compliance to measurable business results.

April 13-14, 2026 · St. Julien Hotel, Boulder, Colorado

Register Now
Rapid-fire 15-minute demos

What makes this event different?

Participants Represent Each Layer of the GenAI Tech Stack

Our AI Symposiums are all about action. After hearing from top AI leaders, you’ll dive into real-world successes that tackled complex industry challenges. Leave with new friends, plus insights you can immediately apply.

With executives from retail, finance, telecom, and beyond—plus experts across the AI stack—we break down silos to drive real innovation.

No vendor pitches. No hypotheticals. Just live examples of AI models, hardware, and data operating in safe, governed environments.

Spring 2026 Agenda

IterateOn helps executives understand how AI really works—from the ground up.

From infrastructure and models to the application layer, where agents do real work: usually with humans in the loop, but increasingly on their own.

That understanding leads to outcomes that matter: higher revenue, lower costs, and faster throughput.

Outcomes are the obvious goal. But without seeing the full system, you can’t optimize it. And when you don’t understand the whole picture, you’re forced to trust those who do.

    Invitation Only

    April 13th

    AI Governance & Executive Accountability

    3:00pm — 6:00pm

    Summary

    CLOSED-DOOR EXECUTIVE SESSION
    St. Julien Hotel & Spa, Boulder
    ~25 Senior Leaders  ·  No Press  ·  No Recording
    A highly interactive training session — not a one-way lecture. Designed for ~25 board members and senior executives. Space held for real-time questions and candid discussion. This is the first run of a scalable board AI education program that will continue beyond IterateOn as a repeatable, company-by-company engagement.

    This afternoon is the first half of a two-part argument. Today we build the case that boards can no longer govern what they don't understand — covering fiduciary duty, legal exposure, and what runtime control actually means at the board level. Tomorrow, the full-day session goes a layer deeper: into the architecture, the memory, the failure modes, and what it actually takes to govern AI at the execution layer. If you're here for both days, that's intentional. Today gives you the why. Tomorrow gives you the what.

    3:00pm — 3:05pm

    Opening

    AI Has Already Crossed the Control Threshold
    Most boards still think of AI as a productivity tool. It is now an operational system — running inside enterprises, making decisions, and creating legal exposure before governance structures have caught up. This opening frames the session's core argument: boards that don't understand AI can't govern it, and boards that can't govern it are carrying liability they don't know they have. AI is already operating inside your enterprise — with or without board visibility. Governance structures lag deployment by 12–24 months in most public companies. Accountability and liability are accumulating faster than the earnings impact is visible. Executive ownership must tighten now — not after the first incident.
    Speakers:
    Jon Nordmark — CEO & Co-Founder, Iterate.ai
    Harry Surden — Professor of Law, University of Colorado Law School · Leading Scholar at the Intersection of AI and Legal Systems

    3:05pm — 3:15pm

    AI Assessment

    An Invitation to Take a Free Assessment: Are You Actually Ready to Deploy AI? Most Companies Aren't.
    A structured assessment of what AI readiness actually requires across data, governance, tooling, and operational risk. Not a vendor checklist — a diagnostic built from real enterprise deployments.
    Speaker:
    Rob Taylor — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging tech risk

    3:15pm — 3:45pm

    Backdrop Panel

    The New Threat Landscape: AI-Speed Attacks and Board Accountability
    The threat landscape boards were briefed on two years ago no longer exists. AI has collapsed the timeline between intrusion and damage — what used to take attackers days or weeks now takes minutes.

    State-sponsored teams and loosely affiliated criminal networks alike are using frontier AI at every stage of the attack cycle. Your AI agents can now be hijacked through prompt injection — redirected by the content they read to exfiltrate data or take actions you never authorized. Identity is the new perimeter.

    And when an AI-enabled attack moves faster than any human analyst can respond, the question isn't just whether you detect the threat. It's whether your systems can respond before the damage is done.

    Sibito has been inside this fight — working with the FBI and Interpol to fingerprint and disrupt AI-automated attack operations. What he found: most enterprise defenses were built for a slower world.

    Navneet's platform at Rigor.ai was built for this one. The core insight is simple and urgent: AI-enabled attacks don't give you time to be reactive. Rigor.ai detects threats in real time and responds in real time — mathematically rigorous, preemptive, and designed to close every known attack vector before exploitation occurs. Not a dashboard that alerts your team. A system that acts before the damage is done.

    The board's role here is not technical oversight. It is accountability. Three questions every director should be able to answer after this session: Does our organization detect and respond to threats in real time — or do we find out days later? Is our security architecture built for AI-speed attacks, or for the threat landscape of five years ago? And who is specifically accountable for that answer?
    Moderators: 
    Mike Edwards — Former CEO, four companies · Former Board Member, four public companies

    Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging tech risk
    Panel: 
    Sibito Morley, JD — Former Chief Data Officer, Sinch (~70% of Western-world SMS traffic flows through this network) · Former C-level at Lumen, CenturyLink and VP at DaVita · Princeton- and BYU-educated attorney

    Navneet Yadav — Co-Founder & Chief Product Officer, Rigor.ai · Former Senior Director of Product Management, Palo Alto Networks · Co-Founder, CloudGenix (acquired by Palo Alto Networks) · IIT Bombay

    3:45pm — 4:00pm

    Session I

    AI Economics & Capital Discipline: What the CFO Knows That the Board Doesn't
    Public company credibility and AI cost exposures are at stake. AI spending is often invisible on the balance sheet — buried in cloud costs, fragmented across departments, and rarely surfacing in audit committees until it's already a problem. The real issues: cost opacity and shadow deployments the board doesn't see, margin erosion from inference spend and uncapped cloud consumption, audit scrutiny and reporting gaps emerging in public filings, and the question of capital allocation discipline — what ROI frameworks actually work for AI, and who is accountable when the numbers don't add up. This session frames AI governance as a financial control issue.
    Speaker:
    TBD — CFO

    4:00pm — 4:15pm

    Session II

    What Public Boards Must Now Require: Oversight, Fiduciary Duty, and the 'Reasonable Controls' Standard
    AI oversight is no longer optional for public company boards. Securities regulators, institutional investors, and plaintiffs' attorneys are all beginning to ask the same question: what did the board know, when did they know it, and what did they require? The answer is taking shape in courtrooms and regulatory filings: AI oversight is now an explicit board-level fiduciary responsibility. The 'reasonable controls' standard is being applied by courts and regulators today — not in some future enforcement cycle. Audit Committees need a reporting cadence that defines what appears, how often, and in what form. Disclosure is evolving fast — what companies are now putting in their 10-Ks and proxy statements looks very different from two years ago. And director liability is no longer theoretical: there are specific scenarios where personal exposure becomes real, and this session names them.
    Speaker:
    Audit Committee Chair, TBD — Public governance rigor, global automation oversight, serious board credibility

    4:15pm — 4:30pm

    Break

    4:30pm — 4:45pm

    Session III

    AI Legal Exposure: Where Liability Is Already Emerging
    The legal landscape around enterprise AI is not theoretical. IP contamination cases are in court. Privacy regulators are issuing fines. Consumer harm claims are being filed. The exposure spans IP contamination — when training data and model outputs create infringement liability — to data leakage and what GDPR, CCPA, and state laws now explicitly require of AI systems. AI misrepresentation and consumer harm claims are already being litigated, not just threatened. Negligent oversight is the standard boards are being held to — and enforcement trajectory is accelerating into 2026 and 2027. This session grounds the governance conversation in legal reality: where the exposure is today, where it's heading, and what it means for every executive in this room.
    Speaker:
    Rob Taylor, JD — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk

    4:45pm — 5:15pm

    Session IV

    Runtime Control & Enterprise Security: What 'Governing AI' Actually Means at the Execution Layer
    Governance on paper doesn't stop an AI agent from taking action at 2 am. This session translates board-level accountability into operational reality — what it actually means to control an AI system, how you know when it's behaving outside its boundaries, and what it takes to stop it mid-action. The operational layer covers identity for AI agents, zero trust for autonomous systems, observability and intervention authority, and live runtime enforcement. Then the conversation moves one layer deeper — to the network infrastructure most boards have never seen. Your AI agents are traveling across network boundaries that were never designed for autonomous systems, at a latency you can't measure, and through infrastructure you didn't architect for this purpose. That absence of visibility is a risk that belongs on every Audit Committee agenda.
    Moderator: 
    Justen Aguillon — Director, Technology Partner Ecosystem, Equinix · Architect of the Fabric Intelligence vision connecting AI providers and enterprise subscribers across 270+ global data centers
    Panel: 
    Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

    Stuart Oliver — Principal WW AI GTM & Growth Solutions Leader, NetApp

    Ynjiun Paul Wang, Ph.D. — SVP, USI Group Head, CE & Telematics II Group · Inventor of the 2D barcode, found on the back of every driver's license and countless other products

    5:15pm — 5:45pm

    Executive Roundtable

    The Five Governance Decisions Every Board Must Make in 2026
    Each of the five questions below represents a decision that every public company board should be able to answer by the end of 2026.
    1. Where is AI operating inside our organization without board-level visibility?
    2. Who owns runtime authority — the ability to stop an AI system mid-action?
    3. How is AI cost tracked, reported, and governed at the board level?
    4. What is reported to the Audit Committee, and on what cadence?
    5. What must change in the next 90 days — and who is accountable for it?
    The goal is not inspiration — it's action commitments. Each participant leaves with a clear view of where their organization stands and what must change.
    Moderator: 
    Rob Taylor — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk
    Panel: 
    Mike Edwards — Former CEO, four companies · Former Board Member, four public companies

    Diane Randolph — Member, two public company boards: Dollar Tree, Shoe Carnival · Former CIO, Ulta Beauty

    Jodi Watson — Current or past member of public company boards: Dakota Supply Group, PetMed Express

    Elaine Boltz — Former Board Member, Brinker International · Board Member, AARP · Expert Advisor to BCG

    6:00pm — 9:00pm

    Evening Reception

    St. Julien Hotel & Spa  ·  Boulder, Colorado
    Recognizing leaders advancing AI governance, innovation, and enterprise responsibility.
    Conclude with drinks and networking, continuing the connections and discussions sparked throughout the day.

    "If the rate of change on the outside exceeds the rate of change on the inside, the end is near."

    Jack Welch

    A Full Day of Events

    April 14th

    Private AI. Quantum Reality. Governable Systems.

    7:30am — 8:00am

    Continental Breakfast

    Location: Ballroom Pre-Function, St. Julien Hotel & Spa

    8:00am — 8:15am

    OPENING

    The Point of No Return: What Happens When AI Stops Being a Tool and Starts Being a System
    Some of you were in the room yesterday afternoon. You heard the governance case — the fiduciary argument, the legal exposure, the question of who owns runtime authority when an AI system acts on its own at 2am. Today is the answer to the question that session was designed to raise: what does any of that actually look like inside a real enterprise system?

    For those joining us for the first time today: welcome. You're arriving at the right moment. The day turns on two irreversible shifts — AI autonomy and governance — and a third one most leadership teams aren't tracking yet: quantum sensing and computing, which are closer than the headlines suggest. The choices enterprises make in 2026 will determine whether they govern AI by 2027, or get governed by it.

    A rapid-fire P&L reality check follows: ten real, profitable AI use cases already driving measurable results across revenue, cost, fraud, compliance, and operations.
    Speaker: 
    Jon Nordmark — CEO & Co-Founder, Iterate.ai · Co-Founder & Former CEO, eBags.com — scaled to $165M in annual revenue and acquired by Samsonite

    Act 1:
    The Autonomous Turn

    Theme: AI is no longer waiting for instructions. It's already moving.
    Takeaway: We're not choosing tools—we're choosing futures.

    8:15am — 8:25am

    Act 1:
    The Autonomous Turn

    Robotic Cars Didn't Ask Permission: The Real Timeline for Autonomous AI — and What It Takes to Get There Safely
    Autonomous vehicles are already operating on public roads. But when will they be everywhere — and what does it actually take to get there? Chris spent three years building the AI stack at Zoox, designing and training the multimodal language models that interpret driving behavior, handle long-tail scenarios, and support validation. He brings a rare, unvarnished view of what happens when AI systems move from prototype to persistent real-world operation — and what governance gaps look like when the cost of getting it wrong isn't a bad quarter, it's a life.
    Speaker: 
    Chris Heckman, Ph.D. — Professor, University of Colorado · Former AI Stack Engineer, Zoox · Former Postdoctoral Fellow, Naval Research Laboratory

    8:25am — 8:40am

    Act 1:
    The Autonomous Turn

    OpenClaw, MoltBook, and the $8 Million Wake-Up Call: What Happens When Agents Go Feral
    In January 2026, OpenClaw went viral. By February: 1.5 million agents created, 770,000 spawned in a single week, 1.49 million database records exposed, an $8 million crypto scam executed, and Cloudflare's stock moved 14%. This is not a hypothetical. A candid teardown of what commercial-scale agent swarms look like when governance is an afterthought.
    Speaker: 
    Todd Sherman — Former CMO, Amplero · Established Amazon's Third-Party Marketplace

    8:40am — 8:55am

    Act 1:
    The Autonomous Turn

    2027 Is Not a Thought Experiment: Projecting the Autonomy Cliff
    By 2027, AI systems will act continuously, coordinate across agents, and operate beyond real-time human oversight. Magnus projects what happens when autonomy scales faster than governance — and why the architecture decisions enterprises make today determine whether they retain control at all.
    Speaker: 
    Magnus Tagtstrom — Corporate VP, Iterate.ai · Former Global VP of Innovation, Circle K - Couche-Tard (16,000 stores)

    8:55am — 9:15am

    Act 1:
    The Autonomous Turn

    If AI Runs the Institutions, Who Runs AI? On Being Human in the Age of Machines
    Hans frames AI not as a software upgrade but as a societal reconfiguration — a supercharged industrial revolution unfolding 10× faster with 10× the impact. What does it mean when AI increasingly runs corporations, governments, and schools? What does leadership actually mean in that world?
    Speaker: 
    Hans Peter Brondmo — former CEO, Google X Robotics spin-out · Board Member, MIT Media Lab

    Act 2:
    Why Public AI Breaks in the Enterprise

    Theme: Architecture, memory, and risk.
    Takeaway: The most dangerous systems aren't the ones you deployed intentionally. They're the ones you didn't know were running.

    9:15am — 9:30am

    Act 2:
    Why Public AI Breaks in the Enterprise

    Your AI Doesn't Store Data the Way You Think It Does
    AI systems don't store information the way IT systems do. They accumulate state through working memory, retrieval, tool outputs, and long-lived context — each with different persistence, cost, and risk profiles. This session explains where memory actually lives, how it decays, and why it behaves nothing like a database.
    Speaker: 
    John Selvadurai, Ph.D. — VP R&D, Iterate.ai · Former SAP Architect

    9:30am — 9:45am

    Act 2:
    Why Public AI Breaks in the Enterprise

    The FBI Called. Interpol Called. Here's What AI-Enabled Crime Actually Looks Like Now.
    Threat actors like Scattered Spider — thought to be roughly 1,000 loosely affiliated actors operating across the US and UK — are among the fastest adopters of AI on the planet. And they're not alone.

    State-sponsored AI teams from Russia, China, North Korea, and Iran are now using frontier models at every stage of the attack cycle: reconnaissance, phishing, malware development, and data exfiltration. North Korea uses AI to synthesize intelligence on targets at defense companies. Iran uses it to augment reconnaissance and map business partner networks. China uses it to conduct vulnerability analysis and penetration testing planning against US targets.

    These groups are deploying AI across the full attack surface. Social engineering is one method — cloning executive voices, manipulating help desks, automating identity takeover at scale. But it doesn't stop there. The same groups use agentic AI scripts to infiltrate code repositories and accelerate source code theft, run automated reconnaissance that maps internal networks and locates SOPs faster than any human analyst, exploit leaked credentials to move laterally before anyone knows they're inside, and — increasingly — use prompt injection to hijack AI agents operating inside enterprise systems, redirecting them to exfiltrate data or execute actions their owners never authorized. When your AI agent can be told what to do by the content it's reading, the attack surface isn't just your network. It's every document, email, and data feed your agents touch.

    Sibito has been inside this fight. His team built the predictive fingerprinting system — developed in collaboration with the FBI and Interpol — that detected the behavioral signatures left by AI-automated attack operations and led directly to major arrests. What he found, and what he'll discuss this morning: identity is now the perimeter, and AI has made every attack vector faster, cheaper, and harder to detect.

    The question isn't whether your organization is a target. It's whether your defenses were built for the threat that exists today.
    Speaker: 
    Sibito Morley — Former Chief Data Officer, Sinch (~70% of Western-world SMS traffic flows through this network) · Former CTO, Lumen, CenturyLink, and DaVita

    9:45am — 10:00am

    Act 2:
    Why Public AI Breaks in the Enterprise

    Three Rings: Why Private AI Is an Architecture Decision, Not a Vendor Choice
    Private AI is not a product category. It is an architectural commitment spanning data, models, and hardware. Brian introduces the Three Rings framework and explains precisely why control, containment, and accountability collapse under public AI architectures — and what it takes to build systems where governance is structural, not performative.
    Speaker: 
    Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai · Former Engineering Leader, Apple (Secret Products)

    10:00am — 10:15am

    Break

    10:15am — 10:45am

    QUANTUM INTERLUDE

    When the Physics Change, the Architecture Has to Change With It  ·  30 minutes  ·  3 talks
    Boulder is not a coincidence. It has become one of the world's most concentrated clusters of quantum research and development — anchored by CU Boulder's five Nobel Laureates, NIST, and a growing ecosystem of deep-tech companies building the hardware that will define the next era of computing. Thirty percent of the world's quantum sensing companies are in the Boulder area. The next three talks show why that matters right now — and how quantum and AI are converging faster than most leadership teams are tracking.

    10:15am — 10:25am

    QUANTUM INTERLUDE

    Talk 1, 10 minutes
    What Is Quantum Computing — and Why Should Your Organization Start Preparing Right Now?
    Quantum computing isn't science fiction. It's a staged, measurable progression — and one of those stages will break every encryption standard your organization currently relies on. Robert walks through what quantum computing actually is, how it develops in stages (from noisy intermediate-scale systems to fault-tolerant machines), and why the window to prepare your data and security architecture is closing faster than most executives realize. The threat isn't abstract: when a sufficiently powerful quantum computer arrives, RSA encryption, TLS, VPNs, and most of the cryptographic infrastructure of the internet become vulnerable overnight. The question isn't whether this happens — it's whether your organization will be ready when it does.
    Speaker: 
    Robert Wamsley, Ph.D. — Physicist & Quantum Algorithm Researcher, Quantum Rings · Building tools to simulate quantum futures before the hardware arrives

    10:25am — 10:35am

    QUANTUM INTERLUDE

    Talk 2, 10 minutes

    Quantum Is Already Here: Detecting Cancer Through Breath, Powered by AI and Funded by DARPA
    Quantum isn't only about computing — it's about sensing. Eva's company Flari uses quantum-grade optical frequency comb technology to detect disease biomarkers in breath with a precision that classical instruments can't match. The catch: the data these sensors generate is so complex that only AI can interpret it at scale. Eva will show how AI and quantum sensing are already converging in healthcare — and how DARPA-funded research is accelerating the path from lab to clinical deployment. This is what it looks like when quantum comes to life in the real world.
    Speaker: 
    Eva Yao, Ph.D. — Founder & CEO, Flari · DARPA-Funded Pioneer in Quantum-Enabled Molecular Sensing for Early Cancer Detection

    10:35am — 10:45am

    QUANTUM INTERLUDE

    Talk 3, 10 minutes

    Why NVIDIA Is Betting on Boulder: Atom Computing, DARPA, and the Race to Build the World's First Useful Quantum Computer
    DARPA reviewed 18 of the world's most advanced quantum computing companies for its Quantum Benchmarking Initiative — a rigorous program designed to determine whether fault-tolerant, utility-scale quantum computers can be built by 2033. Only 11 survived to Stage B. Atom Computing, headquartered in Boulder, is one of them — alongside IBM, IonQ, and Quantinuum. In October 2025, Jensen Huang unveiled NVQLink — a new architecture directly connecting NVIDIA's GPU supercomputers to quantum processors — and Atom Computing was named one of its 17 founding hardware partners. NVIDIA's bet is straightforward: GPUs won't be replaced by quantum computers. They'll be the engine that runs them. Justin explains what this convergence means and what the AI-quantum hybrid era will actually look like for enterprise compute.
    Speaker: 
    Justin Ging — Chief Product Officer, Atom Computing · One of 11 Companies DARPA Selected as a Plausible Path to a Fault-Tolerant Quantum Computer · NVIDIA NVQLink Founding Partner · MIT Leaders for Manufacturing Fellow

    Act 3:
    Real-Time, Big Memory Is the New Control Plane

    Theme: How AI actually behaves—and breaks—at scale.
    Takeaway: The most dangerous systems aren't the smartest. They're the ones that remember.

    10:45am — 10:55am

    Act 3:
    Real-Time, Big Memory Is the New Control Plane

    Most AI Security Tools Weren't Built for AI: A Live Demo of Preemptive Cyberdefense
    Traditional security platforms were designed to protect networks, applications, and endpoints. They were not designed for AI systems that accumulate memory, call external tools, spawn subagents, and operate across enterprise infrastructure with minimal human oversight. The attack surface is different. The failure modes are different. The defense has to be different.

    Rigor.ai has built a mathematically rigorous cyberdefense management platform — one designed to identify, verify, and remediate every known attack vector that matters, before exploitation occurs. Not reactive. Not probabilistic. Mathematically complete. This live demo shows what that looks like against an AI system operating in a real enterprise environment: the vulnerabilities that standard tools miss, the attack paths that autonomous agents open, and what preemptive remediation actually looks like in practice.
    Speaker: 
    Navneet Yadav — Co-Founder & Chief Product Officer, Rigor.ai · Former Senior Director of Product Management, Palo Alto Networks · Co-Founder, CloudGenix (acquired by Palo Alto Networks) · Distinguished Engineer, Juniper Networks · IIT Bombay

    10:55am — 11:05am

    Act 3:
    Real-Time, Big Memory Is the New Control Plane

    The Great Repatriation: Why AI Workloads Are Moving Off the Public Cloud — and What That Means for Governance
    The cloud-first era is not ending. It is maturing. Sixty-nine percent of enterprises are now actively considering moving workloads back from public to private cloud — and more than a third have already done so. AI is accelerating that shift. When an AI agent operates continuously across public infrastructure, it creates latency, sovereignty, and observability problems that no policy document can fix. Justen makes the case that up to 50% of AI workloads will ultimately run in private or sovereign environments — not because of ideology, but because of physics, compliance, and control. He introduces AgentWatch, a real-time agent observability layer, and shows what it actually looks like to see, track, and govern AI agents across a distributed enterprise infrastructure — before something goes wrong.
    Speaker: 
    Justen Aguillon — Director, Technology Partner Ecosystem, Equinix · Architect of the Fabric Intelligence vision connecting AI providers and enterprise subscribers across 270+ global data centers

    11:05am — 11:15am

    Act 3:
    Real-Time, Big Memory Is the New Control Plane

    The Strategy Trap: How AI Accelerates Bad Strategy Faster Than Good Strategy
    While rapid experimentation is important, good strategy requires up-front thinking: put proper governance in place, then define the desired outcomes. Rapid experimentation belongs between those two barbell ends.

    Drawing on his Six Cs framework, Kevin shows how organizations fail when strategy is approved but not executed—and why autonomous AI makes that gap more dangerous and expensive. If you haven't made a clear decision on public vs. private vs. hybrid AI, your agents already made it for you.
    Speaker: 
    Kevin Ertell — Author, The Strategy Trap · Former Global VP, Nike Stores

    11:15am — 11:45am

    Act 3:
    Real-Time, Big Memory Is the New Control Plane

    Panel: Agent Proliferation Could Outrun Government and Enterprise Controls
    Agents are proliferating inside enterprises faster than leadership can track. The hardest part isn't choosing a model—it's controlling what agents remember, what they're allowed to do with that memory, and how quickly bad state spreads across tools, systems, and people. Where do real-world deployments actually fail? What guardrails work at runtime—not just on paper?
    Moderator: 
    Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai
    Panel: 
    Sibito Morley — Former Chief Data Officer, Sinch (~70% of Western-world SMS traffic flows through this network) · Former SVP, Lumen Technologies and CenturyLink
    Hans Peter Brondmo — former CEO, Google X Robotics spin-out · Board Member, MIT Media Lab

    11:45am — 12:00pm

    Act 3:
    Real-Time, Big Memory Is the New Control Plane

    The Other Side of the Race: What Chinese AI Models Actually Do That American Models Don't
    The narrative around Chinese AI has been defined by bans, export controls, and DeepSeek's cost efficiency. That's the small version of the story. The larger one: Chinese foundation models are being optimized for different objectives — longer context windows tuned for industrial and logistics applications, models trained on manufacturing and supply chain data at a scale American companies don't have, and architectures designed to run efficiently in sovereign, air-gapped environments where cloud dependency is a non-starter. Meanwhile, American models lead on reasoning, coding, and general capability — but are increasingly designed around consumer and developer use cases. For enterprise leaders choosing an AI architecture in 2026, the question isn't which side wins. It's which capabilities you need, where your data lives, and whether your current model choices reflect a strategy or just a default. Brian maps the landscape — what each ecosystem is actually building, where the gaps are real versus overhyped, and what it means for enterprises operating across both.
    Speaker: 
    Brian Sathianathan — Co-Founder & Chief Technology & AI Officer, Iterate.ai

    12:00pm — 12:30pm

    Lunch Buffet

    Location: Ballroom Pre-Function, St. Julien Hotel & Spa

    "Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed, processed, broken down, and analyzed for it to have value."

    Clive Humby, British mathematician and architect of the Tesco Clubcard, 2006

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.
    Takeaway: This is what responsible AI looks like in the real world.

    12:30pm — 12:35pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    A 17-Year-Old Built a Game in a Day. 18.9 Million People Played It in Three Weeks.
    This is what vibe coding looks like at its purest: no budget, no team, no marketing plan — just a teenager, agentic tools, and an idea. Highlandrr built Not Cute Anymore Tower on Roblox in a single day using AI-assisted development, then watched 18.9 million people play it in three weeks. Along the way he built a community of 1.3 million Roblox players in under two months — without a marketing department, a growth team, or a dollar of paid acquisition. No enterprise process. No governance framework. No sprint planning. The question this raises for every organization in the room: if a 17-year-old can build and ship a product used by millions in 24 hours, what does that mean for who builds what next — and what happens when your competitors figure that out before you do?
    Speaker: 
    Highlandrr — Creator, Not Cute Anymore Tower (Roblox) · Junior, Valor High School

    12:35pm — 12:40pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    The First Prompt Is the Most Important One: How to Start an AI Project Right
    Highlandrr had no guardrails. That's fine for a Roblox game. It is not fine for a hospital, a financial services firm, or a public company. This session is the enterprise answer to the question the previous demo raised: how do you move fast with AI without losing control of what it builds? Blake shows what responsible AI development actually looks like from the first line — scope, guardrails, and governance defined before a single line of code is written. Most enterprises skip this step entirely. This demo shows exactly what it costs them, and what it looks like when you get it right.
    Speaker: 
    Blake Stenstrom — AI Engineer, Iterate.ai · Princeton-Trained Algorithms Expert

    12:40pm — 12:50pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Watch This: A Governed AI Agent Built Live in Five Minutes
    No slides. No pre-recorded demo. Blake builds a production-ready AI agent from scratch on Generate's no-code platform in five minutes — live, in front of the room. Scope defined. Tools bounded. Memory controlled. Escalation rules set. Guardrails enforced at runtime. The second five minutes answers the question the first five raises: why does starting with governance make the agent more useful, not less? Most enterprises build first and govern later. This demo shows what it looks like — and what it costs — when you get that order wrong.
    Speaker: 
    Blake Stenstrom — AI Engineer, Iterate.ai · Princeton-Trained Algorithms Expert
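    The governance-first pattern this demo describes — scope defined, tools bounded, memory controlled, escalation rules set, guardrails enforced at runtime — can be sketched in a few lines of code. This is an illustrative sketch only; the class and field names (GovernedAgent, allowed_tools, escalate_on) are hypothetical and are not Generate's actual API.

    ```python
    # Hypothetical sketch of a governance-first agent wrapper.
    # Tools, memory, and escalation rules are declared BEFORE the
    # agent acts, and every action passes through them at runtime.
    from dataclasses import dataclass, field

    @dataclass
    class GovernedAgent:
        name: str
        allowed_tools: set = field(default_factory=set)  # tools bounded up front
        memory_limit: int = 50                           # controlled memory size
        escalate_on: set = field(default_factory=set)    # actions needing a human
        memory: list = field(default_factory=list)
        audit_log: list = field(default_factory=list)    # audit trail intact

        def remember(self, item: str) -> None:
            self.memory.append(item)
            # Oldest entries are evicted so agent state cannot grow unbounded.
            if len(self.memory) > self.memory_limit:
                self.memory = self.memory[-self.memory_limit:]

        def act(self, tool: str, action: str) -> str:
            if tool not in self.allowed_tools:
                self.audit_log.append(("blocked", tool, action))
                return "blocked"      # guardrail enforced at runtime
            if action in self.escalate_on:
                self.audit_log.append(("escalated", tool, action))
                return "escalated"    # human-in-the-loop escalation rule
            self.audit_log.append(("executed", tool, action))
            return "executed"

    agent = GovernedAgent(
        name="refund-bot",
        allowed_tools={"crm", "email"},
        escalate_on={"issue_refund"},
    )
    print(agent.act("crm", "lookup_order"))   # executed
    print(agent.act("crm", "issue_refund"))   # escalated
    print(agent.act("shell", "rm -rf /"))     # blocked: tool out of scope
    ```

    The point the demo makes is visible even at this scale: the constraints are not bolted on after the fact — they are the shape of the agent itself.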

    12:50pm — 1:05pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    A Semi-Autonomous Social Engagement Engine That Stays Governed and Retains Your Brand Voice at Scale
    What happens when brand engagement becomes semi-autonomous — but still brand-safe, measurable, and governed? This demo showcases the AI-driven engagement engine built for large brands, designed to monitor, respond, and amplify engagement across TikTok, Instagram, and Facebook in real time. Unlike simple chatbots or auto-replies, this system ingests high-velocity social signals, understands context and brand voice, generates compliant responses, escalates when human intervention is required, and tracks performance and sentiment impact end to end. This is not AI posting randomly. It is instrumented, observable, constrained engagement at scale — and a live example of what governed autonomy looks like in a consumer-facing environment.
    Speaker: 
    Magnus Tagtstrom — Corporate VP, Iterate.ai · Former Global VP of Innovation, Circle K - Couche-Tard (16,000 stores)

    1:05pm — 1:15pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Shopping Inside Gemini and ChatGPT
    Search is changing. Discovery is moving inside AI interfaces. And commerce is following. This demo shows what happens when product information, pricing logic, availability, and checkout workflows are inserted directly into ChatGPT and Gemini — enabling consumers to discover, evaluate, and complete purchases entirely inside a conversational interface. It's early, but the parallel is worth taking seriously: this may be what messaging-based commerce in China looked like before WeChat became a super app. Using structured commerce protocols — UCP and ACP — the demo walks through product discovery inside AI, inventory verification, dynamic pricing logic, secure checkout orchestration, and transaction completion without a single redirect. The result is commerce without a website — and a preview of where consumer expectations are heading.
    Speakers: 
    Tim McCue — VP, Jockey International
    Randy Kohn — President, AppBrew & Adventure Mind · First Salesperson at Attentive (scaled to $6B)

    1:15pm — 1:20pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Who Owns the Customer When the Storefront Is Someone Else's AI?
    The demo you just saw works. The commerce protocols are real. The checkout inside ChatGPT is real. But before every brand in this room races to implement it, there is a question that doesn't have a clean answer yet: if ChatGPT closes the sale, who owns the customer? Not the transaction — the relationship. The data. The next purchase. The ability to remarket, retain, and build loyalty over time. For decades, brands have fought to own the direct relationship — moving from department stores to their own websites, from wholesalers to DTC, from search ads to owned audiences. Agentic commerce may be the most significant reversal of that trend in a generation. Tim has lived this question from the brand side. This is a five-minute provocation, not a solution — because the honest answer is that no one knows yet. But the brands that ask it now will be better positioned than the ones who figure it out after they've already handed over the relationship.
    Speaker: 
    Tim McCue — VP, Jockey International

    1:20pm — 1:30pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Already Running: Four Private AI Agents Saving Real Time and Real Money Behind the Firewall
    These aren't prototypes. They're production agents running inside real organizations today — observable, controllable, and fully private. A Clinical Documentation Assistant accelerating a 150-doctor radiology practice without touching PHI. A Contract and Compliance Reviewer that flags risk and produces audit-ready summaries. An Operational Workflow Optimizer that automates internal processes with human escalation built in. An Executive Intelligence Assistant that delivers role-based summaries with controlled memory. Dr. Dan Reed presents the clinical reality — what it actually means to deploy AI in a regulated medical environment where a mistake isn't a bad quarter, it's a patient. The closing question: where does this actually live? The IBM presenter connects the demo to infrastructure reality — why private AI is a physical architecture decision and not just a policy choice.
    Speaker: 
    Dr. Dan Reed — Radiology Oncologist · Owner, 150-Physician Radiology Oncology Practice

    1:30pm — 1:40pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Decades of Dark Data, Finally Useful: Mass Storage AI That Doesn't Break Governance
    Most enterprises are sitting on decades of accumulated data — contracts, clinical records, financial documents, compliance archives — that has been completely inaccessible to the organization that owns it. Not misplaced. Not lost. Stored, paid for, and invisible. IBM and Deloitte both put the figure at roughly 90% of enterprise data falling into this category — unstructured, unanalyzed, and until now, unreachable. NetApp's intelligent storage platform changes that. At approximately $200,000, it brings a generative AI interface directly to on-premises mass storage — purpose-built for hospitals, financial services firms, and SLED organizations where data sovereignty and compliance are non-negotiable. This demo shows what it looks like when an enterprise finally turns its dark data into a working asset: private, governed, auditable, and without a single byte leaving the building.
    Speaker: 
    Jeff Liborio — Director, Global Strategic Alliances, AI/Technology Partners, NetApp

    1:40pm — 1:50pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    No Cloud. No Power. No Connectivity. AI Anyway: Far Edge Intelligence in the Field
    Most AI architecture assumes something that retail environments can't always guarantee: a reliable connection. Point-of-sale systems in basements, pop-up locations, outdoor markets, festivals, and remote stores can't wait for a signal — and they can't afford to fail when one isn't available. This demo shows what AI looks like when it runs fully offline on constrained hardware, with no cloud dependency, no latency from a round trip to a data center, and no single point of failure tied to connectivity. The enabling hardware is Qualcomm's Snapdragon 6490 chip — a platform Iterate.ai has built on specifically to bring inference to the edge without sacrificing governance or control. The architecture handles real retail workloads: inventory awareness, customer interaction, transaction processing, and exception handling — all running locally, all observable, all stoppable. For retail operators managing hundreds or thousands of locations, this isn't a future capability. It's a deployment decision available right now.
    Speaker: 
    Ynjiun Paul Wang, Ph.D. — SVP, USI · Inventor of the 2D Barcode, Found on Every Driver's License in America

    1:50pm — 2:00pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Air-Gapped and Accurate: Document AI That Works Where the Internet Can't Go
    Most document AI fails the moment it meets the enterprise. Contracts, compliance documents, clinical records, and classified files don't belong in a public model — and in many industries, sending them there isn't just a bad idea, it's a regulatory violation. Air-gapped AI is the answer, but most implementations get it wrong: slow retrieval, poor accuracy, no audit trail, and architectures that quietly phone home when no one is watching. This demo shows what actually works — intelligent data routing that keeps sensitive documents inside the perimeter, accurate extraction and summarization without cloud dependency, and governance built into every step of the pipeline. The use cases span regulated industries where air-gapped deployment isn't optional: legal, clinical, financial, and government environments where the data never leaves the building and the system still has to perform.
    Speaker: 
    David Richard — Director, Digital Strategy & AI Automation, Terralogic · Specialist in Regulated Data and Air-Gapped Deployments

    2:00pm — 2:10pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Hit Pause: Live Runtime Control of an AI Agent Mid-Action
    Every governance conversation focuses on models, prompts, and agents. Almost none of them focus on the network layer that those agents run across — which is where latency, data sovereignty, and observability actually break down at scale. This demo goes there. Justen shows how to pause, redirect, and terminate an AI agent in real time based on policy, cost, or security constraints. Not after the fact. Mid-action. This is what "human in the loop" looks like when it's built into the architecture rather than written into a policy document. The infrastructure making it possible: Equinix's Private Service Exchange — currently in beta — a governed private fabric connecting AI inference providers and enterprise subscribers across 270+ data centers in 77 markets. Fabric Intelligence adds the control plane on top: automated routing decisions, live telemetry, and dynamic segmentation. KPIs from real deployments include reduced time-to-inference, fraud detection acceleration, and complete elimination of public internet exposure for AI workloads. The demo is live. The agent is real. The pause button works.
    Speakers: 
    Justen Aguillon — Director, Technology Partner Ecosystem, Equinix · Architect of the Fabric Intelligence vision connecting AI providers and enterprise subscribers across 270+ global data centers
    Rob Taylor — Enterprise Technology Attorney, Carstens Allen & Gourley LLP · Advising companies on AI governance, data liability, and emerging technology risk
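    The mechanic at the heart of this demo — pausing or terminating an agent mid-run based on policy or cost — reduces to a control plane that every step must consult before executing. The sketch below is a minimal illustration of that pattern under assumed names (ControlPlane, authorize); it is not Equinix's Fabric Intelligence API.

    ```python
    # Illustrative sketch: a runtime control plane that can pause or
    # terminate an agent between actions, based on a cost policy.
    class ControlPlane:
        def __init__(self, cost_limit: float):
            self.cost_limit = cost_limit
            self.state = "running"        # running | paused | terminated
            self.spent = 0.0

        def authorize(self, step_cost: float) -> bool:
            # Every step asks permission BEFORE executing, so a pause
            # or kill takes effect mid-run, not after the fact.
            if self.state != "running":
                return False
            if self.spent + step_cost > self.cost_limit:
                self.state = "paused"     # automatic pause on cost policy
                return False
            self.spent += step_cost
            return True

    def run_agent(steps, plane):
        completed = []
        for name, cost in steps:
            if not plane.authorize(cost):
                break                     # agent halts mid-action
            completed.append(name)
        return completed

    plane = ControlPlane(cost_limit=1.0)
    done = run_agent([("fetch", 0.4), ("summarize", 0.4), ("publish", 0.4)], plane)
    print(done)         # ['fetch', 'summarize'] — third step exceeds budget
    print(plane.state)  # 'paused'
    ```

    Setting `plane.state = "terminated"` from outside the loop kills the run the same way — which is the "pause button works" claim, expressed as architecture rather than policy.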

    2:10pm — 2:20pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Inventory Without Guesswork: Private AI Forecasting at Retail Scale
    Inventory is where retail margins live or die — and it's one of the last major operational problems that most retailers are still solving with spreadsheets, gut instinct, and lagging reports. This demo shows what changes when private AI takes over forecasting, replenishment, and exception management at scale. No sensitive operational data leaves the building. No public model gets access to supplier relationships, pricing logic, or demand signals that competitors would pay to see. Cheryl brings the retailer's perspective — what it actually took to deploy this across hundreds of Shoe Carnival locations, what the governance requirements looked like, and what the P&L impact has been. Magnus shows the architecture: how private AI handles the edge cases, the exceptions, and the decisions that rules-based systems can't make — and how you know when to trust it and when to stop it.
    Speakers: 
    Cheryl Lindauer — SVP & CIO, Shoe Carnival
    Magnus Tagtstrom — Corporate VP, Iterate.ai · Former Global VP of Innovation, Circle K - Couche-Tard (16,000 stores)

    2:20pm — 2:30pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    Scheduling 7,000 Security Guards: AI in High-Stakes, High-Consequence Operations
    Most AI governance conversations use hypothetical stakes. This one doesn't. Tarian manages physical security operations across thousands of locations — and when an AI system makes a scheduling error, a coverage gap doesn't mean a missed deadline. It means an unprotected facility, a delayed response, or a person in the wrong place at the wrong time. This demo shows how private AI optimizes workforce scheduling, coverage gaps, and response times across 7,000 security personnel in real time — handling the complexity that no spreadsheet or rules-based system can manage at this scale. The governance requirements here are not theoretical: every autonomous decision the system makes has to be observable, auditable, and stoppable. Mark walks through what that looks like in practice — how the system earns trust, where human override is built in, and what the operational impact has been since deployment.
    Speaker: 
    Mark Stephens — SVP IT & BI, The Tarian Group

    2:30pm — 2:40pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    AI in Agriculture: When the Data Comes From the Ground Up
    Agriculture generates enormous amounts of data — soil conditions, weather patterns, yield histories, supply chain timing, equipment telemetry — and almost none of it has been accessible in a form that drives real decisions. This demo shows how private AI is being applied to that problem at the World Food Bank: turning ground-level data into forecasting, resource allocation, and operational decisions that directly affect food security outcomes. The governance requirements are different from enterprise IT — the infrastructure is distributed, the connectivity is unreliable, and the stakes are measured not in revenue but in whether communities eat. Richard brings a perspective that no other presenter today carries: what it looks like to deploy AI in environments where the margin for error is human, not financial.
    Speaker: 
    Richard Lackey — CEO & Chairman, World Food Bank

    2:40pm — 2:50pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    AI in a Community Hub: Serving People Who Don't Have Enterprise IT Departments
    Every demo today has shown AI working inside well-resourced organizations — hospitals, retailers, security firms, global data centers. This one is different. Community hubs serve people who need help navigating benefits, housing, employment, and services — and they do it with thin budgets, volunteer staff, and no dedicated IT infrastructure. This demo shows what responsible AI deployment looks like when the user isn't a knowledge worker with a corporate laptop — it's a parent trying to find childcare, a veteran navigating benefits, or a senior who needs help understanding a medical bill. The governance requirements here are the most human of any demo in this act: no data leaves the community, no decision is made without a human in the loop, and the system has to work for people who didn't choose to interact with AI. Randy shows what that looks like when you get it right.
    Speaker: 
    Randy Kohn — President, AppBrew & Adventure Mind

    2:50pm — 3:00pm

    Act 4:
    What Governable AI Actually Looks Like

    Theme: Control in practice—live demos.

    Takeaway: This is what responsible AI looks like in the real world.

    From Prompt to Production: Building a Governed AI Agent End to End
    At 12:40 this afternoon, Blake built a governed AI agent live on stage in five minutes. This is what happened next. Magnus and Blake will walk through the complete arc — from that first prompt to a fully deployed, observable, production-ready agent — showing how private AI moves from idea to operation without losing governance at any step along the way. Memory-bound. Tools scoped. Escalation rules enforced. Audit trail intact. This is not a separate demo. It is the conclusion of everything this act has been building toward: what it actually looks like when you take the principles from this morning — architecture, memory, runtime control, observability — and collapse them into a single end-to-end demonstration. The agent that started as a five-minute live build is now in production. That's the point.
    Speakers: 
    Magnus Tagtstrom — Corporate VP, Iterate.ai · Former Global VP of Innovation, Circle K - Couche-Tard (16,000 stores)
    Blake Stenstrom — AI Engineer, Iterate.ai · Princeton-Trained Algorithms Expert

    Act 5:
    What Can't Wait

    Theme: The choices that have no later.
    Takeaway: The architecture you choose now is the system you'll live with in 2027.

    3:00pm — 3:25pm

    Act 5:
    What Can't Wait

    Theme: The choices that have no later.

    Takeaway: The architecture you choose now is the system you'll live with in 2027.

    The Decisions Enterprises Can't Defer: What 2026 Choices Look Like in 2027
    This is not a summary. The day has already made the case. This is the moment where eight people who have spent their careers making consequential decisions at scale tell you what they are taking out of this room — and what they think you should do before you leave.

    Each presenter shares their five takeaways and why they matter. Not talking points. Not consensus. The unfiltered view of executives who have lived the consequences of getting technology strategy wrong — and right.
    Panel: 
    Mike Edwards — Prior CEO, four companies · Former Board Member, four public companies
    Diane Randolph — Board Director, Dollar Tree (NASDAQ: DLTR) · Board Director, Shoe Carnival (NASDAQ: SCVL) · Former CIO, Ulta Beauty
    Frank Kollmar — Former Global Deputy Managing Director, L'Oréal Dermatological Beauty Division · Former President & CEO, L'Oréal Canada
    Brian Tilzer — Former EVP, Chief Digital, Analytics & Technology Officer, Best Buy · Former SVP, CVS and Staples · Board Member, Signet Jewelers
    Chris Smith — EVP International & Chief Customer Officer, Jockey International
    Hans Peter Brondmo — former CEO, Google X Robotics spin-out · Board Member, MIT Media Lab
    Lynda Pak — Former SVP Technology, Estée Lauder
    Sibito Morley — Former Chief Data Officer, Sinch (~90% of Western-world SMS traffic flows through this network) · Former SVP, Lumen Technologies and CenturyLink

    3:25pm — 3:30pm

    Closing

    Thank You – Open Floor: Questions, Challenges, Comments
    Speaker: 
    Jon Nordmark — CEO & Co-Founder, Iterate.ai · Co-Founder & Former CEO, eBags.com

    3:30pm — 5:00pm

    Cocktails & Networking

    Jill's Restaurant & Bistro Bar  ·  St. Julien Hotel & Spa

    "Every company is going to become an AI company. It’s not a question of if, it’s a question of how quickly."

    Thomas Kurian, CEO of Google Cloud

    AI Symposium III, Boulder CO

    Spring 2026 Participants

    Accolades from Attendees

    Collette Tauscher
    Technology & Supply Chain Strategy Leader | Starbucks | Columbia Sportswear | Nike
    Reflecting on an inspiring week at the IterateOn AI Symposium in Boulder.

    The intersection of AI and quantum computing isn't just theoretical anymore—it's happening now, and the pace is remarkable. What struck me most wasn't just the technology itself, but the caliber of minds working to shape its application.

    We're entering an era where the tools we use to understand our world—how we map complex systems, manage operations, and measure outcomes—will fundamentally change. The leaders and builders I met this week are at the forefront of that transformation.

    Grateful to the IterateOn team for creating space for these critical conversations.
    Karla Arzola
    CIO | Board Member | Strategist | Campbell County Health | HCA HealthOne | Swedish Medical Center
    Reflections from the IterateOn Cross-Industry AI Symposium

    Last week’s AI Symposium hosted by Iterate.ai brought together some of the sharpest minds driving real transformation with artificial intelligence, not just talking about it, but building and deploying it.

    From healthcare to retail, manufacturing, and finance, one thing was clear: AI isn’t a future concept anymore, it’s an operational necessity.
    Top takeaways that hit home for me:
    • AI wins are now measured in ROI, not prototypes. The best case studies showed measurable cost reduction and productivity gains — and in healthcare, that translates directly to better patient outcomes.
    • Cross-industry learning is where the real breakthroughs happen. Seeing how other sectors use AI to optimize operations, predict issues, and automate complexity gave me new ideas for healthcare applications.
    • Governance and ethics are rising to the top of the AI maturity curve. The “move fast and break things” phase is over. We’re entering the “move smart and scale responsibly” era.
    • A personal highlight? The demo on how AI agents helped uncover $14M in lost revenue in healthcare, a reminder that innovation and financial stewardship can (and should) go hand in hand.
    Huge thanks to the Iterate.ai team for creating a no-fluff, high-value experience that focused on what’s real and working now!
    Rory Reichelt
    ISV Partnerships & Enterprise AI GTM | 2025 CRN People to Know Recipient | Intel
    This is the kind of AI event we need more of...real conversations, real tech, no fluff. Love seeing the full GenAI stack come together to talk strategy and execution.
    Vish Panchal
    GTM, AI Appliances | ASA Computers
    I got to see firsthand how AI is shifting from experimentation to execution.

    The message to take away: AI is already solving real problems, from food insecurity to healthcare to creativity, but privacy and trust are key; without them, innovation can’t grow.

    The best companies are those moving fast, testing often, and learning every day.

    Iterate brought together brilliant minds from Ulta Beauty, Adobe, Intel, Oracle, IBM, Dell, and many more, plus a State Senator, a Harvard professor, and AI model creators across industries to prove it.

    We watched how secure, on-prem AI can transform document retrieval and enterprise workflows.

    From privacy-first infrastructures to agentic AI, every discussion reinforced one thing. Grateful to be part of such an inspiring event.
    Robert Wamsley Ph.D.
    Researcher | Leader | Quantum Rings
    Yesterday, Quantum Rings was honored to speak at the IterateOn AI Conference in Boulder, CO—a leading event uniting top enterprise AI leaders nationwide.

    Our team joined the "Quantum + AI: A Field Trip Into the Future" session at the Colorado Quantum Incubator. The session featured insightful talks from our own Bob Wold and Robert Wamsley, alongside our friends Eva Yao (CEO of FLARI Tech), Scott Sternberg (Executive Director of CuBits + Colorado Quantum Incubator), Wendy Lea (Board, Elevate Quantum), and Colorado state senator Mark Baisley.

    The convergence of quantum computing and AI holds transformative potential. We’re thrilled to see some of the top leaders from the AI space looking for ways to drive innovation together.
    Gurpreit Juneja
    VP | CAIO | AI | Digital Transformation | Data Science | VisaSTACK Infrastructure 
    If AI were a power plant, IterateOn was the switchyard—high voltage, grounded lines, real load.

    The number that stopped the room? $14M. Found by ONE revenue agent.
    THE $14M STORY
    One revenue-integrity agent.
    $14M in quiet leakage found.
    No fanfare. Just math.

    This is what happens when we stop building chatbots and start building systems that DO.

    IterateOn—you built a builder's room, way to go!
    Luis Duarte
    Co-founder & CEO | Amoofy
    Deeply inspired by the conversations at IterateON 2025.

    A heartfelt thank you to the entire Iterate.ai team for hosting such a forward-thinking and generous gathering — a space where innovation, future-now efforts, and technology met.

    I was struck by how often the topic of human stories surfaced throughout the sessions — a powerful reminder that even as AI, data, and automation evolve, the heartbeat of progress remains our ability to listen, connect, and make meaning together.

    Special gratitude to Jonathan Greechan from Founder Institute for the insights and for always creating bridges that connect founders driven by both intellect and empathy.

    And a big shoutout to Chris Byrne, former IP lawyer and venture partner at Samsung, who beautifully articulated that “the one thing that hasn’t changed in over 300,000 years is how humans codify information — in a story.”
    Ganesh Harinath
    Former Vice President & CTO @ Verizon Media | Founder & CEO at Fiducia
    The invite-only IterateOn Symposium in Boulder, Colorado was outstanding — entirely focused on real-world AI use cases.

    It was a wonderful opportunity to make new friends and connections, bringing together a diverse community of leaders and offering a clear view into how the next generation of intelligent and immersive experiences is rapidly taking shape.
    Exceptional event.

    St. Julien Hotel & Spa

    The Spring 2026 AI Symposium will be held at the St. Julien Hotel & Spa, Boulder’s premier destination for luxury, location, and local charm. Make the most of your experience by staying on-site.

    Boulder is one of the most startup-dense cities in the U.S. on a per-capita basis and is home to Techstars, one of the world’s most influential startup accelerators. It is also a global center for quantum computing, with companies like Quantinuum and Atom Computing headquartered here—both identified by DARPA as two of the nine most strategically important quantum companies in the world. Anchored by CU Boulder, NIST, and a deeply collaborative innovation culture, Boulder sits at the intersection of frontier science and company creation.

    Unbeatable Location: Nature + Culture at Your Door

    St. Julien sits at the crossroads of everything Boulder. In addition to Pearl Street, you’ll find:

    • Boulder Creek Path and hiking trails just across the street
    • Stunning views of the Flatirons from many rooms
    • Complimentary cruiser bikes to explore the city like a local

    Nationally Recognized for Excellence

    St. Julien Hotel & Spa is one of Colorado’s most celebrated hotels, earning awards that place it in elite company:

    • Forbes Four-Star Award Winner — a distinction held by fewer than 15% of hotels evaluated worldwide
    • AAA Four Diamond Rating — awarded to just 6% of hotels assessed by AAA across North America
    • Frequently featured in Travel + Leisure, Condé Nast Traveler, and U.S. News & World Report as a top Colorado destination

    Register Now

    The future of AI moves faster—and smarter—when we share real experiences, not just headlines.

    At IterateOn, we’re here to help. Join the conversation. Partner with us. Or if you're curious about the agenda, reach out—we’ll connect you with the right person.

    This is peer-driven progress. Let’s build it together.

    Register Now