Frontrunners AI and Data Conference

Frontrunners is an annual software developers conference based in the Washington, DC area. This year, Frontrunners will be held on March 27, 2026, and will feature AI!

We Need Volunteers!

Help us out! The Frontrunners AI developers conference needs volunteers! Most jobs are simple and won't require you to miss any of the conference. No worries: you'd just be a regular attendee with one extra responsibility.

Volunteer Here!

Tickets!

Frontrunners 2026 General Admission tickets are available now (current price through at least 3/13).


Our Venue

Frontrunners 2026 will be held at Van Metre Hall at GMU's Arlington Campus, a fantastic facility for both speakers and attendees.

Sign up for News!

Don't miss out! Be the first to hear Frontrunners news.

Sponsors

For sponsorship information visit our sponsorship page, or email us at info at frontrunners dot tech.

2026 Speakers

Neil Chaudhuri

The Promise and Peril of Model Context Protocol

Danielle Maxwell

Stop Guessing, Start Testing: A Practical Framework for Evaluating LLMs and Prompts

Steve Drucker

Codepocalypse Now: Stop Worrying and Embrace Agentic AI Development

Chrissy Ching Mott

From Hype to Impact: How AI Is Reshaping the Way We Hire Technical Talent

Mark DeRosa

Data Quality is a Journey, not a Destination

Jofia Jose Prakash

Tiered AI Architectures: Cut Cost, Latency, and Brittleness Without Losing Quality

Hirenkumar Dholariya

AI Modernization for Regulated Data Using Semantic Intelligence

Jeremy Curcio

Engineering Better Prompts for AI Assisted Development

Courtney Yatteau

AI-Powered Gamification for the Web

Craig Wong

Lessons from a Three-Week AI-Assisted Refactor with Amazon Q Developer

Kapil Poreddy

Building a Real-Time LLM Classification Pipeline for Under $0.001 Per Request

Sandhya Subramani

My AI Just Upgraded Itself

NOTE: We are still confirming and announcing speakers. Our speaker list is subject to change.

2026 Schedule

8:30 AM
Registration
9:30 AM
Welcome
9:40 AM
Danielle Maxwell — Room 126 — Stop Guessing, Start Testing: A Practical Framework for Evaluating LLMs and Prompts
Working with LLMs can feel like trying to communicate in a brand new language. Your prompt says one thing, the model infers another, and the output is not what you expected. It can make you question your communication skills or whether AI is all hype. What if the problem lies somewhere in the middle?
When you write a function, you know what the end result should look like. LLMs introduce variability that can't be resolved by writing the "perfect prompt." They don't know your preferences, standards, or what "good" means until it is defined explicitly and verified systematically. Without that, it's all guesswork.
Developers know that testing surfaces problems early, yet most guidance focuses on enhancing prompts and maximizing context. In this talk, I'll walk through a practical framework for bringing software testing discipline to working with LLMs: defining what the LLM should do, establishing human-reviewed scoring metrics, and building structured test cases that generate valuable insight rather than just more outputs to scroll through.
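The testing discipline this abstract describes can be pictured with a small sketch. This is a hypothetical illustration, not the speaker's actual framework: the `LLMTestCase` structure, the keyword-based scorer, and the stand-in model are all invented here to show the shape of defining expected behavior, scoring it, and running structured cases.

```python
# A minimal sketch of structured LLM test cases: define what "good" means,
# score outputs against it, and run a whole suite. Hypothetical example only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LLMTestCase:
    prompt: str                          # input sent to the model
    reference: str                       # what "good" looks like, stated explicitly
    score: Callable[[str, str], float]   # a human-reviewed scoring metric

def exact_keyword_score(output: str, reference: str) -> float:
    """Fraction of expected keywords that appear in the output."""
    keywords = reference.lower().split()
    hits = sum(1 for k in keywords if k in output.lower())
    return hits / len(keywords)

def run_suite(cases: list[LLMTestCase], model: Callable[[str], str]) -> float:
    """Run every case through the model and average the scores."""
    scores = [c.score(model(c.prompt), c.reference) for c in cases]
    return sum(scores) / len(scores)

# Usage with a stand-in "model" so the sketch is self-contained:
cases = [LLMTestCase("Summarize: cats sleep a lot.", "cats sleep", exact_keyword_score)]
fake_model = lambda prompt: "Cats sleep most of the day."
print(run_suite(cases, fake_model))  # 1.0 for this trivial case
```

The point of the structure is that a failing score tells you which defined expectation broke, rather than leaving you scrolling through outputs.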
Danielle Maxwell is a Software Engineer at Rotational Labs, specializing in integrating AI into scalable architectures. Her expertise in full-stack development enables her to connect front-end interfaces with back-end AI-driven processes, enhancing system usability.
Danielle enjoys watching Liverpool matches (when they win) and discovering new foods while traveling.
10:30 AM
Courtney Yatteau — Room 126 — AI-Powered Gamification for the Web
AI can make gamification feel personal and responsive, without turning your app into a giant game engine. In this session, we'll build a set of small, practical features you can drop into real web apps: dynamic story beats, sentiment-aware feedback, and adaptive challenge tuning that reacts to player behavior.
You'll see how to wire these features into a modern JavaScript app with lightweight demos that focus on patterns you can reuse in real apps.
Courtney Yatteau is a Developer Advocate on Esri's Developer Experience Team, where she helps developers build better web applications with modern JavaScript tools, libraries, and mapping technology. Before Esri, she taught computer science and mathematics, which shaped her approach to making technical concepts clear, approachable, and visual.
Jofia Jose Prakash — Room 125 — Tiered AI Architectures: Cut Cost, Latency, and Brittleness Without Losing Quality
Building production-grade AI agents is less about choosing the smartest model and more about designing a system that stays fast, predictable, and affordable at scale. This session introduces a tiered model-routing approach that assigns each workflow step to the cheapest tier that meets the quality bar, backed by strict schemas, validation gates, and bounded escalation. We will walk through a reference architecture from dispatch to retrieval, planning, tool execution, verification, and final synthesis, then show what to instrument so routing decisions stay honest: time-to-outcome, success rate, retry rate, escalation rate, and cost per successful resolution.
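The routing idea in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the speaker's reference architecture: the tier names, the stand-in model call, and the validator are placeholders for whatever schema and quality gates a real system would use.

```python
# A minimal sketch of tiered model routing with a validation gate and bounded
# escalation: try the cheapest tier first, escalate only on failure, and never
# escalate without bound. All names here are hypothetical placeholders.
from typing import Callable, Optional

TIERS = ["small", "medium", "large"]  # cheapest to most capable

def route(task: str,
          call_tier: Callable[[str, str], str],
          validate: Callable[[str], bool],
          max_escalations: int = 2) -> Optional[str]:
    """Assign the task to the cheapest tier whose output passes validation."""
    for tier in TIERS[: max_escalations + 1]:
        output = call_tier(tier, task)
        if validate(output):      # strict schema/quality gate
            return output
    return None                   # bounded: give up instead of escalating forever

# Usage with stand-in components:
def fake_call(tier: str, task: str) -> str:
    return "{}" if tier == "small" else '{"answer": 42}'

valid_json_with_answer = lambda s: "answer" in s
print(route("classify this", fake_call, valid_json_with_answer))
```

Instrumenting each loop iteration (which tier answered, how often escalation fired) is what yields the escalation-rate and cost-per-resolution metrics the abstract calls out.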
Jofia Jose Prakash is an Architect and Enterprise Strategist who builds enterprise-grade AI systems that deliver measurable business and societal impact. With end-to-end AI/ML expertise from data and model development to deployment and governance, she designs ethical, scalable solutions that align with business goals, regulations, and responsible-AI standards. She also contributes to the AI community through thought leadership, mentoring, and initiatives that advance transparency and accountability.
11:20 AM
Kapil Poreddy — Room 126 — Building a Real-Time LLM Classification Pipeline for Under $0.001 Per Request
Everyone's building with LLMs. Almost nobody's talking about what it actually takes to make them reliable, fast, and cheap enough for production, especially when the output has real consequences.
I built an open-source system called Food-as-Medicine (FAM) that classifies grocery ingredients against family health profiles and delivers real-time nutritional risk scores and healthier swap suggestions at the point of purchase. In this talk, I'll open up the entire pipeline of this system and walk through the engineering decisions regarding its prompt architecture, JSON schema enforcement, hybrid classification, the scoring function, cost engineering, and telemetry and feedback loops.
I'll also discuss what broke. The failure modes are instructive and generalizable to any LLM pipeline you're building.
You'll leave with a production-tested prompt engineering pattern for structured LLM outputs, a hybrid LLM-plus-rules architecture that applies to any classification problem, cost modeling formulas for budgeting LLM API usage at scale, and an open-source codebase to start from.
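The hybrid LLM-plus-rules pattern and the cost arithmetic behind the title can be sketched briefly. This is a hypothetical illustration, not the FAM codebase: the rules table, the schema check, and the pricing numbers are invented to show the pattern of letting deterministic rules absorb clear cases so only ambiguous inputs pay for an LLM call.

```python
# A minimal sketch of hybrid rules-plus-LLM classification with a JSON schema
# check, plus a back-of-envelope cost model. Hypothetical values throughout.
import json

RULES = {"sugar": "high_risk", "spinach": "low_risk"}  # exact-match fast path

def classify(ingredient: str, llm_call) -> dict:
    key = ingredient.strip().lower()
    if key in RULES:                      # rules tier: zero marginal cost
        return {"ingredient": key, "risk": RULES[key], "source": "rules"}
    raw = llm_call(key)                   # LLM tier: costs real money
    result = json.loads(raw)              # enforce structured JSON output
    assert {"ingredient", "risk"} <= result.keys()  # minimal schema check
    result["source"] = "llm"
    return result

# Cost model: only the fraction of requests that reach the LLM tier is billed.
def cost_per_request(llm_fraction: float, tokens: int, price_per_1k: float) -> float:
    return llm_fraction * (tokens / 1000) * price_per_1k

print(cost_per_request(0.2, 300, 0.01))  # roughly 0.0006, under $0.001/request
```

Shrinking either the LLM fraction (better rules coverage) or the token budget (tighter prompts) is what drives the per-request cost down.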
My work operates at the intersection of large-scale digital infrastructure and population health transformation, designing AI-enabled, cloud-native systems that translate intelligence into real-world outcomes at a national scale.
My areas of focus include: high-reliability, cloud-native platforms; multicloud architecture optimization for resilience, cost efficiency, and continuous availability; AI-driven engineering systems, from agentic automation to intelligent incident response; and retail-integrated digital health and care delivery models.
Across two decades, my work has progressively evolved from digital health foundations to architecting infrastructure capable of delivering preventive care, diagnostics, and essential services to millions, demonstrating how software engineering leadership can directly shape societal outcomes. I actively contribute through mentoring, research publications, industry talks, and open-source initiatives focused on scalable AI systems, digital health innovation, and resilient infrastructure design.
12:00 PM
Lunch Break
1:20 PM
Chrissy Ching Mott — Room 126 — From Hype to Impact: How AI Is Reshaping the Way We Hire Technical Talent
AI is no longer a future-state conversation. It is actively changing how technical teams are built, how work is structured, and how talent decisions impact the business. This session cuts through the hype to focus on what we are seeing firsthand in the tech hiring market.
We will explore how AI is influencing hiring models, including the growing shift toward contract and contract-to-hire talent as organizations seek flexibility, speed, and reduced long-term risk. We will also examine how these changes are affecting our clients’ businesses more broadly, from valuation and operating models to workforce planning and long-term talent strategy.
Finally, we will address the reality of job displacement. Rather than broad generalizations, this discussion looks at which roles are truly being impacted by AI-driven automation, where demand is increasing, and how these shifts are contributing to changes in unemployment across technical and adjacent roles.
This talk is designed to provide practical insight for leaders and practitioners who need to make informed talent decisions in an AI-influenced market, grounded in real client outcomes and market data rather than speculation.
Chrissy Ching Mott is the Managing Partner of Celertek, a national boutique staffing and talent advisory firm that partners with enterprise and high-growth organizations to build high-performing technology and digital teams. Under her leadership, Celertek has earned a strong reputation for delivering thoughtful, results-driven talent solutions grounded in market insight and long-term partnership.
Steve Drucker — Room 125 — Codepocalypse Now: Stop Worrying and Embrace Agentic AI Development
The "AI is coming for your job" panic is the wrong movie. Software teams just got a new kind of coworker: an agent that can read, write, refactor, test, and ship code alongside you. In this fast-paced talk, we will demystify agentic AI development, what it is (and what it isn't), why it changes the shape of engineering work, and how to adopt it with confidence.
We will cover the practical playbook: choosing the right tasks for agents, building guardrails (tests, linting, CI gates, permissions), designing “human-in-the-loop” review loops, and avoiding the classic failure modes like prompt spaghetti, silent regressions, and security faceplants. You’ll leave with a clear mental model, a handful of repeatable patterns, and the confidence to treat agents as leverage, not lore, so you can ship faster while keeping quality and accountability intact.
Steve Drucker is the SVP of Technology at flyExclusive, where he builds modern, AI-powered products at the intersection of software, operations, and customer experience. He's currently focused on turning complex workflows into clean, data-driven tools, from real-time communications and document automation to smarter quoting and scheduling systems in private aviation. Steve is known for blending pragmatic engineering with a sharp product eye, translating messy real-world constraints into systems that scale.
2:10 PM
Neil Chaudhuri — Room 126 — The Promise and Peril of Model Context Protocol
Model Context Protocol took the AI world by storm, and it has become the de facto standard for building agents. MCP offers enormous potential for automating workflows, but there are critical concerns as well. This talk teaches you to harness MCP today and offers a peek into the future of agents.
Neil Chaudhuri is the President of Vidya, LLC, headquartered in Virginia and specializing in modernization for government and commercial clients since 2010.
With 25 years of experience as a software architect, engineer, and team lead, Neil brings a wealth of knowledge in modern programming languages including Kotlin, Java, Scala, TypeScript, Go, and Python. His expertise spans multiple domains including AI, cloud native, web development, APIs, and cybersecurity. Neil is also a blogger, speaker, and occasional podcast guest.
Jeremy Curcio — Room 125 — Engineering Better Prompts for AI Assisted Development
When asking AI to help refactor a method, do you get back something that runs, but would never pass review? The difference between frustrating and useful AI interactions comes down to how you build your prompt. This talk offers practical advice you can apply to get better results.
My name is Jeremy Curcio, and I'm a software engineer at Login.gov working as an Integration Engineer, splitting my time between supporting partners as they develop their integrations and writing code to support the mission. I've been developing software professionally for 12 years, the last six of them in Ruby. Outside of work, I'm very involved with my kids, coaching their baseball, soccer, and basketball teams and serving as President of their schools' PTA.
2:40 PM
Snack Break
3:10 PM
Sandhya Subramani — Room 126 — My AI Just Upgraded Itself
What happens when agents encounter tasks they cannot solve? Most agents fail because their capabilities are fixed. This talk explores a new approach using open-source frameworks where agents can now adapt themselves by dynamically generating and integrating new tools at runtime. We will examine the architectural mechanisms that enable this, including how new functionality is validated and incorporated safely during execution. Through a live demo, attendees will see an agent encounter a novel task, build the required skill, and complete the workflow without redeployment. Attendees will additionally gain practical engineering insights into the tradeoffs and design considerations involved in building autonomous agentic systems that evolve beyond their original design.
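The runtime-tool-integration idea this abstract describes can be pictured with a toy sketch. This is a hypothetical illustration, not the speaker's framework or demo: here the "generated" tool source is a hardcoded string, whereas a real system would have an LLM write the code and validate it far more carefully before integration.

```python
# A minimal sketch of an agent that builds and integrates a missing tool at
# runtime instead of failing. The tool source is hardcoded here; in a real
# system an LLM would generate it and a validation gate would vet it.
class Agent:
    def __init__(self):
        self.tools = {}                  # name -> callable

    def integrate(self, name: str, source: str):
        """Compile candidate tool code and gate it before it becomes usable."""
        namespace = {}
        exec(source, namespace)          # build the tool dynamically
        tool = namespace[name]
        assert callable(tool)            # stand-in for a real validation gate
        self.tools[name] = tool

    def run(self, name: str, *args):
        if name not in self.tools:       # novel task: build the skill first
            self.integrate(name, GENERATED)
        return self.tools[name](*args)

GENERATED = "def word_count(text):\n    return len(text.split())\n"

agent = Agent()
print(agent.run("word_count", "no redeploy needed"))  # 3
```

The interesting engineering questions sit in `integrate`: how to sandbox, test, and roll back dynamically built functionality, which is what the talk's safety discussion covers.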
Sandhya Subramani is a Sr. Developer Advocate with 8+ years of experience in Applied AI Research, specializing in Large Language Models and agentic AI systems. She has developed and deployed AI solutions at organizations including Amazon, Warner Bros, and Fidelity Investments. Her work centers on turning cutting edge research into practical applications, and she is passionate about helping developers build intelligent systems that solve complex problems at scale. https://www.linkedin.com/in/sandhyasubramani/
Mark DeRosa — Room 125 — Data Quality is a Journey, not a Destination
Data quality is often treated as a final validation step rather than a continuous discipline woven throughout the entire data lifecycle. We will reframe data quality as an ongoing journey that begins with requirements and continues through design, development, testing, deployment, and operational monitoring. The journey includes coverage of seven data quality dimensions throughout the process: 1) Accuracy, 2) Completeness, 3) Consistency, 4) Integrity, 5) Timeliness, 6) Uniqueness, and 7) Reasonableness. Attendees will learn how quality decisions made early, such as defining data boundaries and acceptance criteria, directly impact downstream reporting, analytics, artificial intelligence (AI) models, and decision confidence.
Using practical examples, we will describe how to embed data quality controls into requirements, architecture, design, pipelines, testing frameworks, and DataOps workflows. We will emphasize collaboration between business, engineering, and analytics teams to shift quality “left” and prevent common issues before they reach Production. Attendees will leave with a real-world, lifecycle-based framework, and actionable techniques for building trusted, scalable analytics that are essential for all systems, especially AI systems.
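Embedding checks in the pipeline rather than at the end can be sketched briefly. This is a hypothetical illustration keyed to a few of the dimensions the talk names; the records, fields, and acceptance thresholds are invented for the example.

```python
# A minimal sketch of pipeline-embedded data quality checks for three of the
# dimensions named in the talk: Completeness, Uniqueness, and Timeliness.
# Records and thresholds are illustrative only.
from datetime import date

records = [
    {"id": 1, "amount": 19.99, "updated": date(2026, 3, 1)},
    {"id": 2, "amount": None,  "updated": date(2026, 3, 2)},
]

def completeness(rows, field):
    """Share of rows where the field is populated (Completeness)."""
    return sum(r[field] is not None for r in rows) / len(rows)

def uniqueness(rows, field):
    """True if the field contains no duplicate values (Uniqueness)."""
    values = [r[field] for r in rows]
    return len(values) == len(set(values))

def timeliness(rows, field, cutoff):
    """True if every row was updated on or after the cutoff (Timeliness)."""
    return all(r[field] >= cutoff for r in rows)

# A pipeline stage gates on these checks instead of deferring them to final QA:
assert uniqueness(records, "id")
assert timeliness(records, "updated", date(2026, 2, 1))
print(completeness(records, "amount"))  # 0.5, which would fail a 0.9 bar
```

Running such checks at each stage is one concrete way to shift quality "left" and stop issues before they reach Production.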
Mark DeRosa is the Vice President of Data & Analytics at MetaPhase in Reston, Virginia. Mark provides hands-on technical leadership that leverages data assets and improves analytical capabilities while sharing intellectual capital across the company in unique and interesting ways. Mark is an Army veteran with several decades of experience designing and developing robust enterprise solutions and now serves as a subject matter expert for federal clients, covering a wide range of mission requirements. As a seasoned technologist, he is experienced in all phases of the project life cycle from idea to implementation, including sustainment and training. Mark earned a dual bachelor's degree in Computer Science & Mathematics from the University of Pittsburgh and his master's degree in Data Analytics Engineering from George Mason University.
4:00 PM
Craig Wong — Room 126 — Lessons from a Three-Week AI-Assisted Refactor with Amazon Q Developer
Over three weeks, I completed a sustained refactor of a personal project using Amazon Q Developer as an AI assistant. This talk provides a practical account of what actually happened: how tasks were structured, how collaboration patterns emerged, and how the work evolved across 187 commits and 74 hours of development. I’ll describe where AI assistance accelerated progress, where it introduced friction, and the methods that proved reliable in real use. Attendees will gain a concrete, metrics‑backed case study of incremental development with AI in a real‑world engineering workflow.
Craig is a cybersecurity developer focused on building coherent, resilient systems. He’s currently exploring AI‑assisted workflows through a side project that blends language‑tool ergonomics, interpreter‑style reasoning, and a vi‑inspired editing model. His work centers on practical, incremental development patterns that reveal how humans and AI collaborate when the goal is clarity, momentum, and real engineering progress.
Hirenkumar Dholariya — Room 125 — AI Modernization for Regulated Data Using Semantic Intelligence
Discover how AI, semantic intelligence, and vector search can break data silos in regulated industries and unlock faster, more accurate insights. Learn a clear roadmap for building scalable, compliant, and future-ready data ecosystems.
Hirenkumar Narendrabhai Dholariya is a Senior Data Engineering Architect with more than 18 years of experience supporting global Fortune 500 organizations. He specializes in data engineering, AI, automation, and cloud-based enterprise solutions. His work spans large-scale data platforms, Generative AI integration, real-time pipelines, and advanced analytics that drive measurable business value. He has designed and delivered high-performance data ecosystems using Databricks, AWS, Kafka, Python, SQL, Delta Lake, and modern big data technologies. His portfolio includes architecting AI-driven platforms such as Gene.AI, building intelligent customer analytics frameworks, leading large-scale cloud modernization programs, and designing end-to-end data lake architectures that integrate AI, semantic intelligence, and real-time decisioning capabilities. Throughout his career, he has improved system performance, reduced infrastructure costs, automated complex workflows, strengthened governance, and enabled predictive insights for business units across industries, including life sciences, finance, and media. His leadership style focuses on innovation, collaboration, and building scalable data solutions that enhance decision-making and operational excellence.
5:00 PM
Happy Hour at Spider Kelly's bar!
3181 Wilson Blvd, Arlington, VA 22201