The Data Day Texas 2026 Sessions
We are just now beginning to publish the session info. Expect to see many session abstracts added over the next few weeks.
Context > Prompts: Context Engineering Deep Dive
Context Engineering is the art and science of designing and managing the information environment that enables AI systems to function reliably at scale. This technical session examines why focusing on prompt engineering alone leads to production failures and demonstrates how proper context management transforms unreliable AI into systems that actually work. We'll explore the fundamental difference between crafting instructions (prompts) and engineering what the model actually processes (context).

You'll learn the four core activities of context engineering: persisting information outside the context window, selecting relevant information for each step, compressing histories to manage token limits, and isolating contexts to prevent interference.

The session covers critical failure modes: context poisoning, where errors compound over time; context distraction, where historical information overwhelms reasoning; context confusion from irrelevant data; and context clash from contradictory information. We'll examine why these failures are inevitable without proper engineering and demonstrate specific techniques to prevent them.

Finally, we'll review architectural patterns for context management in existing frameworks. You'll see how declarative approaches eliminate prompt string manipulation, how vector databases enable semantic memory, and how orchestration platforms coordinate context flow.
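To make the four activities concrete before the session, here is a minimal Python sketch. All names here (ContextStore, persist, select, compress) are hypothetical illustrations, not the API of any framework covered in the talk; a real system would use embeddings and an LLM summarizer where this sketch uses keyword overlap and a stub.

```python
# Minimal sketch of the four context-engineering activities:
# persist, select, compress, isolate. All names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ContextStore:
    """Persist information outside the model's context window."""
    records: list[str] = field(default_factory=list)

    def persist(self, text: str) -> None:
        self.records.append(text)

    def select(self, query: str, k: int = 3) -> list[str]:
        """Select the k most relevant records for the current step.
        A real system would use embeddings; keyword overlap stands in here."""
        q = set(query.lower().split())
        scored = sorted(self.records,
                        key=lambda r: len(q & set(r.lower().split())),
                        reverse=True)
        return scored[:k]


def compress(history: list[str], max_items: int = 5) -> list[str]:
    """Compress long histories to respect token limits: keep the most
    recent turns and replace the rest with a one-line summary stub."""
    if len(history) <= max_items:
        return history
    dropped = len(history) - max_items
    return [f"[summary of {dropped} earlier turns]"] + history[-max_items:]


# Isolate: give each sub-task its own store so contexts don't interfere.
research_ctx, drafting_ctx = ContextStore(), ContextStore()
research_ctx.persist("Vector databases enable semantic memory lookups.")
research_ctx.persist("Token limits force history compression.")

prompt_context = research_ctx.select("how do token limits affect memory?")
print(compress(prompt_context + ["turn"] * 8))
```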
Observability, Evaluation, and Guardrails for Self-Optimizing Agents
As AI workflows and agents transition from experimentation to production, ensuring reliability, safety, and continuous optimization becomes crucial. Yet most projects begin with prompt engineering or model selection, without an observability and evaluation framework in place. This leads to brittle systems and missed opportunities for improvement. In this session, I'll explore how to build self-optimizing agents by pairing a monitoring framework for observability and evaluation with DSPy, a framework for structured AI workflows. I'll cover why metrics matter, what to measure, and how evaluation outputs themselves can become the training data that drives optimization. You'll see how real-time datasets generated from evaluations can trigger optimization workflows. For example, when agent performance trends downward (e.g., task usefulness scores dropping below a threshold), prior high-scoring examples can be injected into DSPy workflows to optimize behavior in real time. I'll walk through a live demonstration: monitoring a DSPy workflow, observing metric trends, and triggering an optimization workflow when a guardrail is crossed. The session will close with a discussion of future directions for observability-first AI development.
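As a rough illustration of that guardrail pattern, consider the sketch below. The metric names, thresholds, and the stubbed optimizer are assumptions for illustration, not the session's actual code; in the talk, the optimization step is played by DSPy's compile/optimize machinery rather than a print statement.

```python
# Hedged sketch: watch a rolling quality metric and, when it drops below
# a threshold, feed prior high-scoring examples into an optimization step.

from collections import deque

THRESHOLD = 0.7          # guardrail: minimum acceptable rolling usefulness
WINDOW = 20              # rolling window of recent evaluations

recent_scores: deque[float] = deque(maxlen=WINDOW)
high_scoring_examples: list[dict] = []   # traces worth reusing as trainset


def optimize_with(examples: list[dict]) -> None:
    """Stub for an optimization workflow, e.g. re-compiling a DSPy
    program with a trainset built from high-scoring traces."""
    print(f"guardrail crossed: re-optimizing with {len(examples)} examples")


def record_evaluation(example: dict, score: float) -> None:
    """Called after each evaluated agent run."""
    recent_scores.append(score)
    if score >= 0.9:                      # keep strong traces for later
        high_scoring_examples.append(example)

    rolling = sum(recent_scores) / len(recent_scores)
    if len(recent_scores) == WINDOW and rolling < THRESHOLD:
        optimize_with(high_scoring_examples)
        recent_scores.clear()             # start a fresh window


# Simulated drift: scores trend downward until the guardrail fires.
for i in range(40):
    record_evaluation({"input": f"task {i}", "output": "..."},
                      score=1.0 - i * 0.02)
```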
Data Governance Keynote
Existence over Essence? Data Governance in Times of AI
Data governance lies at the heart of socio-technical systems, including the foundation for how organizations integrate automation, AI, and AI agents. As organizations shift their AI focus from innovation to adaptation, the challenge extends beyond traditional governance to include the ethical, strategic, and operational implications of autonomous, probabilistic systems. AI is no longer only a tool but a force changing the composition of socio-technical systems, reshaping human-machine interactions, accountability structures, and organizational decision-making.
Data governance must refocus on its core and embrace the roles it has to play:
1. Data Negotiation, aligning business, regulatory, and technological demands with data requirements.
2. Data Direction, providing strategic orientation to ensure data and AI contribute to organizational purpose and values.
3. Data Audit, embedding accountability across human and machine decision-making.
The move from essence (data as static value) to existence (value through context and application) reframes governance as a force that intentionally structures socio-technical systems. To operationalize AI at scale, organizations must unify data and AI governance, embedding transparency, fairness, and human oversight into adaptive feedback loops. Ultimately, governance is less about control and more about shaping organizational meaning, ensuring that AI amplifies human agency rather than eroding it.
DataOps Is Culture, Not a Toolchain
DataOps is often framed as a collection of tools. In practice, it is a culture and a set of engineering behaviors adapted from software and platform teams to the realities of data work. This talk explores the cultural foundations of DataOps, including continuous improvement, learning from failure, blameless retrospectives, and measurement. We will examine the difference between DataOps and DevOps, then define what good measurement looks like for data teams. We will map DataOps outcomes to the DORA metrics while also drawing from SPACE and DevEx to capture satisfaction, collaboration, cognitive load, and flow. You will leave with concrete rituals, metrics, and anti-patterns to watch for.
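For a sense of what mapping data work onto DORA looks like in practice, here is a small Python sketch computing the four standard DORA metrics from a hypothetical pipeline deployment log. The event log and field names are invented for illustration and are not from the talk.

```python
# Illustrative sketch (not from the talk): the four DORA metrics
# computed from a hypothetical data-pipeline deployment log.

from datetime import datetime
from statistics import mean

deploys = [  # hypothetical deployment events for one pipeline
    {"at": datetime(2026, 1, 5), "lead_time_hours": 26.0, "failed": False},
    {"at": datetime(2026, 1, 9), "lead_time_hours": 4.5, "failed": True,
     "restore_hours": 3.0},
    {"at": datetime(2026, 1, 12), "lead_time_hours": 12.0, "failed": False},
]

period_days = 30  # fixed observation window for frequency

deploy_frequency = len(deploys) / period_days                    # deploys/day
lead_time = mean(d["lead_time_hours"] for d in deploys)          # hours
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)
time_to_restore = mean(                                          # hours (MTTR)
    d["restore_hours"] for d in deploys if d["failed"])

print(f"deploy frequency:    {deploy_frequency:.2f}/day")
print(f"lead time:           {lead_time:.1f} h")
print(f"change failure rate: {change_failure_rate:.0%}")
print(f"time to restore:     {time_to_restore:.1f} h")
```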