written by Eric J. Ma on 2026-05-10 | tags: agentic conference llm data mlops applied governance workforce strategy systems
In this blog post, I share my experiment at ODSC East 2026, where I analyzed talk abstracts to uncover the conference's true zeitgeist. By categorizing sessions into five key zones—agentic AI systems, LLM engineering, data infrastructure, applied AI, and governance—I reveal how AI builder culture is evolving into systems culture. I also reflect on my own workshop experience and the shifting focus from models to repeatable systems. Curious about what trends are shaping the future of AI conferences?
This year I ran a small experiment at ODSC East 2026.
As a speaker catching up with old friends at the conference, I could only attend a slice of the sessions. So I thought: rather than trying to catch every last talk, what if I could figure out the zeitgeist of the conference from the talk abstracts alone? If I scraped the schedule and abstracts across talks, workshops, and keynotes, could they reveal the conference's center of gravity, and hence its zeitgeist?
I used Cursor Agent on Premium for the scrape and extraction workflow.
The process was straightforward:
I used https://schedule.odsc.ai/ as the source of truth. This pass includes 237 sessions from the live schedule backend, exported as structured JSON.
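The "substantive sessions" filter later in the post can be sketched as a small pass over that exported JSON. This is a toy illustration with made-up records; the field names (`title`, `abstract`) are assumptions, not the schedule backend's actual schema:

```python
import json

# Hypothetical export: each session is a dict with title/abstract fields.
# The real schedule backend's schema may differ.
raw = json.loads("""
[
  {"title": "Agentic Data Science", "abstract": "Pairing coding agents with notebooks."},
  {"title": "Networking Break", "abstract": ""},
  {"title": "Evaluating RAG Systems", "abstract": "Benchmarks for retrieval quality."}
]
""")

# Keep only "substantive" sessions: those with a non-empty abstract.
substantive = [s for s in raw if s["abstract"].strip()]
print(len(substantive))  # 2 of the 3 toy sessions survive
```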
I ran a multi-agent categorization pipeline. Four independent coding agents each read 50 full abstracts and proposed a five-category taxonomy. Then three cross-review agents read all 200 abstracts and all four proposals, identified points of agreement and disagreement, and each produced a unified taxonomy. A final arbitrator agent resolved the remaining disputes using majority vote, reading the abstracts directly for tiebreaks.
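The arbitration step at the end of that pipeline boils down to a majority vote with a tiebreak path. Here is a minimal sketch of that logic; the data shape (`talk_id` mapped to a list of category votes) is an assumption for illustration, and in the real pipeline the tiebreak was an agent reading the abstract, not a placeholder label:

```python
from collections import Counter

def arbitrate(proposals: dict[str, list[str]]) -> dict[str, str]:
    """Resolve per-talk category votes by majority.

    proposals maps talk_id -> list of category labels, one per reviewer.
    Ties are flagged so the arbitrator can read the abstract directly.
    """
    resolved = {}
    for talk_id, votes in proposals.items():
        (top, top_n), *rest = Counter(votes).most_common()
        if rest and rest[0][1] == top_n:
            resolved[talk_id] = "NEEDS_TIEBREAK"  # arbitrator reads the abstract
        else:
            resolved[talk_id] = top
    return resolved

votes = {
    "talk-001": ["agentic", "agentic", "llm-eng"],
    "talk-002": ["governance", "applied"],  # tie -> flagged for tiebreak
}
print(arbitrate(votes))
```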
Corpus snapshot:
The five categories that emerged from this process follow.
This zone captures the shift from prompt craft to system design: agent architectures, multi-agent orchestration, tool use, MCP and A2A protocols, harness engineering, agent memory, simulation sandboxes, guardrails, and production deployment patterns for systems where AI plans, decides, and acts autonomously.
Sub-zeitgeist in this zone:
If you are building agentic systems right now, treat architecture as the first design surface. Start with runtime boundaries, failure recovery, and orchestration patterns, then layer prompts into that structure.
This zone covers the model layer: training, fine-tuning, quantization, inference optimization, RAG architecture, prompt engineering, model interpretability, hallucination research, evaluation methodology, and novel computational substrates.
Sub-zeitgeist in this zone:
Use this zone as a nudge to benchmark for real operating conditions, not demo conditions. Optimize for latency, cost, and stability together, and pick model and fine-tuning strategies that survive production constraints.
This zone includes the substrate work that determines whether agents reason well in production: data platforms, pipelines, data quality frameworks, real-time streaming architectures, training and serving infrastructure, data modeling, and ML operational systems.
Sub-zeitgeist in this zone:
For readers shipping AI systems, this is a reminder to diagnose context and data pathways before blaming the model. Invest in data contracts, context structure, and retrieval quality early, because those choices determine downstream reliability.
This zone spans two related audiences: introductory training courses teaching core skills (Python, SQL, R, statistics, ML basics) and domain-specific AI applications in healthcare, finance, defense, biopharma, accessibility, and marketing. The common thread is that the audience is learning or applying rather than researching.
Sub-zeitgeist in this zone:
The move for practitioners here is to design human roles and escalation paths alongside technical architecture. Adoption succeeds when accountability, decision rights, and domain workflows are specified as clearly as APIs and evals.
This zone is where technical capability meets organizational uptake: enterprise AI strategy and transformation, governance frameworks, regulation and compliance, trust engineering, workforce transformation, career development, and the societal and human dimensions of AI.
Sub-zeitgeist in this zone:
A practical takeaway for teams is to translate trust goals into runnable checks. Build evaluation and guardrail loops that run continuously, and make policy language executable in your delivery workflow.
To see how these categories relate to an unsupervised view of the same abstracts, I embedded all 193 substantive abstracts using all-MiniLM-L6-v2, clustered them with HDBSCAN, and projected the embeddings into 2D with UMAP. The companion Marimo notebook includes an interactive scatter plot where you can toggle between the agent-based zone classification (the five categories above) and the embedding-based HDBSCAN clusters. Mouse over any point to see the talk title, speaker, track, and full abstract. You can open it directly in your browser with molab.
The two views do not perfectly align, and that is instructive. Where the embedding clusters split a zone, it usually means the zone contains sub-communities with distinct vocabulary (for example, "agent architecture" talks cluster separately from "agent ops" talks within Zone 1). Where the embedding clusters merge zones, it means the abstracts share enough language that an unsupervised method cannot tell them apart.
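One way to put a number on how well the two views align is a pairwise agreement score (the Rand index): for every pair of talks, do the two labelings agree on whether the pair belongs together? A stdlib-only sketch with toy labels (the real comparison would run over all 193 abstracts):

```python
from itertools import combinations

def pairwise_agreement(labels_a, labels_b):
    """Fraction of item pairs on which two labelings agree:
    both place the pair in the same group, or both place it apart."""
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
        total += 1
    return agree / total

# Toy example: agent-assigned zones vs. HDBSCAN clusters for six talks.
zones    = ["agentic", "agentic", "agentic", "llm", "llm", "data"]
clusters = [0, 0, 1, 1, 1, 2]  # the embedding view splits the agentic zone
print(round(pairwise_agreement(zones, clusters), 2))  # 0.73
```

A score of 1.0 means the two views carve up the talks identically; values below that quantify exactly the kind of split-and-merge disagreement described above.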
If I compress the whole conference into one sentence, it is this: ODSC East 2026 feels like the year AI builder culture became systems culture.
The numbers bear this out. Agentic AI systems accounted for 28% of substantive sessions, more than any other zone. LLM and foundation model engineering took another 19%. The event still celebrates model progress, but the practical energy sits in the joints between components: agent architecture, data infrastructure, evaluation, deployment, and decision-making structures inside organizations.
I also notice a healthy coupling of technical and leadership tracks. That pairing usually shows up when teams are moving from experimentation budgets to accountability budgets.
I taught a one-hour workshop called "How to Do Agentic Data Science" on Day 1. The abstract reflected my original plan: use Python scripts executed with uv run, one script per plot, and pair that with markdown journals and reports so the workflow stayed inspectable and reproducible.
Then, about a month before the session, I learned about Marimo Pair. At that point my conscience kicked in: I felt a strong obligation not to hand people a workflow that could feel outdated within a few months. So I pivoted and ran the live coding demo with Marimo Pair instead, because that workflow felt cleaner and tighter for the same core ideas I wanted to teach. Those ideas stayed simple: slow down, look at your data directly, gate analyses one plot at a time, and use LLMs to help with documentation while keeping human judgment in charge. Marimo made that "look at your data" step much more natural by displaying dataframes directly in the notebook flow.
I was floored by the response. When I ducked out briefly to grab water, I saw a wall of people waiting outside, and the demand hit me all at once. People I met in the hallway and in the VIP/speakers room gave positive feedback, and agreed with my own assessment that the content would work better as a longer, more hands-on session.
@article{
ericmjl-2026-odsc-east-2026-zeitgeist,
author = {Eric J. Ma},
title = {ODSC East 2026's Zeitgeist and Conference Report},
year = {2026},
month = {05},
day = {10},
howpublished = {\url{https://ericmjl.github.io}},
journal = {Eric J. Ma's Blog},
url = {https://ericmjl.github.io/blog/2026/5/10/odsc-east-2026-zeitgeist},
}
I send out a newsletter with tips and tools for data scientists. Come check it out at Substack.
If you would like to sponsor the coffee that goes into making my posts, please consider GitHub Sponsors!
Finally, I do free 30-minute GenAI strategy calls for teams that are looking to leverage GenAI for maximum impact. Consider booking a call on Calendly if you're interested!