The conversation about AI in analytics has been stuck in a tired binary. On one side: AI will automate analysts out of a job. On the other: AI is hype, nothing changes, carry on. Both are wrong in the same way — they assume the object under threat is the analyst. It isn't. The artifact under threat is the dashboard.

A dashboard is, at heart, a pre-frozen answer to an anticipated question. Someone, months ago, sat in a meeting and imagined you would want to know weekly revenue by region, and they built the view that answers it. If your actual question is slightly different — revenue by region excluding the Midwest promo, or revenue by region attributable to new customers — the dashboard is silent. You file a ticket, you wait, you get a new dashboard, and the cycle repeats. The dashboard was always a compromise. We accepted it because the alternative — bespoke analysis on demand — was prohibitively expensive. That constraint is changing.

When a conversational model can reliably translate intent into a correct SQL query, validate its own work against known business rules, surface a sensible visualization, and explain its reasoning in plain language — the dashboard's reason for existing evaporates. Not its informational content, which remains valuable, but its role as the primary interface between business users and data. The interface becomes a conversation.
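To make the shape of that loop concrete, here is a minimal sketch in Python. Every name in it is an assumption rather than any product's API; the point is that each stage produces an artifact that can be inspected later.

```python
# A skeleton of the conversational loop described above. `llm` and `run_query`
# are stand-ins for your model client and warehouse connection; none of these
# method names belong to a real library.

def answer(question: str, llm, run_query) -> dict:
    sql = llm.translate_to_sql(question)            # 1. intent -> SQL
    llm.check_business_rules(sql)                   # 2. validate; raises on violation
    rows = run_query(sql)                           # 3. execute on the warehouse
    chart = llm.pick_visualization(question, rows)  # 4. choose a sensible default chart
    why = llm.explain(question, sql, rows)          # 5. plain-language reasoning
    # Keep every intermediate artifact: the trail is what makes the answer auditable.
    return {"sql": sql, "rows": rows, "chart": chart, "explanation": why}
```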

[Image: AI and data convergence. Caption: The interesting convergence isn't AI replacing analysts. It's AI replacing the dashboard as the primary interface to analytical work.]

The honest state of the technology

Let's be clear-eyed about where we are. Text-to-SQL on enterprise data, in the wild, works well about 60–70% of the time on moderately complex questions. On simple ones it hits 90%+. On genuinely complex ones — multi-CTE queries with window functions, correct semantic joins, accurate handling of slowly-changing dimensions — it's closer to 30%. That gap is the difference between an interesting demo and a production system a CFO will trust.

What closes the gap is not a bigger model. Bigger models have been shipping for three years and the gap has closed only modestly on enterprise data. What closes the gap is semantic infrastructure — metric layers, clean metadata, well-documented schemas, authoritative business-rule definitions. This is the plumbing everyone has been promising to invest in since 2015. The arrival of useful LLMs has, finally, made that investment pay off. Companies with mature semantic layers are seeing AI querying work. Companies without them are seeing hallucinated numbers and building nervous governance committees.
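"Semantic infrastructure" can sound abstract, so here is the smallest useful version of it: one authoritative, machine-readable definition per metric. This is an illustrative sketch; real metric layers (dbt's semantic layer, LookML, Cube) have richer schemas, and the field names here are our assumptions.

```python
# A minimal, illustrative metric catalog: one blessed definition per KPI,
# readable by both humans and the model. Field names are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    sql: str      # the one authoritative expression for this metric
    grain: str    # the level the metric is defined at
    owner: str    # who answers questions about the definition
    caveats: str  # the gotchas an LLM (or a new analyst) must know

CATALOG = {
    "net_revenue": Metric(
        name="net_revenue",
        sql="SUM(gross_amount - discounts - refunds)",
        grain="order",
        owner="finance-data@example.com",
        caveats="Excludes gift-card sales; refunds land on refund date, not order date.",
    ),
}
```

The point is that the business rule lives in exactly one place. The dashboard era tolerated five competing definitions of revenue; an AI querying layer amplifies that ambiguity into five confidently different answers.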

LLMs don't fix your data. They amplify it. Clean data gets more useful. Messy data gets more wrong, faster.

What actually changes

1. The analyst's job moves up one level.

The parts of the analyst role that were always tedious — writing the hundredth slight variation of a revenue query, reformatting a result into a chart, documenting what a metric means for the twelfth time — these get compressed. What doesn't get compressed is the harder work: knowing which question is worth asking, diagnosing why a number is strange, designing an experiment, turning a finding into a decision. The analyst becomes less of a technician and more of a detective. For many analysts, this is a welcome relief. For some, it's terrifying, because those harder skills are exactly the ones that were always optional.

2. Self-service actually works, for the first time.

A decade of self-service BI investment has underdelivered because the interface demanded too much of the user: knowledge of the schema, the tool, the metric catalog, the quirks of the data. Conversational interfaces lower every one of those bars. A regional manager can type "why did my bakery category soften last week" and get a real answer with a real chart and a real explanation. Not always right, but right often enough to be useful and, crucially, explainable when wrong. This is the first interface that meets business users where they are.
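Part of what makes that exchange plausible is that the model never sees the question cold; it is handed the blessed definitions first. Here is a sketch of that grounding step, reusing the illustrative CATALOG above (the `complete` call is a hypothetical stand-in for your LLM client):

```python
# Illustrative: ground the question in the metric catalog before asking for
# SQL, so the model works from authoritative definitions instead of guessing
# at the schema.

def build_prompt(question: str, catalog: dict) -> str:
    """Prepend every definition; a production system would retrieve only the
    relevant subset, typically via embeddings."""
    context = "\n".join(
        f"- {m.name} = {m.sql} (grain: {m.grain}; caveats: {m.caveats})"
        for m in catalog.values()
    )
    return (
        "Answer with a single SQL query. Use ONLY these metric definitions\n"
        "and respect every caveat:\n"
        f"{context}\n\nQuestion: {question}\n"
    )

prompt = build_prompt("why did my bakery category soften last week", CATALOG)
# sql = complete(prompt)  # hypothetical LLM call; swap in your own client
```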

3. Dashboards don't disappear — they change role.

Dashboards survive, but as monitoring artifacts, not exploration ones. You want a standing view of KPIs you check every Monday. That's a dashboard. You want to chase down why last week's number was weird. That's a conversation. The two modes coexist; the ratio changes. We estimate that within three years, more than half of all ad-hoc analytics questions in mature organizations will be answered through a conversational interface rather than a dashboard visit.

4. The data team's unit of output changes.

Historically, a data team's visible output was dashboards and reports. In an AI-mediated world, the visible output becomes the semantic layer itself — the metric definitions, the joins, the authoritative business rules. This is deeper, more leveraged work. It also means the data team's craft becomes less performative (fewer pixel-perfect reports) and more architectural. Some team members will love this. Some will not.

[Image: Analytics dashboard. Caption: Dashboards survive — as monitoring surfaces, not exploration surfaces. The ratio shifts.]

What doesn't change (and what breaks if you forget it)

It is tempting, in the presence of a powerful new tool, to believe the fundamentals have changed. They have not. The question of whether a number is trustworthy is harder, not easier, in an AI-mediated world, because the reasoning is less visible. A dashboard, for all its limitations, had a build-time artifact you could inspect. A conversational answer is generated on demand from a chain of inferences. Auditing that chain — was the right table joined, was the right filter applied, was the right metric definition used — becomes its own discipline.
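Parts of that discipline can be mechanized. Here is a sketch of an automatic audit pass over generated SQL, using sqlglot (a real Python SQL parser) to answer the first two questions; the whitelist policy and table names are our assumptions.

```python
# A sketch of mechanical auditing for AI-generated SQL. sqlglot parses the
# query into a tree we can inspect; the policy around it is illustrative.

import sqlglot
from sqlglot import exp

APPROVED_TABLES = {"orders", "order_items", "stores"}  # assumption: your curated schema
BLESSED_NET_REVENUE = "gross_amount - discounts - refunds"

def audit(sql: str) -> list[str]:
    """Return human-readable violations; an empty list means the query passed."""
    violations = []
    tree = sqlglot.parse_one(sql)

    # Was the right table joined? Reject anything outside the curated schema.
    for table in tree.find_all(exp.Table):
        if table.name not in APPROVED_TABLES:
            violations.append(f"unapproved table: {table.name}")

    # Was the right metric definition used? A crude textual check; a real
    # system would compare parsed expressions, not strings.
    if "revenue" in sql.lower() and BLESSED_NET_REVENUE not in sql:
        violations.append("revenue computed without the blessed net_revenue expression")

    return violations
```

The particular checks matter less than the principle: the audit becomes code that runs on every generated query, not a committee that samples a few.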

The companies that have deployed LLM querying successfully share three traits. They have a clean, documented semantic layer. They have a review process for AI-generated analyses that matches the stakes of the decision. And they are honest with their users about when the model is certain and when it's guessing. The third is the hardest. Users want confident answers. Models, left unchecked, will give them — even when they shouldn't.

The quietly important shift: analysis becomes a conversation, not a deliverable

There's a deeper change that doesn't make the keynote slides. When analysis happens inside a conversation — a back-and-forth of "and what about…", "can you break that down by…", "is this seasonal" — the analytical artifact stops being a static chart and starts being a thread. A narrative. A trace of reasoning. Some of the most interesting analytics work we're seeing now is not "the chart" but the conversation that produced the chart, preserved and searchable across the organization.

This is closer to how analysts have always actually worked (iterating, exploring, following thread after thread) and further from how dashboards have always asked users to work (receiving, consuming, staring at static snapshots). The tooling is finally catching up to the craft.

The dashboard was never the work. The dashboard was a compromise. The work was always the thinking behind it.

Practical advice if you are responsible for this

If you're a data leader looking at this landscape and wondering what to do, here's the most honest counsel we can offer:

Invest in the semantic layer before you invest in the AI. Every successful deployment we've seen has a mature metric catalog, documented joins, and an authoritative source for every critical KPI. The AI sits on top. If you reverse the order, you will ship hallucinations to your business users and spend 18 months rebuilding trust.

Pilot on a bounded domain. Don't try to cover all of the company's data on day one. Pick a domain where the data is clean and the questions are well understood (say, the marketing funnel or store operations) and deploy AI querying there. Learn what breaks. Then expand.

Keep humans in the loop for decisions above a threshold. For low-stakes questions, let the model answer directly. For questions that inform meaningful spending or strategic calls, route the AI's output through a human analyst before it reaches the decision-maker. This isn't distrust of the technology. It's appropriate calibration of risk.
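What "a threshold" can look like in practice, as a sketch (the dollar figure and the confidence cutoff are assumptions to be tuned, not standards):

```python
# Illustrative routing: low-stakes questions are answered directly; anything
# above a spend threshold, or anything the model is unsure about, is queued
# for analyst review first. Both cutoffs are assumptions; tune them to your
# own risk tolerance.

from enum import Enum

class Route(Enum):
    DIRECT = "answer the asker directly"
    REVIEW = "route through a human analyst first"

HIGH_STAKES_SPEND = 50_000.0  # assumption: dollars influenced by the decision
MIN_CONFIDENCE = 0.8          # assumption: below this, always get human eyes

def route(spend_at_stake: float, model_confidence: float) -> Route:
    if spend_at_stake >= HIGH_STAKES_SPEND or model_confidence < MIN_CONFIDENCE:
        return Route.REVIEW
    return Route.DIRECT
```

Note that the confidence cutoff does double duty: it also operationalizes the honesty point above, because a low-confidence answer never reaches a business user unreviewed.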

Rename what your analysts do. If your org chart still says "Reporting Analyst," the role is about to feel increasingly small. Retitle and refocus: Decision Partner, Insights Engineer, Question Partner. This isn't cosmetic. It reshapes how the rest of the business engages with the team, and it signals to the analysts themselves where the center of gravity is moving.

A final, slightly contrarian thought

The narrative that "AI will replace analysts" flatters AI and insults analysts. It assumes analysts were ever doing what a machine could easily do. The best analysts were always doing something else: translating fuzzy business concerns into crisp analytical framings, noticing what wasn't being asked, sniffing out that a data pipeline had silently broken two weeks ago. Those skills are not on the roadmap for any model we've seen.

What is on the roadmap is the automation of the 40% of analyst work that analysts themselves never enjoyed. That's not a threat. That's a gift. The professionals who accept the gift, and lean into the harder work the gift leaves behind, will have the most interesting decade of their careers.