We've reviewed, rebuilt, or retired a large and frankly embarrassing number of dashboards over the last decade — across industries, tools, and team sizes. Along the way, a short list of principles kept asserting themselves. The great dashboards, the ones that survived reorgs and tool migrations and the departure of the analysts who built them, shared the same ten traits. The bad ones had their own ten, neatly inverted. Here they are.

Dashboard design: the best dashboards answer a single question, clearly. The worst try to answer all of them and answer none.

1. Every dashboard answers one question.

If you can't fill in the blank "This dashboard exists to answer: ______" in a single sentence, the dashboard is already in trouble. Great dashboards have a mission so clear a stranger could state it after ten seconds. "How did sales perform this week, and what drove the variance?" "Is customer support on track to meet SLAs?" Bad dashboards attempt to be encyclopedias. They drown the one question they should be answering in a sea of charts that might be useful for some other question nobody is actually asking right now.

2. The first chart is the headline.

Users give a dashboard about three seconds before they decide whether to engage. The top-left chart — the one their eye lands on first — needs to communicate the single most important thing. If that's a headline KPI, make it big, make it current, make it comparative (vs last week, vs target). If you bury the lede under a stack of filters or a decorative donut chart, most users will close the tab and go back to Slack.

If the first three seconds don't answer the dashboard's one question, the remaining thirty charts won't be read.

3. Numbers without comparison are noise.

"Revenue: $4.2M" is not information. It's a sticker. "Revenue: $4.2M, up 7% vs last week, 12% below target" is information. Every headline number should have at least one, preferably two, comparisons — typically a time comparison (WoW, YoY) and a target comparison (vs plan, vs forecast). This is the single most common mistake we see, and the easiest to fix.
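The arithmetic behind the fix is trivial, which is part of why skipping it is inexcusable. A minimal sketch of a headline formatter in Python — the function name, rounding, and formatting are illustrative assumptions, not any BI tool's API:

```python
def kpi_headline(name: str, value: float, prior: float, target: float) -> str:
    """Format a headline number with a time comparison and a target comparison.

    Illustrative only: names and formatting here are assumptions,
    not a real BI tool's API.
    """
    wow = (value - prior) / prior * 100          # week-over-week change
    vs_target = (value - target) / target * 100  # distance from plan
    wow_word = "up" if wow >= 0 else "down"
    tgt_word = "above" if vs_target >= 0 else "below"
    return (f"{name}: ${value / 1e6:.1f}M, "
            f"{wow_word} {abs(wow):.0f}% vs last week, "
            f"{abs(vs_target):.0f}% {tgt_word} target")

print(kpi_headline("Revenue", 4_200_000, 3_925_000, 4_770_000))
# Revenue: $4.2M, up 7% vs last week, 12% below target
```

Two comparisons, one line of output, and the sticker becomes information.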

4. One axis, one scale, one meaning.

Dual-axis charts are almost always a mistake. They pretend that two series on the same visual space are comparable when they are, by definition, not comparable — you had to give them different scales to fit. The reader's eye reads proximity as correlation, and the chart has just manufactured a correlation that doesn't exist. If you need to show two series with different units, use two charts stacked vertically. Your reader will thank you.
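In matplotlib terms, the fix is one shared x-axis and two panels. A sketch with invented weekly data — the series, figures, and styling are all made up for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Invented data: two series with incompatible units.
weeks = ["W1", "W2", "W3", "W4"]
revenue = [3.8, 4.0, 3.9, 4.2]  # $M
tickets = [410, 395, 460, 430]  # count

# Two charts stacked vertically with a shared x-axis,
# instead of one dual-axis chart.
fig, (ax_rev, ax_tix) = plt.subplots(2, 1, sharex=True, figsize=(6, 4))
ax_rev.plot(weeks, revenue, color="#444444")
ax_rev.set_ylabel("Revenue ($M)")
ax_tix.plot(weeks, tickets, color="#444444")
ax_tix.set_ylabel("Support tickets")
fig.tight_layout()
```

Each series keeps its own honest scale, and the shared x-axis still lets the reader line up the weeks.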

5. Color is a variable, not a decoration.

Every color on a dashboard should encode something. If a bar chart is all different colors for no reason, the reader's brain is searching for a pattern that isn't there. Reserve color to highlight: the thing that is above target, the thing that is off, the segment being focused on. Grey is underrated. Most of a good dashboard is grey, with one or two strategic bursts of color pointing the reader at what matters.
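The "mostly grey, one burst of color" rule is easy to encode. A minimal sketch in Python — the hex values, region names, and target threshold are invented:

```python
GREY = "#b0b0b0"       # the default for everything that isn't the story
HIGHLIGHT = "#d62728"  # the one loud note

def bar_colors(values, target):
    """Grey every bar; highlight only the ones below target."""
    return [HIGHLIGHT if v < target else GREY for v in values]

regions = ["North", "South", "East", "West"]
sales = [102, 87, 115, 94]
colors = bar_colors(sales, target=95)  # South and West get the highlight
```

Passed as the color argument to most charting libraries' bar functions, this points the reader at the two regions that are off, and at nothing else.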

Data visualization: color as variable, not decoration. The best dashboards are mostly grey, with one loud note pointing at the story.

6. Filters are a liability, not a feature.

Each filter you add is a decision you're passing to the user. Users are not analysts; most will never touch a filter. What they will do is look at whatever's on screen by default and treat it as the truth. If your dashboard only tells a useful story when the user selects "region = West" and "product = premium," then the default view is a lie. Choose sensible defaults. Remove filters that don't meaningfully change the story. A dashboard with zero filters is a feature, not a limitation.
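One way to keep defaults honest is to declare them explicitly in the dashboard's spec rather than relying on whatever state the tool remembers. A hypothetical sketch — the filter names and structure are invented:

```python
# Hypothetical filter spec: every filter declares a default, and the
# defaults together define the view most users will ever see.
FILTERS = {
    "region": {"options": ["All", "West", "East"], "default": "All"},
    "product": {"options": ["All", "premium", "basic"], "default": "All"},
}

def default_view():
    """The state users see before touching anything.

    This is the view that must tell the true story on its own.
    """
    return {name: spec["default"] for name, spec in FILTERS.items()}
```

If the story only works when `default_view()` returns something narrower than "All", the filter defaults are the bug.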

7. Every chart has a title that makes a claim.

A title like "Revenue by Region" is a label. A title like "West region has carried total revenue for four consecutive quarters" is a claim. Claim-based titles are dramatically better. They tell the reader what you want them to see. They make the chart's purpose unmissable. They also force you, the designer, to know what story the chart is telling — and if you don't, you learn that you don't, which is valuable information.

8. Density comes from focus, not from adding more.

There is a temptation, especially under pressure from stakeholders who all want their thing on the dashboard, to add. Another chart, another KPI, another row of small multiples. Resist. Great dashboards look information-dense because every element is doing real work, not because the designer crammed more in. If a chart doesn't materially help the reader answer the dashboard's one question, it doesn't belong. Politely tell the stakeholder you'll build them a separate view.

9. The dashboard explains itself.

Every metric should be defined somewhere on the dashboard, ideally on hover, at minimum on a companion "definitions" page one click away. What is "active user"? What is "qualified pipeline"? The fact that you and three other analysts know is not enough. Your dashboard will be used by a dozen people you've never met, none of whom will ask. Many of them will infer a definition and be wrong. Define your terms inline. It is the highest-ROI thing you can do in a single afternoon.
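A cheap way to enforce this is a glossary that lives next to the dashboard spec, plus a check that every metric shown has an entry. A hypothetical sketch — the metric names and definitions are invented:

```python
# Hypothetical glossary; in practice this would feed hover tooltips
# or the companion definitions page.
DEFINITIONS = {
    "active_user": "Logged in and took at least one action in the trailing 7 days.",
    "qualified_pipeline": "Open opportunities past stage 2, closing this quarter.",
}

def undefined_metrics(metrics_on_dashboard):
    """Return the metrics shown on the dashboard that have no written definition."""
    return sorted(m for m in metrics_on_dashboard if m not in DEFINITIONS)

# Run this whenever the dashboard spec changes.
missing = undefined_metrics(
    ["active_user", "qualified_pipeline", "net_revenue_retention"]
)
```

A non-empty result is a to-do list, and it catches the silent case where a dozen strangers would otherwise each infer their own definition.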

10. The dashboard is a product, not a deliverable.

The best dashboards are maintained. They are versioned. Someone owns them. When the underlying data schema changes, the dashboard is updated. When the business question evolves, so does the view. Dashboards that are treated as one-off deliverables decay quickly — a metric definition drifts, a filter breaks, a data source gets deprecated, and within a year the thing is lying to its users. Assign an owner. Schedule a quarterly review. Deprecate views that no one uses anymore. Treat the dashboard like the product it actually is.

The inverted ten

If you want a rapid audit of any dashboard in your organization, run it against the inverse of this list. Does the dashboard try to answer too many questions? Does the first chart fail to state the headline? Are numbers shown without comparisons? Is there a dual-axis chart? Is color decorative rather than meaningful? Are there more than a handful of filters? Do the chart titles describe rather than claim? Is there visual clutter from additions that accreted over time? Are metric definitions missing or hidden? Does anyone own this, or has it been orphaned?

Most dashboards, honestly audited, fail four or five of these. The encouraging news is that none of these principles require a new tool, a new platform, or a new vendor. They require taste, discipline, and a willingness to say no. Those are transferable skills. They'll outlast whatever BI software you happen to be using this year.

Good dashboard design is a subtractive art. The best dashboards are the ones you almost didn't add things to.

A closing note on taste

Dashboard design is often discussed as if it were a technical discipline. It isn't. It's a design discipline with technical constraints. The difference between a good dashboard and a great one is not the chart type selection — any competent analyst gets that right — it's the thousand small decisions about what to leave out, what to emphasize, what to call a thing, where to put the whitespace. Those are taste decisions, built over years of making worse versions and noticing what didn't work.

If you want to get better at dashboards, look at more of them. The good, the bad, the indifferent. Ask yourself, before you scroll past: what is this trying to tell me, and how well is it telling it? Build a personal collection of ones you admire. The fastest way to make a great dashboard is to have seen a thousand great dashboards first.