
Data Equity Principle 6

Ensure data visualizations promote inclusion and awareness across culturally, linguistically, and racially diverse audiences.


An equitable approach to data visualization ensures data do not reinforce stereotypes and deficit narratives and are accessible to multiple audiences. Data visualization refers to the graphs, icons, pictures, colors, order, and labels used to represent patterns in data. Using visual representations to portray findings has the power to distill large amounts of evidence into digestible, visual narratives. However, if done without an equitable lens, visualizations can “otherize” particular groups, reinstate bias, and obscure findings for audiences without research backgrounds. Statistics are grounded in real people and communities. Data users have the power to reflect dignity, empathy, and respect for those narratives through equitable visualization practices.

Equitable data visualization employs colors, labels, ordering, graphics, and icons in consideration of the lived experiences that data communicate to the intended audience. In addition to following federal accessibility guidelines, data users should carefully consider how visualization elements might reinforce stereotypes. For example, graduated color palettes imply a scale and so should not be used for categorical data, such as listing racial groups. Similarly, choosing a male-presenting icon to depict a school principal can reinforce a stereotype that female-presenting individuals are not suited for leadership roles. Titles and labels should use person-first language, such as “people with disabilities” instead of “disabled people.” Asset-based framing can also shape how readers view statistics and the people behind them—for example, by showing the number of students “meeting benchmarks” rather than the number of students “below grade level.” Likewise, data visualizations should not default to using White students or individuals as the benchmark for other groups; data users should be mindful of which comparisons are most clear and meaningful.
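The palette distinction above can be made concrete. The following is a minimal matplotlib sketch, not an implementation from the framework itself; the group labels and counts are placeholders. It assigns a qualitative palette to categorical bars instead of a graduated one:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted rendering
import matplotlib.pyplot as plt

# Placeholder categories and counts -- not real data.
groups = ["Group A", "Group B", "Group C", "Group D"]
values = [42, 35, 28, 19]

# A graduated (sequential) palette such as "Blues" implies a low-to-high
# ranking, so categorical data get a qualitative palette like "tab10",
# whose hues are distinct but carry no implied order.
colors = [plt.cm.tab10(i) for i in range(len(groups))]

fig, ax = plt.subplots()
ax.bar(groups, values, color=colors)
ax.set_ylabel("Students meeting benchmarks")  # asset-based label
fig.savefig("benchmarks.png")
```

Note that the y-axis label also applies the asset-based framing described above ("meeting benchmarks" rather than "below grade level").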

Equitable data visualizations must keep their audience in mind, which should include the greater community from which the data were gathered. Using overly technical and jargon-filled visualizations is not only dismissive of some audiences, but also removes data ownership from communities and puts power back in the hands of researchers and decision makers. Accessibility, however, does not imply oversimplification. Data users must ensure the reader has the context, references, and annotations needed to appropriately interpret the data. In addition to information on the source of the data, when and why they were collected, who they represent, and limitations of the data, visualizations should include narrative text or other data that put outcomes in context and illuminate the systems that create disparities.

Visualizing data in context

A 2020 ProPublica interactive report titled What Coronavirus Job Losses Reveal about Racism in America allowed readers to explore trends in employment outcomes by race, gender, age, education, and income. As users scroll down the page, they see subgroup comparisons in employment trends. Narrative text in callout boxes provides structural interpretations for the disparities shown. Rather than exclude or combine subgroups with very small sample sizes (for example, Native American men without a high school degree), the ProPublica team displayed a callout box acknowledging the missing data. At the bottom of the page, text cautions readers against comparing subgroups with small differences and discusses other possible explanations for the trends. By providing contextual information and clearly acknowledging the shortcomings of the data, this data visualization tool offered readers key information to make informed inferences.

Applying this Principle

Key phases for this principle
Example applications

Build a team with diverse lived experiences to decrease the likelihood that implicit bias might appear in data visualizations. Establish common language norms, review processes, and iterative collaboration at the outset to ensure data teams embed inclusiveness in their own processes and, therefore, their products.


Acknowledge whom the analysis or resulting visualization does not represent. Acknowledging which groups are missing, whether due to insufficient data or the focus of the study, leaves space for improvement in future efforts. Consider whom to include in the “other” category and whether such a category is necessary. Identify the contextual information needed to appropriately interpret the data, including any limitations.


Ensure visualizations are accessible and are not likely to cause harm, such as by reinforcing stereotypes (consult the Urban Institute’s Do No Harm Guide for specific guidance on colors, labels, ordering, graphics, and icons). Provide opportunity for feedback, allowing community members to validate or reject the narrative portrayed and confirm that the visualization is easy to interpret. Although receiving feedback from community members is not always possible, try to offer them access before publication.
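One accessibility check that can be automated is text-to-background contrast. The sketch below implements the relative-luminance and contrast-ratio formulas from WCAG 2.x; it is an illustration of one check, not a replacement for a full review against the federal accessibility guidelines mentioned above:

```python
def srgb_to_linear(channel):
    """Linearize one 0-255 sRGB channel per the WCAG 2.x definition."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (srgb_to_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background reaches the maximum ratio, 21:1;
# WCAG AA asks for at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Running labels and annotations through a check like this before publication catches low-contrast pairings that are easy to miss by eye.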

Reflection Questions

  • Which groups or findings are readers’ eyes drawn to in this visualization? Is that the focus of the analysis?
  • What does the ordering or spatial organization of the data imply, even if inadvertently?
  • Do the colors, pictures, or icons reinforce any stereotypes? Could this visualization cause any potential harm if interpreted incorrectly? Which groups are considered in the “other” category? Do they exhibit similar trends, or are you grouping them for convenience? Can you use another term instead?
  • Is the visualization’s message clear and easy to interpret, without requiring large amounts of text? If not, is a visualization necessary?

Be On The Lookout

Be careful to not consistently place one race or gender as the default group in visualizations. Across U.S. government surveys and data reports, including the census, “White” is listed first and coded with a “1” in data records. Using “White” as the default or the primary group in data visualizations suggests that the experience of White people represents the benchmark, or standard, to measure desired outcomes against. Altering the order in which data appear depending on the focus of the analysis can not only avoid perpetuating harmful norms, but can also convey findings more clearly and meaningfully.
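As a small illustration of the ordering point, the sketch below sorts hypothetical subgroup values so that the focus of the analysis, rather than a fixed default group listed first, determines the display order. All names and numbers here are placeholders:

```python
# Hypothetical subgroup outcomes; every label and value is a placeholder.
outcomes = {"Group A": 61, "Group B": 74, "Group C": 58, "Group D": 69}

# Sort by the measured value rather than by a conventional listing order,
# so no single group is silently treated as the default or the benchmark.
ordered = sorted(outcomes.items(), key=lambda item: item[1], reverse=True)
labels = [name for name, _ in ordered]
print(labels)  # highest-outcome group leads the chart
```

Depending on the analysis, sorting ascending, alphabetically, or by the group that is the focus of the finding may be more appropriate; the point is that the order is a deliberate choice, not an inherited default.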

Additional Resources

  • Do No Harm Guide. This comprehensive guide by the Urban Institute offers principles, norms, and pitfalls to consider when applying equity awareness in data visualization. It includes a racial equity in data visualization checklist to keep on hand when producing data visuals.
  • Reverse Engineering Data Viz for Equity. This We All Count article details how data users can test their data visualizations against an audience’s understanding by using the Reverse Legend test. This technique helps assess how accessible a graphic is and how clearly its message comes across to broad audiences if taken out of context.
  • Designing Data Visualization with Empathy. This article by Bui argues for an empathy-centered approach to data visualization. The author highlights the focus of human-centered and person-first data use, arguing that focusing on the individual behind the data point through graphics, narrative, and context leads to stronger action.


The framework's recommendations are based on syntheses of existing research. Please see the framework report for a list of works cited.

This website was funded in part by the Bill & Melinda Gates Foundation.