
Do Data Dashboards Truly Illuminate?

Higher-ed systems are drowning us in visual noise.

In 2010, when I first entered the institutional-research field, the most common request from academic departments was a simple, almost humble “I need data.” A decade later, that request had evolved—sharpened, stylized, and rebranded—into “I want a data dashboard.” The linguistic shift is subtle, but the cultural transition it represents is enormous and not necessarily for the better. What appears to be progress toward more sophisticated data engagement often masks a deeper dysfunction in how higher education creates, presents, consumes, and understands datapoints.

The phrase “data rich and information poor,” or DRIP, aptly describes the environment in which colleges and universities now operate. Institutions often seem to pile data onto dashboards in a competitive sprint to demonstrate their analytical prowess. Who are they trying to impress? Donors eager for evidence of impact. Local businesses seeking accountability. Accreditors hungry for any documentation for their files. State agencies demanding transparency. The result is frequently an impressive array of aesthetically pleasing dashboards—made easier to produce with ubiquitous visualization tools such as Power BI and Tableau—featuring a dizzying assortment of trendlines, color palettes, and interactivities that delight many stakeholders but, crucially, impress more than they illuminate.

A strong illustration of this dynamic can be seen in the North Carolina Community College System, which offers an extensive suite of dashboards that together form a kind of visual encyclopedia. Yet, as the Nobel Prize-winning economist Herbert Simon famously observed, “a wealth of information creates a poverty of attention.” That principle is on full display here. Across more than 70 dashboards with several hundred possible permutations, the system has created something akin to a theme park for data-dashboard enthusiasts. For evaluators, policymakers, or more detached stakeholders, however, the abundance dazzles more than it clarifies. The trees are shown in granular detail, but little is immediately clear about the summative health, or the proverbial “vital signs,” of the forest itself.

North Carolina is hardly alone. Across the country, institutions find themselves in a kind of situational irony: Dashboards designed to inform end up overwhelming the senses and undercutting thoughtful analysis. The very tools intended to enhance understanding become barriers to it.

Part of the problem lies in how dashboards are requested and produced. Dashboard builders—often hapless institutional-research analysts—are placed in the unfortunate role of human vending machines, dispensing visualizations on demand from “higher ups.” These professionals are seldom substantively empowered to say “no,” even when a request adds complexity without adding value. Departments often fear being left out of what might be called the dashboard arms race; to them, dashboard presence equals institutional significance. In this way, dashboards have become the digital counterpart of the traditional strategic plan: a symbol of relevance, whether or not the content merits inclusion.

But the deeper issue is not the dashboards themselves—it is what their proliferation reveals about higher education’s persistent struggle to define and commit to a coherent set of performance metrics. Unlike businesses that rely on well-established indicators such as P/E ratios, profit margins, ROI, ROA, or net income, the nearly 5,000 accredited colleges and universities in the United States have no comparable standard. For decades, calls for higher education to operate more like a social-enterprise sector—rather than a social-welfare cooperative—have intensified. Yet the sector continues to resist standardization.

Why the resistance? Some of it is rooted in fear or in the instinct for turf protection. Some of it stems from a misunderstood view of shared governance that treats data ownership as a political rather than a collaborative matter. And some of it is tied to a romantic, almost nostalgic conviction that the impact of education cannot be fully measured. Whatever the reason, the vacuum created by the absence of common measures is swiftly filled by anything that can be visualized at all—often accompanied, figuratively, by the kitchen sink.

This leads to an even more concerning consequence: Presentation trumps process as dashboards allow users to bypass the analytical rigor that spreadsheets or similar traditional tools once required. Sleek visuals obscure the underlying methodology. Filters replace definitions. Trendlines substitute for understanding. Because humans are inherently visual creatures, the allure of a polished dashboard can overshadow important questions: What decisions led to these categorizations? What assumptions are built into the calculations? What limitations accompany the data? These questions fade into the background when presentation eclipses process.

Security concerns form another rarely discussed side-effect of the modern dashboard ecosystem. As enterprise resource planning systems (ERPs), data warehouses, and analytics tools shift from on-premises hosting to cloud-based environments, the connective tissue in the digital architecture becomes more complex and more vulnerable. Data pipelines, authentication layers, and API integrations—while generally reliable—are not immune to disruptions. Whether caused by system failures, vendor-side outages, or deliberate external attacks, such disturbances can have significant downstream consequences. In contrast, the humble spreadsheet living quietly on an internal server is comparatively insulated. In a world where institutional data flows through multiple platforms, vendors, and jurisdictions, even brief interruptions can impede operations, delay reporting, or jeopardize compliance.

None of this is an argument for eliminating dashboards. On the contrary, a well-conceived and thoughtfully executed set of dashboards is invaluable. When dashboards focus on core performance indicators—aligned with widely accepted standards such as those used by major accreditors, the National Center for Education Statistics, or state regulatory agencies—they become powerful instruments for transparency, decisionmaking, and institutional accountability. A concise suite of dashboards that showcase the “vital signs” of a college or university offers far more clarity than a sprawling assortment designed to satisfy every dean, vice president, project manager, and union leader.

But this raises an important question: What should institutions do with all the other data that departments want visualized?

The answer lies in rebalancing expectations. Just as in the business world, not all data warrant dashboard treatment. Operational, departmental, or project-level information is often best managed through traditional analytical methods—well-organized spreadsheets, narrative reports, or targeted briefing documents. These formats encourage deeper engagement, deliberative thinking, and more critical analysis. They do not tempt users into mistaking aesthetics for analysis. Rather, they privilege substance over style.

The real strength of a dashboard lies in its ability to provide a clear, compelling, high-level picture of organizational performance. That strength dissipates when dashboards proliferate without discipline, drowning users in visual noise. In such environments, dashboards cease to illuminate and instead contribute to the very data fog they were intended to cut through.

In the end, the familiar maxim “less is more” applies powerfully to modern academic data culture. Colleges and universities in the 21st century are well advised to resist the temptation to equate volume with value when it comes to the visualization of random datapoints. Fewer dashboards—crafted with intention, overseen by empowered institutional-research professionals, aligned with meaningful standards, and used consistently to inform decisions at a high level—will often outperform a vast array of dazzling but directionless visualizations that impress without illuminating.

Esam Sohail Mohammad is the executive director of institutional effectiveness and planning at Rogue Community College in southern Oregon and is a past chairman of the Kansas Council of Institutional Researchers of Two-Year Organizations (CIRTO). The views expressed are the author’s own and do not reflect those of any organization with which he is affiliated.