How do you spot a hot trend in higher education? Hint: Headlines in industry outlets simultaneously insist that everyone wants it, no one is sure what it is, and nobody knows if it’s any good.
Welcome to the wonderful world of “microcredentials.” They’re the new new thing in higher education. And that’s cause enough for well-deserved skepticism. Yet the idea has promise, even if realizing it will require fending off the charlatans and the usual suspects.
But let’s start with the basics: What are microcredentials anyway?
The idea is simple: Formal learning has to be measured somehow. Microcredentialing offers a way to more precisely assess and document that learning. Microcredentials signal completion of short courses in specific marketable skills. Some of the more popular and useful ones cover discrete expertise in high-demand fields like IT support, data analytics, and cybersecurity.
Formal education, whether the focus is anthropology or Adobe Photoshop, inevitably entails demonstrating completion of a course of study. Typically, we do this by requiring students to attend a class for a requisite chunk of time and earn a passing grade. Post-secondary education has made a lucrative business of bundling these courses into various credentials and degrees, which can help recipients land a job.
This model has problems. Learners can complete courses without ever mastering essential skills and knowledge. Because credentials and degrees represent a bundle of courses, earning them tends to be drawn-out and unnecessarily expensive, and it can compel learners to sit through classes they find unrewarding or irrelevant. Consequently, employers wind up sorting through applicants who’ve spent years pursuing costly degrees of uncertain content, even when all hiring managers really seek is a particular skill set.
It seems obvious there should be a better alternative. That’s where microcredentialing comes in. If credentials are competency-based, they can better ensure that students actually master the material in question; make it easier to hold providers accountable for successfully teaching the relevant knowledge or skills; and allow for timely, cost-effective learning.
There’s good reason, however, to ask whether microcredentialing will actually work this way. For one thing, many microcredentials are time-based rather than competency-based—they’re just short, week-long courses. Thus, there’s no assurance that students have mastered the content. And there’s no good way to judge whether specific offerings are rigorous, cost-effective, or respected by employers.
Even when microcredentialing is competency-based, things are much tougher in practice than in theory. Doing microcredentialing well requires determining what should be covered in a given credential, how to measure it, and how to assess the result.
You may be thinking, “Wait, none of this is really all that new. Community colleges and industry certification programs have been doing this kind of thing for a very long time.” Such a reaction isn’t wrong. What’s arguably changed is the scale and scope of things. New technologies make feasible a more bespoke, robust, and employer-friendly market, with a massive number of increasingly hyper-specialized credentials.
As of this writing, Credential Finder lists 46,133 credentials offered by 2,261 organizations. And that captures only a small slice of the total credentials out there. The possibilities and the problems are both easy to see. How does one know which are valuable? How do employers know which to take seriously? If employees are paying to earn credentials, who will reliably monitor and track all of this?
Indeed, as with any innovation in the post-secondary space, microcredentialing risks being captured by colleges or tech vendors and turned into a cash cow or a forum for silliness (or both). For instance, the State University of New York (SUNY) has positioned itself as a leader in the space. But the results are not inspiring. SUNY’s vast list of microcredentials includes “Fundamentals of Diversity, Equity, Inclusion and Sense of Belonging (DEIS) for Leaders”; “Global Alumni Relations”; “Understanding the Impact of Environmental Social Governance (ESG)”; “Crime and Justice in a Diverse Society”; and “Feminism and Visual Literacy.” These all coexist with more straightforward offerings like “Programming with Python” and “Microsoft Office Expert.”
The sprawling SUNY roster brings to mind the excesses of a continuing-education catalog. For a credential whose whole raison d’être is its work-relevance and precision, an encyclopedic list dotted with vague, politicized, and hard-to-define topics is plenty disconcerting.
Given all of this, is there any reason to think that microcredentials will prove to be anything more than MOOCs 2.0? After all, a decade ago, there was a chorus of voices insisting that massive open online courses would “revolutionize” education. They didn’t.
That failure, though, may have been a function of what MOOCs sought to do, which was expand access to college courses. The bet was that lots of non-students wanted to sit in front of their laptops or tablets and learn from college professors, either as a leisure activity or because it’d be professionally useful. It turned out that few found these courses all that entertaining. Meanwhile, with good reason, employers didn’t trust that participating in (or even completing) a MOOC was a reliable measure of expertise.
Whereas MOOCs sought to make higher education more broadly available, microcredentialing is (theoretically) designed to address a more practical need. The emphasis on learning and validating useful skills in more efficient ways means that microcredentialing has the potential to be more significant. While SUNY’s phone book of ideologically tinged coursework may hold limited labor-market value, something like Google’s credentials might amount to much more.
So, does this mean that microcredentials really are the new new thing, after all? Well, the jury is very much out.
It depends on whether there are clear standards as to what constitutes essential skills or knowledge in a given course.
It depends on whether course assessments are rigorous, valid, and reliable.
It depends on whether the credentials get politicized or distorted by academic fashion.
It depends on whether courses are focused, cost-effective, and marketable, or just a new way for colleges or tech firms to make a buck.
Right now, microcredentialing can best be described as “all of the above”—it’s a stew of the sensible and the silly. The challenge will be to harness the promising parts and fend off the rest. That will prove a daunting task.
There is, though, cause to think things will progress. Increased computing power makes it easier to design, assess, and track microcredentials. In a graying workforce, the need for customized skill acquisition will continue to grow. Enhanced worker mobility and accelerated retraining will resonate with policymakers concerned about labor-market shifts (such as those fueled by clean energy or geopolitics). And the appeal of just-in-time, stackable, cost-effective credentials suits a nation increasingly used to tailored offerings.
My two cents? I suspect that microcredentials (in some form or fashion) will eventually be a big part of any attempt to rethink higher education and workforce training. But that’ll require changing rules, funding streams, hiring practices, and more. In short, while “eventually” could mean the 2020s, I’d bet it’s more likely to mean the 2030s … or 2040s.
So don’t hold your breath.
Frederick M. Hess is director of education policy studies at the American Enterprise Institute and author of The Great School Rethink (Harvard Education Press, 2023).