The Higher Learning Commission (HLC), one of the nation’s largest institutional accreditors with more than 1,000 member organizations, recently announced a new endorsement application for short-term-credential providers, including microcredential organizations. Given the explosive growth of the non-degree credential marketplace (more than 1 million credentials are now available, and roughly 60 percent of all credentials come from non-academic providers), it is no surprise that HLC wants to attempt to tame what many have described as the “wild west” of the credential landscape. HLC member institutions also appear eager for such oversight: In a recent survey, 91 percent expected to expand their alternative credential offerings.
Recently, some have argued that accreditors should step up to the plate to help evaluate these programs using mechanisms parallel to those already applied to colleges and universities. But skepticism is well warranted. A recent joint report from the American Enterprise Institute and the Burning Glass Institute, entitled “Holding New Credentials Accountable for Outcomes: We Need Evidence-Based Funding Models,” found that only “12 percent of credentials deliver significant wage gains that earners wouldn’t have otherwise gotten,” and “just 18 percent of credential earners are likely to see wage increases their peers don’t enjoy.” The researchers analyzed 23,444 of the most commonly reported credentials from 2,056 providers, hardly a small sample.
Given the limited number of credentials that provide a good return on investment, it is reasonable to question whether accreditors, whose track record of evaluating higher-education institutions is weak, are genuinely prepared to judge microcredential providers. Evidence from the traditional accreditation space raises red flags. A 2022 report found that fewer than three percent of accreditor decisions actually punished a college for bad results or weak academic programs. And, in 2014, the Government Accountability Office reported that fewer than one percent of accredited schools ever lose their accreditation. Moreover, accreditor inaction does not imply that the institutions they approve perform well. One recent report found that more than a third of accredited colleges don’t graduate even half of their students, yet those schools still receive over $20 billion in federal student aid each year.
With this context in mind, it is worth taking a closer look at the process HLC has created and asking whether it is truly rigorous and whether other bodies, including the newly formed Commission for Public Higher Education (CPHE), should be accrediting microcredential providers at all.
HLC’s Endorsement Process
HLC announced its new endorsement application for short-term-credential providers in the fall of this year. Providers may apply by paying the $10,000 application fee (which rises to $15,000 after September 1, 2026), along with annual dues of $5,000 once approved. The endorsement is valid for two years, after which providers must complete a renewal process and pay a $1,500 renewal fee. HLC describes the endorsement process as a review of the provider’s “finances and operations, educational offerings, and student learning.” Notably, the endorsement applies to the provider, not to individual credentials.
The application is divided into three primary sections: “Stability and Operational Strengths,” “Offerings Alignment to Workforce and Assurance of Learning,” and “Learner Information and Protections.”
The first section asks providers to disclose their affiliations and recognitions: whether they are registered with Credential Engine or hold recognition from organizations such as ACE Credit Recommendations, the Better Business Bureau, and Digital Promise, among others. It also requires providers to list their offerings (certificates, certifications, microcredentials, badges, etc.) and identify the industries they serve and the distribution channels they use to deliver their content.
The second section focuses on alignment with workforce needs. Providers must document how they assess labor-market demand for their programs, how they align offerings to those needs, and how they ensure programs remain current. They must also explain their criteria for learner achievement and the assessment mechanisms used to verify skills.
The third section pertains to learner protections. Providers must indicate whether they disclose key information related to cost, time to completion, labor-market value, and return on investment and whether they maintain policies related to business continuity, refunds, complaints, and data protection. They must also report whether they offer guidance on the use of artificial intelligence.
After submission, applications are reviewed by one to two trained “endorsement evaluators,” who provide a recommendation for endorsement. HLC has the final say on whether providers are approved. The whole process takes four to six weeks. Providers may reapply in a future cycle if they are not recommended.
While HLC is right to require disclosures related to costs, time to completion, labor-market outcomes, and ROI, its process suffers from a crucial weakness. HLC sets no clear thresholds for any of these measures, nor does it publicly describe how applications are scored. Providers appear responsible for the quality of their own evidence, but HLC does not specify what counts as adequate performance. This lack of transparency makes it difficult for students, policymakers, or institutions to know whether an endorsement truly reflects program quality.
Writing for the Martin Center this summer, Sherman Criner aptly observed that “credentials should be judged … by whether they help learners get jobs, switch careers, or move up the wage ladder.” Yet HLC’s framework focuses heavily on inputs: processes, disclosures, and alignment checks rather than the actual outcomes experienced by learners. HLC should shift away from relying on inputs and instead require rigorous, verifiable data demonstrating whether credential earners gain jobs, experience upward mobility, or meaningfully improve their wages.
HLC’s decision to endorse entire providers rather than individual credentials is another significant shortcoming. As the “Holding New Credentials Accountable” report notes, “a provider’s brand is no guarantee of good outcomes.” The report noted, for example, differences in wage gains across Stanford University’s programs: Stanford’s Data Science Foundations program yields $4,200 in incremental wage gains, while its project-management certification “fails to boost earnings meaningfully.” If HLC endorses providers as a whole, it risks extending its stamp of approval to high-ROI and low-ROI programs alike. Endorsing individual credentials, or at the very least requiring and evaluating program-level outcome evidence, would be a far more meaningful approach.
Finally, the absence of a public rubric or scoring system means the rigor of the evaluation remains opaque. Without published scoring thresholds, summary rationales for decisions, or comparative data, HLC’s endorsement risks becoming yet another bureaucratic label, valuable to the provider seeking legitimacy but not necessarily to the student seeking value. After the first round of applicants, HLC should commit to transparency, including the publication of scoring criteria, decision rationales, and aggregated performance data. Information about the rigor of the evaluation process would help outside organizations gauge the weight of an HLC endorsement and learn more about the providers themselves. As state lawmakers consider providing state funding for these credentials, they should be concerned about program quality and whether the endorsement accurately reflects it.
What Role Should the Commission for Public Higher Education Play?
The Commission for Public Higher Education, established in June 2025, represents six university systems, including the UNC System, and is designed to accredit public institutions using a framework that concentrates on measurable student outcomes rather than institutional inputs. Given its mission, CPHE may be in a better position than HLC to evaluate non-degree credentials, particularly if states intend to fund them.
If states want to support short-term credentials, CPHE could evaluate and accredit individual programs, not entire providers. Doing so would align with CPHE’s focus on outcomes, which could draw on measures such as graduation rates, job placement, wage gains, return on investment, and other empirically grounded performance indicators. States have a clear interest in ensuring that programs receiving public support deliver real economic mobility rather than merely low-cost, low-value offerings.
However, CPHE is not the only possible evaluator. States could create their own review bodies or partner with independent organizations such as the Burning Glass Institute to evaluate programs for outcomes metrics. Introducing multiple evaluators would encourage competition both among reviewing bodies and among credential providers, leading to improved program quality and greater transparency.
Conclusion
Students pursuing an education, whether through traditional degrees or nontraditional paths, deserve confidence that their investment of time and money will pay off. Short-term credentials continue to expand and diversify, yet HLC’s new endorsement process, as currently designed, focuses heavily on inputs, offers little transparency, and fails to require meaningful evidence of learner outcomes. HLC should strengthen its endorsement by publishing scoring rubrics, setting clear performance thresholds, and evaluating the outcomes of individual programs rather than endorsing entire providers.
Other actors, including CPHE and state-led evaluation bodies, should consider developing rigorous, outcomes-based approaches to evaluating short-term credentials.
Madison Marino Doan is a policy analyst in the Center for Education Policy at the Heritage Foundation and the coauthor, most recently, of Slacking: A Guide to Ivy League Miseducation.