Indicator: Institutions’ contributions to student outcomes

Definition

Schools and colleges contribute to students’ short- and long-term outcomes.

Recommended metric(s)

•    K–12: Schools’ contributions to student outcomes, including achievement, attendance, social-emotional learning, college enrollment, and earnings, using value-added models 
•    Postsecondary: Colleges’ contributions to student outcomes, including graduation rates, earnings, and student loan repayment, using value-added models 

Type(s) of data needed

Administrative data; assessment data; student transcript data; surveys

Why it matters

School effectiveness measures aim to capture schools’ impacts on student achievement as measured by test scores, as well as on longer-term outcomes such as high school graduation, college access and success, and eventual earnings. Status measures such as college enrollment or completion rates conflate who institutions serve with how well they serve them. By contrast, approaches to measuring institutions’ contributions to student outcomes account for students’ initial performance levels (for example, using prior test scores) or other background characteristics. These analyses can paint a different picture of institutional effectiveness than status measures. For instance, analyses of nationwide data by the Educational Opportunity Project at Stanford University showed that, although test scores are higher, on average, in more affluent school districts, the relationship between school affluence and student outcomes does not hold when examining student learning growth. Measures of institutional effectiveness can thus help E-W systems identify the institutions that exceed (or fail to meet) expected outcomes for students given their prior performance.

Evidence of disparate access to effective schools is mixed across studies, which are based on different measures, outcomes, and settings. For example, one large-scale study of schools’ contributions to students’ performance on the ACT found that schools with greater shares of students from low-income households, or of Black, Indigenous, or Latino students, tended to have lower value-added scores. On the other hand, a study that measured Louisiana high schools’ contributions to students’ high school graduation, college enrollment and persistence, and earnings found little or no relationship between schools’ contributions to these outcomes and the share of students from low-income households in the school. At the postsecondary level, researchers who have measured colleges’ contributions to student outcomes have found variation across institutions, but they have not examined how these measures relate to students’ demographic characteristics. They have found, however, that although college selectivity has little or no relationship to value-added, inputs such as instructional expenditures per student and the faculty-to-student ratio are significantly positively related to colleges’ value-added.

What to know about measurement

Value-added and other growth models require linking schools or colleges to student outcome data (such as test scores from two or more academic years, so that growth can be measured). As of 2021, all states included at least one approach to measuring growth on standardized tests in their school accountability plans under the Every Student Succeeds Act (ESSA). The most popular approach was student growth percentiles (used by 24 states as of 2019); eight states implemented value-added measures. One appeal of value-added models relative to other approaches is that schools’ contributions to multiple student outcomes can be examined. Using K–12 records, value-added models have been used to measure schools’ contributions to student attendance, course completion rates, social-emotional learning, and high school graduation, in addition to test scores. Recent work also has linked K–12, postsecondary, and wage records to measure schools’ contributions to longer-term outcomes.

In places that do not already calculate value-added or similar measures, framework users should consult with experts to implement this indicator, as there are several approaches to computing value-added, each with different technical and practical considerations. In practice, many states incorporate student growth data into their school accountability systems through other approaches, which vary in their validity and comparability as measures of schools’ contributions to student outcomes. Users should also interpret the results of value-added measures carefully so as not to reinforce existing inequalities by “explaining away” inter-group differences that might be addressed by system conditions or interventions.
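The core logic of a value-added model can be illustrated with a simplified sketch: predict each student’s current outcome from a prior measure, then summarize how far each school’s students deviate, on average, from that prediction. The data below are hypothetical, and real implementations use many more students, additional covariates, and statistical adjustments such as shrinkage; this is an illustration of the idea, not a production method.

```python
from statistics import mean

# Hypothetical student records: (school, prior_score, current_score).
records = [
    ("A", 50, 58), ("A", 60, 66), ("A", 70, 75),
    ("B", 50, 52), ("B", 60, 61), ("B", 70, 70),
    ("C", 80, 88), ("C", 90, 97),
]

# Step 1: fit a simple least-squares line predicting the current score
# from the prior score, pooling all students.
xs = [prior for _, prior, _ in records]
ys = [current for _, _, current in records]
x_bar, y_bar = mean(xs), mean(ys)
slope = (
    sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    / sum((x - x_bar) ** 2 for x in xs)
)
intercept = y_bar - slope * x_bar

# Step 2: a school's "value-added" estimate is its students' average
# residual -- how much they outperform (or underperform) the score
# predicted from their prior performance.
value_added = {}
for school in sorted({s for s, _, _ in records}):
    residuals = [
        y - (intercept + slope * x)
        for s, x, y in records
        if s == school
    ]
    value_added[school] = mean(residuals)

for school, va in value_added.items():
    print(f"School {school}: value-added {va:+.2f}")
```

In this toy example, schools A and B serve students with identical prior scores, yet A’s students grow more than predicted and B’s grow less, so A receives a positive estimate and B a negative one even though their raw averages differ less dramatically. This is the sense in which value-added separates whom a school serves from how well it serves them.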

Source frameworks

This indicator appeared in three source frameworks reviewed for this report. Our recommendation to use value-added models to measure an institution’s contributions to student growth draws from the National Academies’ research on defining quality in higher education. We also draw from Deutsch et al.’s discussion of promotion power.

References

The framework's recommendations are based on syntheses of existing research. Please see the framework report for a list of works cited.