Indicator: Teachers’ contributions to student learning growth

Definition
Teachers contribute to students’ learning growth.

Recommended Metric(s)
Percentage of instructors demonstrating above-average contributions to student learning, as measured by student growth on state standardized tests or other outcomes (for example, using value-added models or student growth percentiles). CEDS Connections offer guidance, including data elements and step-by-step analysis recommendations, for how to calculate select metrics.

Type(s) of Data Needed
Administrative data; assessment data

Why it matters
As noted earlier, teachers are viewed as one of the most important contributors to student learning and social-emotional development.8, 9, 10, 11, 12, 13, 14, 15 One approach to measuring their contributions to student learning relies on measuring their students’ growth on learning outcomes (sometimes called “value-added”). Relative to status measures such as proficiency rates, which conflate whom instructors teach with how well they teach them, value-added models measure contributions to student outcomes by accounting for students’ initial performance levels (for example, using prior test scores) or other background characteristics. When teaching effectiveness is measured as instructors’ contributions to student learning, evidence of disparities in access to highly effective instructors is mixed.
Some studies find no differences in the average value-added of teachers of students from low- versus high-income households.16, 17 Others do find disparities along student household income, race, and ethnicity, though they are usually small.18, 19, 20, 21, 22 One study of more than 11,000 teachers in 10 school districts found that the highest-performing teachers (in value-added to student achievement) were underrepresented in the most disadvantaged middle schools, but not in elementary schools, though these patterns varied across districts.23 At the postsecondary level, less research has been done on college instructors’ contributions to student learning, though existing studies have found substantial differences in instructors’ value-added on student outcomes such as course grades and subsequent course-taking patterns.24, 25, 26 However, these studies have not examined whether students from low-income households and students of color have equal access to effective college instructors.

What to know about measurement
Value-added and other growth models require linking instructors to student outcome data (such as test scores from two or more academic years, so growth can be measured). As of 2019, 15 states used value-added or other growth models in a formal capacity to measure teacher effectiveness in K–12, another two states used them formatively, and 10 states reported local control over the decision to use value-added.1 At the postsecondary level, measuring college instructor value-added is challenging because instructors often design and administer their own assessments.
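To make the growth-versus-status distinction concrete, the following is a minimal sketch of the idea behind value-added, not any state’s actual model: predict current-year scores from prior-year scores, then average each teacher’s students’ residuals. All names and data here are hypothetical, and operational models add student and classroom controls, multiple years of data, and statistical shrinkage.

```python
import numpy as np

def simple_value_added(prior, current, teacher_ids):
    """Crude value-added score per teacher.

    Fit a least-squares line predicting current-year scores from
    prior-year scores, then average each teacher's students'
    residuals (actual minus predicted). Positive values mean a
    teacher's students grew more than their prior scores predict.
    """
    prior = np.asarray(prior, dtype=float)
    current = np.asarray(current, dtype=float)
    tid = np.asarray(teacher_ids)
    # Fit current ~= slope * prior + intercept across all students.
    slope, intercept = np.polyfit(prior, current, deg=1)
    residuals = current - (slope * prior + intercept)
    # Average each teacher's students' residuals.
    return {t: float(residuals[tid == t].mean()) for t in set(teacher_ids)}

# Hypothetical data: two teachers, three students each.
prior = [40, 50, 60, 40, 50, 60]
current = [48, 58, 68, 42, 52, 62]
teachers = ["A", "A", "A", "B", "B", "B"]
va = simple_value_added(prior, current, teachers)
# Teacher A's students land about 3 points above the overall
# prior-to-current trend line; teacher B's about 3 points below.
```

Note how both teachers’ students have identical prior scores here: a status measure such as a proficiency rate at a fixed cut score could rank the teachers identically or arbitrarily, while the residual-based measure separates them by growth.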
One way to address this shortcoming is to measure instructor impacts on subsequent grades and student course-taking patterns, though this method would not produce effectiveness measures for instructors who teach advanced-level courses.2 In places that do not already calculate value-added or similar measures, framework users should consult with experts to implement this indicator, as different approaches to computing value-added carry different technical and practical considerations. (For a review of research on measuring value-added, see Koedel et al.)3 These approaches may result in differences in measures of instructors’ effectiveness. For example, using student growth percentiles instead of value-added scores would have resulted in 14 percent of teachers in one district being placed in a different performance category.4

We caution against using value-added data as the only measure of teaching effectiveness (our recommendations also include measures based on classroom observation and student survey data; see classroom observations of instructional practice and student perceptions of teaching). When used for high-stakes accountability, measures of teachers’ contributions to student learning may have unintended consequences (for example, encouraging practices such as “teaching to the test”). These three measures have been shown to be valid and complementary measures of teaching effectiveness.5 Evaluation systems based on multiple measures may be more reliable than those based on a single measure.

Under the Every Student Succeeds Act (ESSA), some states have moved away from value-added models as an approach to teacher evaluation and toward a measure of student growth based on student learning objectives. This change resulted in part from concerns (including lawsuits and protests) regarding the uses of test scores for teacher evaluation purposes.
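Student growth percentiles, the alternative computation noted above, instead rank each student’s current score against peers with similar prior scores and summarize a teacher by the median of those percentiles. The following is a toy sketch under strong simplifications (exact prior-score peer groups stand in for the quantile-regression models used in practice; all data are hypothetical):

```python
import numpy as np

def growth_percentiles(prior, current):
    """Percentile rank of each student's current score among
    students with the same prior score (a toy stand-in for the
    quantile-regression models used in practice)."""
    prior = np.asarray(prior)
    current = np.asarray(current, dtype=float)
    pct = np.empty(len(current))
    for p in np.unique(prior):
        idx = np.where(prior == p)[0]
        group = current[idx]
        for i in idx:
            # Share of same-prior peers scoring at or below this student.
            pct[i] = 100.0 * np.mean(group <= current[i])
    return pct

def median_sgp_by_teacher(pct, teacher_ids):
    """Summarize a teacher by the median growth percentile of their students."""
    tid = np.asarray(teacher_ids)
    return {t: float(np.median(pct[tid == t])) for t in set(teacher_ids)}

# Hypothetical data: four students share prior score 50, four share 60.
prior = [50, 50, 50, 50, 60, 60, 60, 60]
current = [52, 55, 58, 61, 60, 63, 66, 69]
teachers = ["A", "B", "A", "B", "A", "B", "A", "B"]
sgp = growth_percentiles(prior, current)
by_teacher = median_sgp_by_teacher(sgp, teachers)
```

Because this measure ranks growth within peer groups rather than fitting a single trend line, it can order teachers differently than a residual-based value-added score computed from the same data, which is consistent with the category shifts described above.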
Student learning objectives are included in teacher evaluation plans in 28 states.6 Accepted measures of student learning objectives can include state tests, district benchmarks, school-based assessments, and teacher- and classroom-based measures. These differences would make it difficult to compare data across contexts on whether students are meeting student learning objectives. In addition, there is limited evidence on the validity or reliability of student learning objectives.7

Source frameworks
This indicator, or a version of it measuring teacher effectiveness, appeared in five source frameworks reviewed for this report. Our recommendation to measure teacher effectiveness through student growth on standardized assessments draws from the National Research Council’s Key National Education Indicators.

References
1 Close, K., Amrein-Beardsley, A., & Collins, C. (2019). Mapping America’s teacher evaluation plans under ESSA. Phi Delta Kappan. https://kappanonline.org/mapping-teacher-evaluation-plans-essa-close-amrein-beardsley-collins/
2 Figlio, D., & Schapiro, M. (2021). Staffing the higher education classroom. Journal of Economic Perspectives, 35(1), 143–162. https://doi.org/10.1257/jep.35.1.143
3 Koedel, C., Mihaly, K., & Rockoff, J. E. (2015). Value-added modeling: A review. Economics of Education Review, 47, 180–195. https://doi.org/10.1016/j.econedurev.2015.01.006
4 Walsh, E., & Isenberg, E. (2015). How does value added compare to student growth percentiles? Statistics and Public Policy, 2(1), 1–13. https://doi.org/10.1080/2330443X.2015.1034390
5 Chaplin, D., Gill, B., Thompkins, A., & Miller, H. (2014). Professional practice, student surveys, and value-added: Multiple measures of teacher effectiveness in the Pittsburgh public schools. Regional Educational Laboratory Mid-Atlantic, Institute of Education Sciences, U.S. Department of Education. https://eric.ed.gov/?id=ED545232
6 Close, K., Amrein-Beardsley, A., & Collins, C. (2019). Mapping America’s teacher evaluation plans under ESSA. Phi Delta Kappan. https://doi.org/10.1177/0031721719879150
7 Gill, B., Bruch, J., & Booker, K. (2013). Using alternative student growth measures for evaluating teacher performance: What the literature says. Regional Educational Laboratory Mid-Atlantic, Institute of Education Sciences, U.S. Department of Education. https://ies.ed.gov/ncee/rel/Project/369
8 Aaronson, D., Barrow, L., & Sander, W. (2007). Teachers and student achievement in the Chicago public high schools. Journal of Labor Economics, 25(1), 95–135. https://doi.org/10.1086/508733
9 McCaffrey, D. F., Sass, T. R., Lockwood, J. R., & Mihaly, K. (2009). The intertemporal variability of teacher effect estimates. Education Finance and Policy, 4(4), 572–606. https://doi.org/10.1162/edfp.2009.4.4.572
10 Nye, B., Konstantopoulos, S., & Hedges, L. V. (2004). How large are teacher effects? Educational Evaluation and Policy Analysis, 26(3), 237–257. https://doi.org/10.3102/01623737026003237
11 Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417–458. https://www.jstor.org/stable/3598793
12 Rockoff, J. E. (2004). The impact of individual teachers on student achievement: Evidence from panel data. American Economic Review, 94(2), 247–252. https://doi.org/10.1257/0002828041302244
13 Sanders, W. L., & Rivers, J. C. (1996). Cumulative and residual effects of teachers on future student academic achievement (Research Progress Report). University of Tennessee Value-Added Research and Assessment Center. https://www.heartland.org/publications-resources/publications/cumulative-and-residual-effects-of-teachers-on-future-student-academic-achievement
14 Blazar, D., & Kraft, M. A. (2017). Teacher and teaching effects on students’ attitudes and behaviors. Educational Evaluation and Policy Analysis, 39(1), 146–170. https://doi.org/10.3102/0162373716670260
15 Jackson, K. C. (2018). What do test scores miss? The importance of teacher effects on non-test score outcomes. Journal of Political Economy, 126(5). https://doi.org/10.1086/699018
16 Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers II: Teacher value-added and student outcomes in adulthood. American Economic Review, 104(9), 2633–2679. https://doi.org/10.1257/aer.104.9.2633
17 Isenberg, E., Max, J., Gleason, P., & Deutsch, J. (2021). Do low-income students have equal access to effective teachers? Educational Evaluation and Policy Analysis. https://doi.org/10.3102/01623737211040511
18 Sass, T., Hannaway, J., Xu, Z., Figlio, D., & Feng, L. (2012). Value added of teachers in high-poverty schools and lower poverty schools. Journal of Urban Economics, 72, 104–122. https://doi.org/10.1016/j.jue.2012.04.004
19 Mansfield, R. (2015). Teacher quality and student inequality. Journal of Labor Economics, 33(3), 751–788. http://dx.doi.org/10.1086/679683
20 Goldhaber, D., Lavery, L., & Theobald, R. (2015). Uneven playing field? Assessing the teacher quality gap between advantaged and disadvantaged students. Educational Researcher, 44(5), 293–307. https://doi.org/10.3102/0013189X15592622
21 Goldhaber, D., Quince, V., & Theobald, R. (2016a). Has it always been this way? Tracing the evolution of teacher quality gaps in U.S. public schools (CALDER Working Paper No. 171). National Center for Analysis of Longitudinal Data in Education Research. https://doi.org/10.3102/0002831217733445
22 Goldhaber, D., Quince, V., & Theobald, R. (2016b). Reconciling different estimates of teacher quality based on value added (CALDER Brief 14). National Center for Analysis of Longitudinal Data in Education Research. https://caldercenter.org/publications/reconciling-different-estimates-teacher-quality-gaps-based-value-added
23 Glazerman, S., & Max, J. (2011). Do low-income students have equal access to the highest-performing teachers? (NCEE Evaluation Brief 2011-4016). National Center for Education Evaluation and Regional Assistance. https://eric.ed.gov/?id=ED517966
24 Carrell, S. E., & West, J. E. (2010). Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118(3). https://doi.org/10.1086/653808
25 Figlio, D. N., Schapiro, M. O., & Soter, K. B. (2015). Are tenure track professors better teachers? The Review of Economics and Statistics, 97(4), 715–724. https://doi.org/10.1162/REST_a_00529
26 Xiaotao Ran, F., & Xu, D. (2018). Does contractual form matter? The impact of different types of non-tenure track faculty on college students’ academic outcomes. Journal of Human Resources, 56, 878–921. https://doi.org/10.3368/jhr.54.4.0117.8505R