References

AASA. (2021, September). School District Spending of American Rescue Plan Funding. Available: https://aasa.org/uploadedFiles/ARP-Survey-Findings-090121.pdf.

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (2014). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.

Beaton, A.E. (1990). Epilogue. In A.E. Beaton and R. Zwick (Eds.), The Effect of Changes in the National Assessment: Disentangling the NAEP 1985-1986 Reading Anomaly (pp. 165–168). Princeton, NJ: Educational Testing Service.

Beaton, A.E., Barone, J.L., Campbell, A., Ferris, J.J., Freund, D.S., Johnson, E.G., Johnson, J.R., Kaplan, B.A., Kline, D.L., MacDonald, W., Mead, N.A., Mislevy, R.J., Mullis, I.V.S., Narcowich, M.A., Norris, N.A., Rogers, A.M., Sheehan, K.M., Yamamoto, K., Zwick, R., Braden, J., Burke, J., Caldwell, N., Hansen, M.H., Lago, J.A., Rust, K., Slobasky, R., and Tepping, B.J. (1988). The NAEP 1985-86 Technical Report. Princeton, NJ: Educational Testing Service. Available: https://eric.ed.gov/?id=ED355248.

Bejar, I.I. (2011, August). A validity-based approach to quality control and assurance of automated scoring. Assessment in Education: Principles, Policy & Practice, 18(3), 319–341.

Bejar, I.I. (2019). ASVAB AIG (WK, AR, MK, and GS) in Minutes of the Defense Advisory Committee on Military Personnel Testing: September 26-27, 2019 Meeting. Available: https://dacmpt.com/wp-content/uploads/2020/04/Full-DACMPT-Meeting-Minutes-Sep-2019-FINAL.pdf.

Bennett, R. (2011). Automated Scoring of Constructed-Response Literacy and Mathematics Items. Available: https://www.researchgate.net/publication/260346149_Automated_Scoring_of_Constructed-Response_Literacy_and_Mathematics_Items.

Bergner, Y., and von Davier, A.A. (2019). Process data in NAEP: Past, present, and future. Journal of Educational and Behavioral Statistics, 44(6), 706–732. Available: https://doi.org/10.3102/1076998618784700.

Bridgeman, B., Trapani, C., and Attali, Y. (2012). Comparison of human and machine scoring of essays: Differences by gender, ethnicity, and country. Applied Measurement in Education, 25(1), 27–40. Available: https://doi.org/10.1080/08957347.2012.635502.


Brown, G. (2019). Technologies and infrastructure: Costs and obstacles in developing large-scale computer-based testing. Education Inquiry, 10(1), 4–20. Available: https://doi.org/10.1080/20004508.2018.1529528.

Burstein, J., and Chodorow, M. (1999, June). Automated essay scoring for nonnative English speakers. In Proceedings of the ACL99 Workshop on Computer-Mediated Language Assessment and Evaluation of Natural Language Processing. College Park, MD.

Cahill, A., Fife, J., Riordan, B., Vajpayee, A., and Galochkin, D. (2020). Context-based automated scoring of complex mathematics responses. In Proceedings of the 15th Workshop on Innovative Use of NLP for Building Educational Applications. Seattle, WA.

Center for Process Data. (n.d.). American Institutes for Research. Available: https://www.air.org/project/center-process-data.

Chingos, M.M. (2012, November). Strength in Numbers: State Spending on K-12 Assessment Systems. Brown Center on Education Policy at Brookings. Available: https://www.brookings.edu/wp-content/uploads/2016/06/11_assessment_chingos_final_new.pdf.

Circi, R., Sikali, E., Sahin, F., Zheng, X., Hicks, J., Youn Lee, S., and Caliço, T.A. (2020, June 10). The Future is Here: Analyzing NAEP Process Data Using R. American Educational Research Association Virtual Research Learning Series. Available: https://www.aera.net/Professional-Opportunities-Funding/AERA-Virtual-Research-Learning-Series2020.

Corbett-Davies, S., and Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. Cornell University ArXiv. Available: https://arxiv.org/abs/1808.00023.

DiCerbo, K., Lai, E., and Ventura, M. (2020). Assessment design with automated scoring in mind. In A. Rupp, P. Foltz, and D. Yan (Eds.), Handbook of Automated Scoring. Boca Raton, FL: CRC Press.

Embretson, S.E., and Kingston, N.M. (2018). Automatic item generation: A more efficient process for developing mathematics achievement items? Journal of Educational Measurement, 55(1), 112–131. Available: https://doi.org/10.1111/jedm.12166.

Ferrara, S., Lai, E., Reilly, A., and Nichols, P.D. (2017). Principled approaches to assessment design, development and implementation. In A.A. Rupp and J.P. Leighton (Eds.), The Handbook of Cognition and Assessment, Frameworks, Methodologies and Applications (pp. 41–74). West Sussex, UK: Wiley.

Fife, J. H. (2017). The M-Rater™ Engine: Introduction to the Automated Scoring of Mathematics Items (Research Memorandum No. RM-17-02). Princeton, NJ: Educational Testing Service.

Foltz, P., Yan, D., and Rupp, A. (2020). The past, present, and future of automated scoring. In A. Rupp, P. Foltz, and D. Yan (Eds.), Handbook of Automated Scoring. Boca Raton, FL: CRC Press.

Ghosh, D., Klebanov, B., and Song, Y. (2020, April). An exploratory study of argumentative writing by young students: A transformer-based approach. In Proceedings of the 15th Workshop on Innovative Use of NLP for Building Educational Applications. Seattle, WA.

Gierl, M.J., and Haladyna, T.M. (2013). Automatic Item Generation: Theory and Practice. New York: Routledge.

Glaser, R., Linn, R., and Bohrnstedt, G. (1997). Assessments in Transition: Monitoring the Nation’s Educational Progress. Stanford, CA: National Academy of Education.

Haertel, E. (2016). Future of NAEP Long-Term Trend Assessments. A white paper prepared for the National Assessment Governing Board. Available: https://www.nagb.gov/content/dam/nagb/en/documents/newsroom/naep-releases/naep-long-term-trend-symposium/long-term-trends.pdf.


Herold, B. (2016, February 3). PARCC Scores lower for students who took exams on computers: Discrepancy raises questions about fairness. Education Week. Available: https://www.edweek.org/teaching-learning/parcc-scores-lower-for-students-who-took-exams-on-computers/2016/02.

Hoagwood, K.E., Olin, S.S., Storfer-Isser, A., Kuppinger, A., Shorter, P., Wang, N.M., Pollock, M., Peth-Pierce, R., and Horwitz, S. (2018). Evaluation of a train-the-trainers model for family peer advocates in children’s mental health. Journal of Child and Family Studies, 27(4), 1130–1136. Available: https://doi.org/10.1007/s10826-017-0961-8.

Hutchinson, B., and Mitchell, M. (2019, January). 50 years of test (un)fairness: Lessons for machine learning. In FAT* ’19: Conference on Fairness, Accountability and Transparency. Atlanta, GA.

Irvine, S.H. (2002). Introduction. In S.H. Irvine and P.C. Kyllonen (Eds.), Item Generation for Test Development. New York: Routledge.

Irvine, S.H. (2014). Computerised Test Generation for Cross-National Military Recruitment: A Handbook. Amsterdam: IOS Press.

Jacobson, L. (2021, August 6). Board overseeing Nation’s Report Card moves past equity dispute, adopting ‘forward-looking’ plan for new reading tests. T74 Newsletter. Available: https://www.the74million.org/board-overseeing-nations-report-card-moves-past-equity-dispute-adopting-forward-looking-plan-for-new-reading-tests.

Kosh, A.E., Simpson, M.A., Bickel, L., Kellogg, M., and Sanford-Moore, E. (2019). A cost–benefit analysis of automatic item generation. Educational Measurement: Issues and Practice, 38(1), 48–53. Available: https://doi.org/10.1111/emip.12237.

Leacock, C., and Zhang, X. (2014, April). Identifying Predictors of Machine/Human Reliability for Short Response Items. Paper presented at the annual conference of the National Council on Measurement in Education. Philadelphia, PA.

Leacock, C., Messineo, D., and Zhang, X. (2013, April). Issues in Prompt Selection for Automated Scoring of Short Answer Questions. Paper presented at the annual conference of the National Council on Measurement in Education. San Francisco, CA.

Lockee, B.B. (2021). Shifting digital, shifting context: (Re)considering teacher professional development for online and blended learning in the COVID-19 era. Educational Technology Research and Development, 69, 17–20. Available: https://link.springer.com/article/10.1007/s11423-020-09836-8.

Lord, F.M. (1980). Applications of Item Response Theory to Practical Testing Problems. Mahwah, NJ: Erlbaum.

Lottridge, S., Burkhardt, A., and Boyer, M. (2020). Automated scoring [Digital ITEMS Module 18]. Educational Measurement: Issues and Practice, 39(3).

Lottridge, S., Wood, S., and Shaw, D. (2018). The effectiveness of score-ability ratings in predicting automated scoring performance. Applied Measurement in Education, 31(3), 215–232.

Luecht, R.M. (2005). Some useful cost-benefit criteria for evaluating computer-based test delivery models and systems. Journal of Applied Testing Technology, 7(2). Available: www.testpublishers.org/journal.htm.

———. (2006). Operational issues in computer-based testing. In D. Bartram and R.K. Hambleton (Eds.), Computer-based Testing and the Internet: Issues and Advances (pp. 39–58). New York: Wiley and Sons.

———. (2012a). An introduction to assessment engineering for automatic item generation. In M. Gierl and T. Haladyna (Eds.), Automatic Item Generation (pp. 59–101). New York: Taylor & Francis/Routledge.


———. (2012b). Automatic item generation for computerized adaptive testing. In M. Gierl and T. Haladyna (Eds.), Automatic Item Generation (pp. 196–216). New York: Taylor & Francis/Routledge.

———. (2014). Computerized adaptive multistage design considerations and operational issues. In D. Yan, A. A. von Davier, and C. Lewis (Eds.), Computerized Multistage Testing: Theory and Applications (pp. 69–83). London, UK: CRC Press/Taylor & Francis Group.

———. (2016). Computer-based test delivery models, data and operational implementation issues. In F. Drasgow (Ed.), Testing and Technology: Improving Educational and Psychological Measurement (pp. 179–205). New York: Routledge.

———. (2020a). Generating performance-level descriptors under a principled assessment design paradigm: An example for assessments under the Next-Generation Science Standards. Educational Measurement: Issues and Practice, 39(4), 105–115.

———. (2020b). The Challenges of Principled Item Design. Paper presented in the symposium Principled Item Design: State of the Art at the annual meeting of the National Council on Measurement in Education. Online.

Luecht, R., and Burke, M. (2020). Reconceptualizing items: From clones and automatic item generation to task model families. In R. Lissitz and H. Jiao (Eds.), Applications of Artificial Intelligence to Assessment (pp. 25–49). Baltimore, MD: Information Age Publishers.

Markowetz, A., Błaszkiewicz, K., Montag, C., Switala, C., and Schlaepfer, T.E. (2014). Psycho-informatics: Big data shaping modern psychometrics. Medical Hypotheses, 82(4), 405–411. Available: https://doi.org/10.1016/j.mehy.2013.11.030.

Mathias, S., and Bhattacharyya, P. (2020, April). Can Neural Networks Automatically Score Essay Traits? 15th Workshop on Innovative Use of NLP for Building Educational Applications. Seattle, WA.

McGraw-Hill Education CTB. (2014, December 24). Smarter Balanced Assessment Consortium Field Test: Automated Scoring Research Studies (in accordance with Smarter Balanced RFP 17). Available: http://www.smarterapp.org/documents/FieldTest_AutomatedScoringResearchStudies.pdf.

Michel, R. (2021). Remotely proctored K-12 high stakes standardized testing during COVID-19: Will it last? Educational Measurement: Issues and Practice, 39(3), 28–30.

Mislevy, R.J. (2006). Cognitive psychology and educational assessment. In R.L. Brennan (Ed.), Educational Measurement, 4th ed. (pp. 257–306). Washington, DC: American Council on Education.

———. (2019). On integrating psychometrics and learning analytics in complex assessments. Data Analytics and Psychometrics: Informing Assessment Practices, 1–52.

Mislevy, R.J., Steinberg, L.S., and Almond, R.G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3–66.

Mosquin, P., and Chromy, J. (2004). Federal Sample Sizes for Confirmation of State Tests in the No Child Left Behind Act. Commissioned by the NAEP Validity Studies Panel. Available: https://www.air.org/resource/report/federal-sample-sizes-confirmation-state-tests-no-child-left-behind-act.

NAEP (National Assessment of Educational Progress). (2013). NAEP Technical Documentation: NAEP Scoring. Available: https://nces.ed.gov/nationsreportcard/tdw/scoring.

NAGB (National Assessment Governing Board). (1996, August 2). Redesigning the National Assessment of Educational Progress.

———. (2017, May 19–20). Official Summary of Governing Board Actions. Available: https://www.nagb.gov/content/dam/nagb/en/documents/what-we-do/quarterly-board-meeting-materials/2017-08/02-may-2017-board-meeting-minutes.pdf.

———. (2018, March 3). Framework Development Policy Statement. Available: https://www.nagb.gov/content/dam/nagb/en/documents/policies/framework-development.pdf.


———. (2019a, November 15). Strategic vision activities led by COSDAM. In Committee on Standards, Design and Methodology Agenda (p. 3). Available: https://www.nagb.gov/content/dam/nagb/en/documents/what-we-do/quarterly-board-meeting-materials/2019-11/05-committee-on-standards-design-and-methodology.pdf.

———. (2019b). Reading Framework for the 2019 National Assessment of Educational Progress. U.S. Department of Education. Available: https://files.eric.ed.gov/fulltext/ED604485.pdf.

———. (2021, August 5). National Assessment of Educational Progress Schedule of Assessments. Available: https://www.nagb.gov/content/dam/nagb/en/documents/naep/Schedule%20of%20Assessments_080521.pdf.

NASEM (National Academies of Sciences, Engineering, and Medicine). (2017). Evaluation of the Achievement Levels for Mathematics and Reading on the National Assessment of Educational Progress. Washington, DC: The National Academies Press.

Nation’s Report Card. (n.d.). Response Process Data from the 2017 NAEP Grade 8 Mathematics Assessment. Available: https://www.nationsreportcard.gov/process_data.

NCES (National Center for Education Statistics). (2012). NAEP: Looking Ahead—Leading Assessments into the Future. Washington, DC: NCES.

———. (2013). The Nation’s Report Card: Trends in Academic Progress 2012 (NCES 2013-456). Washington, DC: Institute of Education Sciences, U.S. Department of Education.

NGSS Lead States (Next Generation Science Standards Lead States). (2013). Next Generation Science Standards: For States, By States. Washington, DC: The National Academies Press.

O’Malley, F., and Norton, S. (2022). Maintaining the Validity of the NAEP Frameworks and Assessments in Civics and U.S. History. Commissioned by the NAEP Validity Studies Panel. Washington, DC: American Institutes for Research.

Oranje, A., Mazzeo, J., Xu, X., and Kulick, E. (2014). A multistage approach to group-score assessments. In D. Yan, A.A. von Davier, and C. Lewis (Eds.), Computerized Multistage Testing: Theory and Applications. New York: Routledge.

Page, E. (2003). Project Essay Grade. In Automated Essay Scoring: A Cross-Disciplinary Perspective (pp. 43–54). Mahwah, NJ: Lawrence Erlbaum Associates.

Partnership for Assessment of Readiness for College and Careers. (2015, March 9). Research Results of PARCC Automated Scoring Proof of Concept Study.

Patz, R., Lottridge, S., and Boyer, M. (2019, April). Human Rating Errors and the Training of Automated Raters. Paper presented at the annual meeting of the National Council on Measurement in Education. Toronto, Canada.

Provasnik, S. (2021). Process data, the new frontier for assessment development: Rich new soil or a quixotic quest? Large-Scale Assessments in Education, 9(1), 1. Available: https://doi.org/10.1186/s40536-020-00092-z.

Raczynski, K., Choi, H-J., and Cohen, A. (2021, June). Using Latent Class Analysis to Explore the AI Score-Ability of Constructed-Response Items. Paper presented at the National Council on Measurement in Education (NCME). Online.

Ramineni, C., and Williamson, D. (2018). Understanding mean score differences between the e-rater® automated scoring engine and humans for demographically based groups in the GRE® general test. ETS Research Report Series, 2018(1), 1–31.

Riordan, B., Bichler, S., Bradford, A., Chen, J., Wiley, K., Gerard, L., and Linn, M. (2020, April). An Empirical Investigation of Neural Methods for Content Scoring of Science Explanations. 15th Workshop on Innovative Use of NLP for Building Educational Applications. Seattle, WA.

Roll, I., and Winne, P.H. (2015). Understanding, evaluating, and supporting self-regulated learning using learning analytics. Journal of Learning Analytics, 2(1), 7–12. Available: https://doi.org/10.18608/jla.2015.21.2.


Romero, C., and Ventura, S. (2020). Educational data mining and learning analytics: An updated survey. WIREs Data Mining and Knowledge Discovery, 10(3), e1355. Available: https://doi.org/10.1002/widm.1355.

Rudner, L.M. (2010). Implementing the Graduate Management Admission Test computerized adaptive test. In W.J. van der Linden and C.A.W. Glas (Eds.), Elements of Adaptive Testing (Chapter 8). New York: Springer. Available: https://link.springer.com/content/pdf/10.1007/978-0-387-85461-8.pdf.

Shermis, M., and Hamner, B. (2013). Contrasting state-of-the-art automated scoring of essays: Analysis. In M.D. Shermis and J. Burstein (Eds.), Handbook of Automated Essay Evaluation: Current Applications and New Directions (pp. 313–346). New York: Routledge Academic.

Shermis, M., and Lottridge, S. (2019, April). Communicating to the Public about Machine Scoring: What Works, What Doesn’t. Paper presented at the annual meeting of the National Council on Measurement in Education. Toronto, Canada.

Shermis, M., Mao, L., Mulholland, M., and Kieftenbeld, V. (2017). Use of automated scoring features to generate hypotheses regarding language-based DIF. International Journal of Testing, 17(5), 1–21.

Steiber, A. (2014). The Google Model: Managing Continuous Innovation in a Rapidly Changing World. Switzerland: Springer International.

Strauss, V. (2020, May 30). Testing giants ACT and College Board struggle amid COVID-19 pandemic. Washington Post. Available: https://www.washingtonpost.com/education/2020/05/30/testing-giants-act-college-board-struggle-amid-covid-19-pandemic.

Swain, M., Wise, L., and Kroopnick, M. (2018). Feasibility of a Multi-Stage Testing Design. Presentation in the session Maintaining Quality Assessments in the Face of Change at the annual meeting of the National Council on Measurement in Education. New York, NY.

Topol, B., Olson, J., and Roeber, E. (2014, February). Pricing Study: Machine Scoring of Student Essays. Available: https://www.gettingsmart.com/wp-content/uploads/2014/02/ASAP-Pricing-Study-Final.pdf.

Ul Hassan, M., and Miller, F. (2019). Optimal item calibration for computerized achievement tests. Psychometrika, 84, 1101–1128. Available: https://doi.org/10.1007/s11336-019-09673-6.

van der Linden, W.J., and Pashley, P.J. (2010). Item selection and ability estimation in adaptive testing. In W.J. van der Linden and C.A.W. Glas (Eds.), Elements of Adaptive Testing (pp. 3–30). New York, NY: Springer.

Verschoor, A., Berger, S., Moser, U., and Kleintjes, F. (2019). On-the-fly calibration in computerized adaptive testing. In B. Veldkamp and C. Sluijter (Eds.), Theoretical and Practical Advances in Computer-Based Educational Measurement: Methodology of Educational Measurement and Assessment. Cham: Springer. Available: https://doi.org/10.1007/978-3-030-18480-3_16.

von Davier, A.A., Deonovic, B., Yudelson, M., Polyak, S.T., and Woo, A. (2019). Computational psychometrics approach to holistic learning and assessment systems. Frontiers in Education, 4, 69. Available: https://doi.org/10.3389/feduc.2019.00069.

Wang, X., Talluri, S.T., Rose, C., and Koedinger, K. (2019). UpGrade: Sourcing student open-ended solutions to create scalable learning opportunities. In L@S ’19: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale, Article 17. https://doi.org/10.1145/3330430.3333614.

Way, D., and Strain-Seymour, E. (2021). A Framework for Considering Device and Interface Features that May Affect Student Performance on the National Assessment of Educational Progress. White paper commissioned by the NAEP Validity Studies (NVS) Panel. Available: https://www.air.org/resource/report/framework-considering-device-and-interface-features-may-affect-student-performance.


Williamson, D., Xi, X., and Breyer, F.J. (2012). A framework for the evaluation and use of automated scoring. Educational Measurement: Issues and Practice, 31(1), 2–13.

Wind, S.A., Wolfe, E., Engelhard, G., and Foltz, P. (2017). The influence of rater effects in training sets on the psychometric quality of automated scoring for writing assessments. International Journal of Testing, 18(1), 1–23.

Winter, P. C., Karvonen, M., and Christensen, L. L. (2018, August). Developing item templates for alternate assessments of English language proficiency. Madison, WI: University of Wisconsin–Madison, Alternate English Language Learning Assessment (ALTELLA). Available: http://altella.wceruw.org/resources.html.

Wood, S. (2020). Public perception and communication around automated essay scoring. In A. Rupp, P. Foltz, and D. Yan (Eds.), Handbook of Automated Scoring: Theory into Practice. Boca Raton, FL: CRC Press.

Yan, D., and Bridgeman, B. (2020). Validation of automated scoring systems. In D. Yan, A. Rupp, and P. Foltz (Eds.), Handbook of Automated Scoring: Theory into Practice. Boca Raton, FL: CRC Press.

Young, T., Hazarika, D., Poria, S., and Cambria, E. (2017). Recent Trends in Deep Learning Based Natural Language Processing. Available: https://arxiv.org/pdf/1708.02709.pdf.


