
Pondering PISA’s Promise for Higher Ed

How should the United States interpret last week’s international PISA test scores? And do the results of the assessment, which is administered to 15-year-olds around the world in reading, math, and science, have significant implications for higher education? Some thoughts:

The stellar scores of students from Shanghai on the exams, which are sponsored by the Organisation for Economic Co-operation and Development (OECD), instantly fed into U.S. angst about our place in the worldwide educational pecking order. But they shouldn’t necessarily be read that way. Shanghai simply isn’t representative of China as a whole: It’s a talent magnet and the beneficiary of extensive government investment in education. Scores for the United States and other nations, by contrast, reflect the performance of a cross-section of teenagers. The New York Times commendably notes this in the second and third paragraphs of its story, but the inevitable “The Chinese Are Eating Our Lunch” meme may simply be too hard for commentators and policy makers to resist.

Note, too, that the performance of U.S. students hasn’t declined. They actually made some gains in science and math, rising to the international average in science while remaining below it in math among the 34 OECD member countries. In reading, scores were more or less unchanged, leaving American teenagers in the middle of the pack internationally.

These results will of course serve as a Rorschach test for everybody’s views on what kinds of education policy are most successful or deleterious, whether in high-performing places like China, Finland, and South Korea, or here in the lackluster United States. Probably the best and funniest instant analysis of this phenomenon came in this laconic early-morning blog post from Kevin Huffman. “The cool thing about this is that we can do this again in 2012-13,” he concluded. “It’s so awesome to have a Sputnik moment every three years.”

As I’ve argued in the context of university education, the high performance of other nations ought not to be cause for hand-wringing in the United States. Educational improvement is not a zero-sum phenomenon—we’re all better off in a world in which more countries successfully build human capital.

All this said, whether or not one believes that we need a Manhattan Project, a Marshall Plan, or some equivalent, there is little question that the United States is underperforming vis-à-vis its potential. The mediocre showing of U.S. students is not something about which we ought to be complacent. And our students’ disappointing results on previous PISA tests have had a useful effect in galvanizing continuing efforts at elementary and secondary education reform.

PISA results also matter for higher education, in the United States and everywhere else, because strong secondary-school preparation is vital to creating successful universities. Without improvements throughout the educational pipeline, even better access to postsecondary education may not be accompanied by a corresponding level of degree completion, as the U.S. experience, unfortunately, demonstrates.

International comparisons of universities on measures such as graduation rates can, like the PISA results, be misread as winner-take-all exercises. Nevertheless, we could use new assessments that compare higher-education systems around the world and, where necessary, spur underperforming nations into action. I believe global university rankings can be more helpful than critics acknowledge, but they compare individual institutions rather than entire countries, and they do nothing to measure what undergraduates actually learn on campus.

As it happens, the OECD is also playing an important role in improving this state of affairs with its relatively new AHELO project, short for Assessment of Higher Education Learning Outcomes. This academic year and next, AHELO is carrying out a feasibility study of tests in specific subjects—economics and engineering—and in “generic skills” such as analytical reasoning in 15 countries, including Mexico, the United States, Sweden, Egypt, South Korea, Japan, and Australia. The initiative as currently envisioned will examine a modest number of universities in each country, not national samples of students. No wonder. A 2006 background memo describing the nascent project as “PISA for Higher Education” generated considerable controversy. Ever since, the OECD has backpedaled furiously, taking pains to note that AHELO isn’t intended to rank nations at all.

Still, it isn’t hard to see why a substantive comparison of teaching and learning in universities around the world could be very useful. That contentious 2006 memo (no longer available online, alas) made the case very well:

a direct assessment of the learning outcomes of higher education could provide governments with a powerful instrument to judge the effectiveness and international competitiveness of their higher education institutions, systems and policies in the light of other countries’ performance, in ways that better reflect the multiple aims and contributions of tertiary education to society.

The AHELO project faces considerable methodological challenges. But if it moves ahead successfully, as I hope it does, its architects aim eventually to produce value-added results, looking not only at a snapshot of what students know but at how much they improve during their time at university. This would be a breakthrough in global higher-education assessment. Moreover, protestations from the OECD notwithstanding, it’s easy enough to see how a cross-national comparison of individual institutions today could eventually become a comparison of representative samples of student performance in different countries, just like … PISA! That would mean, yes, a global ranking of postsecondary learning outcomes.

Like last week’s international test results for 15-year-olds, such measures might give rise to misinterpretation, misplaced zero-sum alarmism, and so forth. But new and improved tools for self-assessment and global comparisons on important dimensions of higher education, whether created as an outgrowth of AHELO or through other initiatives, could serve as catalysts for improvement that are useful—and overdue.
