November 28, 2014

OECD Project Seeks International Measures for Assessing Educational Quality

The first phase of an ambitious international study that intends to assess and compare learning outcomes in higher-education systems around the world was announced here on Wednesday at the conference of the Council for Higher Education Accreditation.

The study, called the Assessment of Higher Education Learning Outcomes, is a project of the Organisation for Economic Co-operation and Development.

Richard Yelland, of the OECD's Education Directorate, is leading the project, which he said is expected eventually to offer faculty members, students, and governments "a more balanced assessment of higher-education quality" across the organization's 31 member countries.

After decades of quantitative growth in higher education, learning outcomes are becoming a central focus worldwide, Mr. Yelland noted in his presentation. For example, one aspect of the Bologna Process, through which a number of European nations are harmonizing their degree systems, involves defining learning outcomes. Learning-outcome measures are also increasingly being incorporated into quality-assurance mechanisms.

"Consensus is emerging on the need to improve and ensure quality for all," Mr. Yelland said.

The OECD project's first phase, or "strand," will be a feasibility study focused on developing learning measures.

Measuring General Skills

To determine to what extent generic skills can be measured across diverse institutions, languages, and cultures, the feasibility study is adapting the Collegiate Learning Assessment, an instrument developed by the Council for Aid to Education in the United States, to an international context. The online assessment will seek to measure generic skills, such as problem solving, critical thinking, and practical application of theory. The questions are not specialized, so that most undergraduates can answer them, regardless of their field of study.

At least six nations are participating in the feasibility study. At least 14 countries are expected to participate in the full project, with an average of 10 institutions per country and about 200 students per institution, for a total sample of roughly 30,000 students.
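As a rough check of how those figures combine (the sampling design itself is not described here; the snippet below simply multiplies the stated minimums), the projected sample size works out as follows:

```python
# Back-of-the-envelope check of the projected AHELO sample size,
# using the minimum figures quoted in the article.
countries = 14                  # at least 14 countries in the full project
institutions_per_country = 10   # average of 10 institutions per country
students_per_institution = 200  # about 200 students tested per institution

total_students = countries * institutions_per_country * students_per_institution
print(total_students)  # 28000 -- consistent with the "about 30,000" cited above
```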

Mr. Yelland's presentation drew several questions from the audience.

"I'm skeptical about some of the instruments that can be used for this analysis," Eduardo Marçal Grilo, a former minister of education for Portugal who represents a European foundation that he said is thinking of providing financial backing for the project.

"The problem is not to evaluate, but to do a comparison," he said. The project's target population will be students nearing the end of three-year or four-year degrees, and will eventually measure student knowledge in economics and engineering. Even though it will take into account students' backgrounds and national differences, Mr. Grilo said, and he wondered how you could effectively compare mechanical-engineering students from institutions in Britain, Italy, Switzerland, and the United States.

Mr. Yelland replied that this first phase of the project will involve determining just which common factors exist. Because its focus is on general skills, "part of it is going above content, to look at the way in which engineers actually use the knowledge they have," he said.

Another member of the audience worried about the implications for the autonomy of higher-education institutions, given all the discussion of producing a tool for comparing them. "If it is voluntary, why would any higher-education institution, any top institution, agree to use this tool if they think it is not in their best interest? Why would they want to take that risk?"

More Than a Ranking System

Mr. Yelland said in an interview that he thought much of the skepticism about the project stemmed from concerns that the project would end up being just another ranking system. "This isn't going to be a ranking," he said categorically. "It is so much more. If we manage to produce reliable data, some people may well turn it into rankings, but that is not what this is about."

Karine Tremblay of the OECD, who is coordinating the first phase of the project, noted that all of the existing rankings of universities are based on available data, although they vary in which criteria they emphasize. The OECD's project will provide new measurements whose object, she said, is not to offer the kind of snapshot assessment that rankings do, but to provide institutions with useful feedback. "If you think of a ranking as a house, this would allow improving the quality of the bricks," she said.

While the goal of the project is not to produce another global ranking of universities, the growing preoccupation with such lists has crystallized what Mr. Yelland described as the urgency of pinning down what exactly it is that most of the world's universities are teaching and how well they are doing it. "This is not about the top 100. There will only ever be 100 institutions in the top 100," he said. Rather, the project is about "shining a light where no light currently exists," into how the rest of the estimated 20,000 higher-education institutions in the world fare in teaching.

Mr. Grilo, despite his skepticism, said he thought the project was "worthwhile in itself" for the information it would generate. Others shared the view that it could yield valuable insights.

Judith S. Eaton, president of the Council for Higher Education Accreditation, said she was also skeptical about whether the project would eventually yield common international assessment mechanisms. But she added that "no matter what, there will be gains for the academic community."

As higher education becomes increasingly globalized, Ms. Eaton noted, the same sets of issues recur across borders and systems, about how best to enhance student learning and strengthen economic development and international competitiveness. Whatever it ends up yielding, she said, the OECD project is at the center of an emerging "international higher-education space and an international quality-assessment community."

Comments

1. 11211250 - January 28, 2010 at 07:58 am

This is a critically important step if we are to find better ways to teach STEM, for instance. With this kind of data we can compare, say, China with the US in content knowledge - but then also look at new metrics like scientific reasoning and problem solving - to see how well students taught under two very different methodologies use what they learn. We can perhaps learn how to develop strong critical-thinking skills in our children - which can only lead to a better world for all. After all, as Jefferson said, "Whenever the people are well-informed, they can be trusted with their own government." This should work globally.

2. grifflee - January 28, 2010 at 10:30 am

Using common assessments to compare what students are learning and using the data for program improvement is a good idea. Limiting the assessment to the skills that can be discerned by a test is not.

Every test comes with two important limitations: 1) students must produce work in a short period of time before they take a bathroom break; and 2) students must be isolated from the real world of chaotic and confusing information and competing voices to prevent cheating.

These two limitations prevent tests from evaluating some of the most important skills students will need in the real world of work and citizenship: 1) locating quality information; 2) evaluating sources; 3) identifying gaps in knowledge, contradictions, and ambiguities; 4) careful analysis of data; 5) integration of source material; 6) synthesizing ideas; 7) arranging and rearranging lengthy, detailed information; 8) ethical attribution of source material; 9) seeking and addressing multiple perspectives; 10) using a variety of rhetorical strategies; and 11) developing solutions and recommendations that reflect deeply held values. Some test-makers claim that their tests do evaluate these skills, but they do so at a greatly simplified level.

Although the CLA and other tests attempt to assess students' abilities in these areas, the necessity of limiting access to the real world of information and ideas and of conducting the assessment in one sitting results in assessments far beneath the cognitive challenges posed by even a typical first-year composition course.

Tests can provide solid data about whether students have mastered the facts and concepts required in a specific discipline. They can also test whether students have mastered the skills required for convergent-thinking tasks where a single right answer is expected, such as solving math problems. When applied to the so-called general skills, however, they are inept. The terms "general skills" or "generic skills" belie the complexity of good writing, verbal reasoning, problem-solving, and critical thinking, in which no single right answer is expected.

The danger of the initiative described in this article is an underestimation of the complexity of higher-order cognitive skills and an overestimation of the construct validity of tests to measure them. If results from tests are used to "drive" improvement in curriculum and instruction, they will inevitably drive them downward to the simplistic level of the tests.

This is not to say that assessments of higher-order skills are impossible and we should give up. Many of us are developing higher-quality assessments, including ways to make portfolio scoring reliable, the use of common rubrics, and engaging faculty in collaborative assessments made possible by Web 2.0 technologies. We should continue to use tests for content knowledge and convergent-thinking skills, but resist testing of higher-order cognition. A better way is coming!
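The comment does not spell out what making portfolio scoring "reliable" would involve; as one hypothetical sketch, inter-rater agreement on a shared rubric is a common check, and it can be computed in a few lines. The rubric scale, the two raters, and the scores below are invented for illustration only.

```python
from collections import Counter

# Hypothetical rubric scores (1-4 scale) given by two faculty raters
# to the same ten student portfolios; the data are invented.
rater_a = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_b = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's marginal score distribution
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[s] * counts_b[s] for s in set(rater_a) | set(rater_b)) / n**2

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(f"observed agreement: {observed:.2f}, kappa: {kappa:.2f}")
```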
