Readers scrutinizing the test scores reported on Wednesday in U.S. News & World Report’s rankings should approach those numbers with caution. After all, the hunt for gold-standard data will take you into some gray areas.
A handful of colleges have made news recently for intentionally fudging their enrollment data. In other cases, numbers are reported honestly but erroneously. Sometimes discrepancies arise because college officials interpret survey questions differently.
In many cases, the SAT scores that colleges report to the Education Department’s Integrated Postsecondary Education Data System, or Ipeds, differ from those reported to U.S. News. Recently, The Chronicle examined the SAT scores of 224 colleges ranked among national universities and liberal-arts colleges in last year’s U.S. News guide. Roughly one-fourth of those colleges submitted different scores to U.S. News and to the Education Department for the classes that entered in the fall of 2010.
Among those institutions, the typical college sent U.S. News a combined median mathematics and verbal score five points higher than the one it reported to the Education Department. But in a dozen cases, the U.S. News scores were 15 or more points higher.
Ross B. Peacock, director of institutional research at Oberlin College, says the data the college submitted to U.S. News excluded students enrolled exclusively in Oberlin’s conservatory, unlike the data submitted to Ipeds. As a result, the median SAT score for the college was 25 points higher in U.S. News.
Excluding those students was an error, according to Mr. Peacock. After he was contacted by The Chronicle, he says, he corrected last year’s scores with U.S. News and included conservatory students in Oberlin’s submission for the 2013 rankings.
Joseph P. Pettibon II, associate vice president for academic services at the 65th-ranked Texas A&M University at College Station, says the university’s median SAT was 15 points higher in U.S. News because of a data-entry error. After being alerted to the discrepancy by The Chronicle, Mr. Pettibon filed a correction with U.S. News.
But at the Johns Hopkins University, the exclusion of some students was deliberate. Cathy J. Lebo, assistant provost for institutional research, says that the submissions to U.S. News cover undergraduates admitted through the Office of Undergraduate Admissions. The Ipeds data cover all undergraduates on all campuses, including students in the music conservatory. As a result, the U.S. News score is 10 points higher.
Some discrepancies relate to timing. Ipeds data are due in the fall, but U.S. News deadlines are in the spring. Marin E. Clarkberg, director of the Office of Institutional Research at Cornell University, says that, between the two deadlines, the university was able to correct an error in its Ipeds data.
The definitions for SAT scores used by Ipeds and U.S. News are essentially the same, asking for the scores of first-time, degree-seeking undergraduate students. Colleges report the 25th- and 75th-percentile scores for each section of the test, which The Chronicle combined for its analysis. But the two sets of questions are long and worded differently.
“There’s also almost always some level of professional judgment that’s required in coding data,” says Randy L. Swing, executive director of the Association for Institutional Research. “Students just don’t line up as neatly as the definitions would suggest.”
But while Mr. Swing sympathizes with the demands placed on researchers’ time, some mistakes are too embarrassing to discount.
“Given the complexity of the data that institutional researchers have to deal with, there’s going to be some variation,” he says. “You pointed out in your story, though, some variation that’s too large to just say, ‘Well, shucks.’”