August 22, 2014

Researchers Criticize Reliability of National Survey of Student Engagement

The growing popularity and influence of the National Survey of Student Engagement troubles two researchers, who consider it a poor tool for evaluating institutional quality.

In a paper presented this week at the annual conference of the Association for the Study of Higher Education, Alberto F. Cabrera, a professor in the College of Education at the University of Maryland at College Park, and Corbin M. Campbell, a doctoral student there, challenge the survey's "benchmarks," which are intended to measure colleges' performance in five categories: level of academic challenge, active and collaborative learning, student-faculty interaction, enriching educational experiences, and supportive campus environment.

The individual benchmarks, the researchers argue, have a high percentage error and overlap with one another. "If each of the five benchmarks does not measure a distinct dimension of engagement and includes substantial error among its items, it is difficult to inform intervention strategies to improve undergraduates' educational experiences," Mr. Cabrera and Ms. Campbell say.

The paper, "How Sound Is NSSE?," also examines the connection between students' performance on the benchmarks and their grade-point averages, testing whether the two measures of success correlate for seniors at "a large, public, research-extensive institution in the Mid-Atlantic." Mr. Cabrera and Ms. Campbell found that only one benchmark, enriching educational experiences, had a significant effect on the seniors' cumulative GPA.

The director of the national survey, which is known as Nessie, welcomed its examination. "Lots of schools should be doing this kind of thing. They should be understanding how Nessie works on their campus," the director, Alexander C. McCormick, said.

He argued, however, that because most of the survey questions relate to students' current academic year, any GPA comparisons should stick to that same time period. Also, he said, he does not consider the benchmarks to be inherent underlying characteristics, or "latent constructs," as the researchers' statistical analysis assumes them to be. That assumption, he said, could make the percentage error higher than it would otherwise be.

Still, the survey has considered revising its measures. "We created these benchmarks to give people a way into the data," Mr. McCormick said, "but I think they have maybe drawn too much exclusive attention."

In one potential change, both institutional and national reports on Nessie's results may no longer treat "enriching educational experiences"—which Mr. Cabrera and Ms. Campbell found to have the highest percentage error—as a benchmark, but instead break out its individual measures.

The new paper is not the first scholarly criticism leveled at Nessie. A paper presented at last year's conference said the survey asks many questions that are too vague for students' answers to be meaningful or that fail to consider how human memory can be faulty and how difficult it can be to accurately measure attitudes. Other critics have asserted that the survey's mountains of data remain largely ignored.

Comments

1. henr1055 - November 18, 2010 at 04:48 pm

NESSE came about because the educationologists at the accrediting agencies are trying to justify their existence. They decided that attending class, giving presentations, doing research, writing papers, taking tests, working in class as groups, doing group assignments, using technology to find information, and their assigned grades were not really assessment, so they set out to create another type of assessment, which means visiting schools and having faculty committees print out reams of documents that somehow prove something. People got sick of wasting paper and time, so the "market" came up with NESSE and FESSE. The students at our school reported that they had done more than 20 assignments greater than 50 pages in length. Right. Anyway, NESSE and FESSE have made everyone happy. We cut back on the number of tree deaths in honor of "the true meaning of assessment," and the educationologists think they have discovered the new world.

GIVE ME A BREAK

TH

2. dboyles - November 18, 2010 at 06:11 pm

Surveys--a great way to dismantle indigenous and diverse microcultures and replace them with something overarching and worse.

3. gallaghd - November 19, 2010 at 10:03 am

Well, politics of surveys aside, the NSSE benchmarks correlate with GPA in the first year and senior year in the range of .1 to .2. This data comes from NSSE's own scale-development work, which can be found at their website. We replicated these tiny correlations, and virtually every study finds roughly the same values. There is no data showing that NSSE benchmarks are related to grades, retention, or anything else reflective of good outcomes. Institutions are making policy decisions on the basis of an instrument which is not empirically related to good outcomes! The construction of the benchmarks was done not on the basis of their own analyses but via some other, unknown, process. I believe engagement is critical, but I do not believe the NSSE measures the type of engagement which is related to success. Now for the politics: Are institutions using the NSSE just because everyone else is using it and accrediting agencies like to see scores from it?

4. gwwyo04 - November 19, 2010 at 10:43 am

It's also hard to chop down a tree with a shovel.

NSSE is a tool. Like any other tool, it has its limitations and purposes.

NSSE suggests that "Student Engagement" is an effective proxy for "Student Success" and then uses a compendium of measures for defining student success. Picking and choosing one benchmark or one subset of measures and then claiming that the whole instrument is invalid because each individual item does not correlate to the larger picture seems to be an unfair indictment of the instrument.

Further, it's only one tool. Any institution that makes policy decisions on the basis of one result or one instrument isn't doing assessment properly, imho.

5. optimysticynic - November 19, 2010 at 11:03 am

To say instruments like this exist to give assessors and accrediting bodies something to do simply pushes the causal question back another link. Why do we have so many assessors and accrediting bodies? Because we have bought into the business model...for everything, every institution, every cultural expression, every domain. Bottom line, value-added, economic outcome--these are our rulers. At base, the issue is political.

6. uaeobserver - November 19, 2010 at 11:05 am

I think the researchers question the validity more than the reliability of NSSE. I've used NSSE for a decade. The results are reliable (I can replicate them - and my results are very stable over time).

What the researchers question is whether the factors presented in NSSE are related to important criterion variables (e.g., retention and academic performance).

I think NSSE is best used for marketing and promotion. I really wish accreditors and education researchers would stop fooling themselves about the educational value of the instrument.

When it comes to actionable education research, I'm less confident that I can use NSSE to improve student academic success.

7. sam_michalowski - November 19, 2010 at 12:23 pm

A second here on the weak correlations of the NSSE benchmark scores with GPA (both the GPA earned in the term students were surveyed and cumulative GPA). Nothing over .150 for first-year students for most of the benchmarks. Student-faculty interaction was the highest of all for senior-year students, at .260. Enriching educational experiences was never significant. And, to support the authors' point, the benchmarks themselves are moderately correlated with one another (.30 up to .56).
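For scale, a correlation translates into shared variance by squaring it, which is why values in the .1 to .26 range quoted in this thread are called weak. A minimal pure-Python sketch (the `pearson_r` helper and the figures here are illustrative, drawn from the numbers posted above, not from any new analysis of NSSE data):

```python
def pearson_r(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The strongest benchmark-GPA correlation quoted in this thread:
r = 0.26
print(f"r = {r}, variance explained = {r * r:.1%}")  # prints: r = 0.26, variance explained = 6.8%
```

Even the best case, then, leaves roughly 93% of the variance in GPA unaccounted for by the benchmark score.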

Other outcomes, many longitudinal, may correlate better with NSSE items as GPA may not have the clearest relationship to engagement as measured by that survey.

Any sense on how accrediting bodies are responding/may respond to this spate of critical takes on this survey?

8. dwch6440 - November 19, 2010 at 12:52 pm

I, for one, applaud the attention being given to NSSE results. We all know that performance measures drive institutional goals and budget allocations. Many traditional performance measures focused on fixed assets (laboratory space, library collections) or prestige factors (prize-winning scientists, National Merit finalists). NSSE seems to have given some weight to the actual classroom experience of students. I'm not troubled at all by the fact that NSSE results aren't aligned with grades. Does anyone think grades actually represent a uniform standard of performance for students? Certainly NSSE can be improved, but I'm glad the discussion is at least about student experience and not the size of our football stadiums.

9. 11167997 - November 19, 2010 at 01:11 pm

Yeah, dwch6440, but NSSE is not about what students learn, despite the attempts of the propaganda machine, e.g. the Spellings Commission, to pretend otherwise. I don't think founder George Kuh or current chief Alex McCormick would claim it's a measure of student learning, or even a proxy for same. The principal values of NSSE are for student services and general institutional self-reflection. There's nothing wrong with that--but don't let the business model of higher education push it any further.

10. gallaghd - November 19, 2010 at 03:00 pm

If NSSE is primarily useful for self reflection, it seems to me there is an opportunity to save lots of money and time by developing a local measure asking students to tell us how often they meet with faculty, how many papers of various lengths they write....
To say NSSE is a tool implies it has usefulness, but there is scant evidence for any usefulness at all. To say grades are not the best measure of learning does not provide any evidence that NSSE is measuring something akin to learning. A careful reading of some recently distributed material on the NSSE (see the website) does seem to suggest that NSSE benchmarks might be worthwhile as goals in themselves. I find that suggestion less than intriguing, but interesting.
The NSSE is said to measure things related to "important student outcomes" as seen in prior research (see the website). There is no empirical evidence at all that this is so.

11. professor_e - November 19, 2010 at 04:34 pm

NSSE is king of all indirect assessment instruments largely because of its marketing. Its people infiltrate every conference and every accreditation meeting, and it has big-name supporters. It does not discriminate as well as the Student Satisfaction Survey from Noel-Levitz, but on our campus it has helped us immensely with defining improvements that need to be made in many areas. NSSE is only as useful as your campus makes it. The researchers in the article were not careful--they don't even know the difference between reliability and validity...

For direct measures of student learning, you will need to go the route of national field tests, general education tests, eportfolios, etc.

12. gallaghd - November 22, 2010 at 10:07 am

In response to professor_e, and in general about the usefulness of NSSE: I guess the main issue I have with NSSE is that it seems not to be measuring anything of use (validity; and, as a telling note, the developers of the NSSE, see the website, say NSSE is valid because scores are similar when using different administration methods. That is reliability across methods, not validity). Benchmark scores are empirically unrelated to any indicators of performance, retention, etc. If an institution uses NSSE scores to make changes on campus, is there any evidence that those changes have any beneficial impact on students' lives other than to increase NSSE benchmark scores? And, if NSSE benchmarks are unrelated to anything important... I am very willing (and in fact hopeful) to see evidence of improvement in important student outcomes as related to changes inspired by NSSE benchmark scores. On a personal, and professional, note: I believe firmly that student engagement is critical for performance, retention, satisfaction, growth of maturity, etc. I do not believe NSSE is measuring that kind of engagement, and that is why it fails in its goals. Linda Suskie recently wrote on the growing tendency for learning-outcomes assessment to become more of an administrative task than an aid to student learning (apologies to Linda if I haven't got it exactly right). NSSE, in my opinion, seems little more than the former.

13. 11126724 - November 22, 2010 at 12:24 pm

Let's face it folks, NSSE does NOT measure learning, which is what most of us really care about. If you examine the benchmarks and the questions, it should be obvious to ANYONE who ever completed a research methods class that all they measure is student satisfaction, and that often poorly.

Student satisfaction is NOT the same as student learning. It is NOT even an indirect measure of student learning, hyperbolic statements by NSSE administrators on their website notwithstanding.

NSSE is merely another in a long series of educational frauds, purporting to be something it is not. Anybody who uses it as a proxy for student learning should go back to graduate school--if they can score well enough on the GREs to get in!

14. mohave - November 22, 2010 at 03:40 pm

To henr1055

Tom, where are you hanging your hat these days?

15. gallaghd - November 23, 2010 at 10:25 am

To their credit, the NSSE people do not claim it measures learning. They claim it measures things related to important student outcomes. They also claim that those "things" have been seen in prior research to be related to important outcomes. BUT, there is no empirical evidence presented for any of these assertions.
To further beat this dead horse:

16. gallaghd - November 23, 2010 at 10:35 am

Sorry, hit the wrong key before I was done.
Central problems with the dying horse:
1. As stated above, the lack of evidence for a relationship between the benchmarks and prior research.
2. In developing the measure, they used appropriate techniques (factor analysis, etc.) but then almost ignored those findings in constructing the benchmarks.
3. For the psychometrically minded: Cronbach's alphas are much too low for the benchmarks (i.e., the benchmarks are probably measuring some combination of things other than, or in addition to, a single construct).
4. In NSSE's own development work and in virtually all subsequent work, benchmark scores are extremely weakly related to indicators of academic performance and even more weakly related to retention.
5. There is no evidence that NSSE scores are related to anything of importance in students' lives.
6. It is a mystery to me why institutions continue to use this empirically unimpressive instrument. To do so, in my opinion, will lead to errors in decisions about what to do on campus. If NSSE does not measure anything related to important student outcomes, using it means that actions will not have an impact on important student outcomes. Then we will wonder why our efforts seem not to be having an impact. The answer is that we made decisions based on results of a measure that does not measure what we want to change.
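On the Cronbach's alpha point above: alpha estimates how consistently a set of items measures a single underlying thing. A minimal pure-Python sketch of the standard formula (an illustrative helper with toy data, not NSSE's actual items or scores):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for `items`: a list of per-item score lists,
    all of equal length, one score per respondent."""
    k = len(items)                      # number of items in the scale
    n = len(items[0])                   # number of respondents

    def pvar(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(pvar(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / pvar(totals))

# Items that agree perfectly yield alpha = 1.0; a benchmark built from
# loosely related items yields a much lower value.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

A low alpha for a benchmark is exactly the symptom described in point 3: the items are not hanging together as one dimension, so the composite score mixes several things at once.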

