November 2009

Researcher Harpoons the 'Nessie' Survey of Students

Members of the Association for the Study of Higher Education this week heard a sharp critique hurled at the influential annual study that many have relied on or used as a model for their own work: the National Survey of Student Engagement, widely known as "Nessie" after the acronym NSSE.

In a paper presented at the group's annual conference here on Friday, just three days before the scheduled release of this year's NSSE results, Stephen R. Porter, an associate professor of research and evaluation at Iowa State University, argued that the survey of undergraduates "has very limited validity for its intended purposes and that researchers and institutions must adopt a new approach to surveying college students."

Tapping into a large body of other research dealing with how people respond to survey questions, Mr. Porter complained that NSSE asks many questions that are of dubious relevance, are too vague for the answers offered by students to be meaningful, or fail to take into account shortcomings in human memory and the difficulties involved in precisely measuring attitudes.

In an interview, Alexander C. McCormick, director of NSSE, challenged some specific criticisms contained in Mr. Porter's paper and argued that NSSE administrators have determined through extensive discussions with student focus groups that students have very similar interpretations of the survey's questions. He added, however, that NSSE administrators are well aware the survey has some flaws that are likely to result in errors in some of its measures of students.

"Any survey instrument is a blunt instrument," said Mr. McCormick, an associate professor of education at Indiana University at Bloomington. "I think there is a lot in this paper that will be helpful to us as we think how we can improve NSSE."

Mr. Porter told his audience at the conference that he had chosen to "take a bold stand" in criticizing the survey because it plays such a major role in influencing college operations, government policy, and students' decisions about where to enroll. He said the annual reports of the survey's findings have "potentially life-altering consequences" and quite possibly have caused some colleges to be unfairly regarded as poor environments for students.

Mr. Porter stressed, however, that while his paper focused on NSSE, his intent was to make the broader point that many education researchers survey students with questions of dubious validity, asking them to assess their attitudes or to report facts about their behavior that many are unlikely to report accurately.

He said other social sciences have far more rigorous criteria for judging whether survey questions are valid, and he argued that many education researchers are under such pressure to produce publishable studies that they have little incentive to take the time necessary to test whether their questions are valid.

Varying Interpretations, Vague Quantifiers

Other research presented during the same session similarly raised questions about the reliability of student surveys. For example, Linda DeAngelo, assistant director of research at the Cooperative Institutional Research Program at the University of California at Los Angeles, and Serge Tran, associate director of data management and analysis at the university's Higher Education Research Institute, presented study findings showing that, while most students report their SAT scores fairly accurately on surveys, low-scoring students are more likely to be off in the numbers they give, and high-scoring students are more likely to exaggerate how well they did.

"Have we arrived at a point where all is not what it seems from the data we collected?" asked the session's moderator, Nathaniel Bray, an assistant professor of higher education at the University of Alabama at Tuscaloosa. Noting how education research is held to different standards than research in other fields, he said, "This is a debate that has been coming for a long time."

One of Mr. Porter's chief criticisms of NSSE is that many of its questions use words that are open to varying interpretations or ask students to report the frequency of behaviors on scales using vague quantifiers, such as "often," rather than actual numbers. As a result, his paper says, "it is likely that students do not understand much of what we ask them."

The 2009 survey, for example, asked students how often they had "discussed grades or assignments with an instructor" without clarifying whether "instructor" referred only to faculty members or also included graduate students who teach. Similarly, the survey asked students how often they had "serious conversations" with peers about certain subjects without clearly defining what was meant by either word, or posed questions using educational jargon, such as the phrase "thinking critically," that probably went over students' heads.

Mr. McCormick, the survey's director, acknowledged that NSSE is somewhat inconsistent in using the terms "faculty member" and "instructor" but found in its focus groups that students tended to use both terms interchangeably. He said NSSE generally could tell if students were having difficulty understanding a question — and make appropriate revisions — by looking at whether a large share of students had skipped over it.

In criticizing the survey's use of vague quantifiers, Mr. Porter's paper cited other research showing that students have very different ideas of what terms like "very often" mean with respect to various behaviors. When asked separately to provide actual numerical estimates of how frequently they engaged in certain behaviors, some students who had previously answered "very often" might say once a week; others, a dozen times or more.

If students are confused about the meaning of a question, they often take cues from the answer scale itself — thinking, for example, that because they see themselves as average, they must engage in an activity "sometimes" — or they provide what they consider the "right" answer, reasoning that good students probably engage in a specified activity often, and that, as good students, they must do so as well.

'Computer Hard Drives in Their Head'?

Mr. McCormick argued, however, that NSSE determined in its focus groups that students typically meant about the same thing when they offered a response like "often" to a given question. And he said Mr. Porter's critique appeared to assume mistakenly that NSSE seeks to quantify, in precise numbers, how often students engage in certain activities, when its intent is mainly to make relative comparisons between different subsets of students or different institutions.
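The statistical nub of that disagreement can be illustrated with a toy simulation (the rates, cutoffs, and scale below are invented for illustration and come from neither NSSE nor Mr. Porter's paper). Each simulated student maps an actual frequency onto a four-point scale using a private, random notion of what "very often" means, so any individual answer is hard to interpret, yet the campus-level averages still rank a more-engaged student body above a less-engaged one:

```python
# A minimal sketch, with invented parameters, of the vague-quantifier dispute:
# each simulated student maps frequencies onto a 1-4 scale using a personal,
# random cutoff for "very often," so single answers are noisy, but group
# means still order the two hypothetical campuses correctly.
import random

random.seed(1)

def mean_report(true_monthly_rate, n_students=2000):
    """Average 1-4 scale report for a campus whose students truly engage
    in an activity `true_monthly_rate` times per month, on average."""
    total = 0
    for _ in range(n_students):
        actual = max(0.0, random.gauss(true_monthly_rate, 2))  # true frequency
        cutoff = random.uniform(2, 12)  # this student's idea of "very often"
        if actual >= cutoff:
            total += 4        # "very often"
        elif actual >= cutoff / 2:
            total += 3        # "often"
        elif actual > 0:
            total += 2        # "sometimes"
        else:
            total += 1        # "never"
    return total / n_students

print(mean_report(8))  # more-engaged campus -> higher mean report
print(mean_report(4))  # less-engaged campus -> lower mean report
```

Whether real misinterpretations are random in this way, rather than varying systematically from campus to campus, is precisely the kind of assumption Mr. Porter's critique calls into question.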

Another major criticism in Mr. Porter's paper was that NSSE and similar surveys "implicitly view college students as having computer hard drives in their head" and thus as able to accurately recall their actions weeks, months, or even years before. Memories fade over time, and, because people have difficulty assigning dates to past events, they sometimes recall things that happened long ago as having occurred more recently than they actually did. As a result of those two flaws in memory, college seniors may report engaging in certain activities more often than freshmen do simply because they have more college memories to draw upon and because memories of those activities from their sophomore or junior years bleed into their recollections of more recent events.

The survey's questions about attitudes are especially fraught, Mr. Porter's paper says, because they appear to assume "attitudes exist in the respondent's head, and all the respondent has to do is reach in, read the file, and report an answer." In reality, he said, research shows that "most attitudes are rarely formed until a respondent reads the question, and attitudes vary greatly over time, due to respondents' forming and reforming an attitude each time they are asked a question about the attitude."

Mr. McCormick agreed that "there are a lot of problems with attitude questions," and he said "that is why we don't have many on the survey," which focuses on students' behaviors.

Broadly, Mr. McCormick argued that Mr. Porter's critique does not fully take into account students' survey-taking behavior or the practical need to gather information from large numbers of them. Earlier versions of NSSE, which was first administered in 2000, contained much more elaborate instructions, but students later admitted to skipping over them.

Any survey administered to students, he said, needs to strike a balance between clarity, in its instructions and in the wording of its questions, and "actually being something students are going to respond to." NSSE administrators conceivably could construct a survey in which every word was carefully defined, "but if hardly any students are going to fill out that survey, then that effort is wasted."

Mr. Porter's paper suggests that, instead of trying to measure student engagement through survey questions asking how often students engage in certain activities, education researchers should borrow from other fields, such as economics, and ask students to keep daily diaries, whose day-by-day accounts of how students spend their time are less prone to memory lapses and thus more accurate.

The paper says such changes "will require a serious reorientation of how we study students," in part because high costs will be involved in paying students to keep diaries and converting the information they provide into usable data sets. But, it argues, the payoff in terms of accuracy will be worth the costs.
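To make the paper's data-conversion point concrete, here is a minimal sketch, with invented field names and records, of how raw diary entries might be collapsed into per-student activity counts — the actual frequencies that vague survey scales only approximate:

```python
# A minimal sketch, with invented records, of turning raw diary entries into
# per-student activity counts: real numbers rather than "often."
from collections import Counter

# Hypothetical diary rows: (student_id, date, activity)
diary = [
    ("s01", "2009-11-02", "discussed assignment with instructor"),
    ("s01", "2009-11-03", "studied with peers"),
    ("s01", "2009-11-05", "discussed assignment with instructor"),
    ("s02", "2009-11-02", "studied with peers"),
]

# Count how often each student logged each activity:
counts = Counter((student, activity) for student, _, activity in diary)

for (student, activity), n in sorted(counts.items()):
    print(student, activity, n)
# s01 discussed assignment with instructor 2
# s01 studied with peers 1
# s02 studied with peers 1
```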

Comments

1. jeff1 - November 09, 2009 at 07:17 am

Indeed, I have been less than satisfied with NSSE results and the inevitable hand-wringing conversations that result from the reports. The data, beyond being not good, are hardly as actionable as they could be, and the response-rate and comparison-institution issues are difficult. Surveys give us one view and as such should continue to be improved. The other approaches (e.g., the log idea) sound very interesting and potentially much more useful.

2. ksledge - November 09, 2009 at 08:26 am

There is definitely a quantity/quality trade-off. You can get a small number of students to engage in the log activity if you have the funds to pay them for their time. But you will have some self-selection as to which type of student is more or less likely to drop out of such a study that requires so much of their time. The advantage of a survey is that you can get a LOT more people. It would be nice if both could be done, though, because the results could be a good check of reliability.

3. evbiii - November 09, 2009 at 08:41 am

I have personally found the NSSE data useful, especially when combined with other methods of analysis.

4. mcogan - November 09, 2009 at 09:47 am

Dr. Porter's comments regarding NSSE's influence on operations and policy are really an indictment of the administrators and politicians who use this one instrument as a tool to advance their agendas. Rather, the NSSE should be used as one tool of many to describe the perceptions and behaviors of students attending our institutions.

5. 11151785 - November 09, 2009 at 09:48 am

Porter's critique is dead on--if NSSE's purpose were to create a self-report of the behavior and attitudes of individual students. No one can deny that ambiguous terms like "often" have different meanings for different people, which renders meaningless the comparison of one student's "often" to that of another. That's a given.

However, unless there's some compelling reason to believe that some college's students have a systematically different (i.e., non-random) range of interpretations for the more ambiguous parts of the items, it is completely valid to state that one college's students are more or less likely, on average, to report having a certain experience than those of some larger collection of students at many institutions. Indeed, non-systematic error is one of the assumptions of probabilistic statistics, and in a survey of this size those conditions appear to be more or less met.

Frankly, I believe Porter is making a mountain out of a molehill. No one should argue that NSSE, in the absence of any other knowledge or research on one's institution, is sufficient to draw hard conclusions and set broad policies. However, it is useful as one instrument in a larger assessment toolbox.

6. jomn09 - November 09, 2009 at 10:51 am

mcogan is right on. Any survey instrument provides just one perspective on student behavior and attitudes. It must be used in concert with other data collection to be truly useful. The use of data from this instrument as "truth" by administrators and faculty is as much the problem as any survey-construction or measurement issues that NSSE may have.

7. occidentalir - November 09, 2009 at 05:14 pm

Surveys have their limitations ... which is not news, but still needs to be said. I think that's Porter's intention, judging from this quote: "Mr. Porter told his audience at the conference that he had chosen to "take a bold stand" in criticizing the survey because it plays such a major role in influencing college operations, government policy, and students' decisions about where to enroll."

IOW, I find NSSE to be just fine as a survey. But what bothers me are the efforts by some (USNews, VSA, even NSSE itself) to elevate NSSE (and the CLA and the like) to special status as public arbiters of college quality. Higher education is increasingly influenced by a survey/testing-industrial complex, just as US foreign policy has been influenced by the military-industrial complex.

8. marklarson - November 09, 2009 at 05:22 pm

What is not discussed very often, or is intentionally ignored, at many campuses -- and is most important -- is the generalizability of the NSSE data from the sample to the population. Unless you give every student an equal chance of participating in the survey (a randomly selected probability sample), you have the potential for massive but unknown sampling error. At our campus, despite many faculty objections, the data are collected from a nonrandom sample.
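The commenter's concern is easy to demonstrate in miniature. In the hypothetical simulation below (all numbers invented), students' chances of responding rise with their engagement, and the resulting sample mean overstates true engagement no matter how many responses come in:

```python
# A minimal sketch, with invented numbers, of self-selection bias: when the
# chance of responding rises with engagement, the sample mean is biased high.
import random

random.seed(1)

population = [random.gauss(50, 10) for _ in range(20000)]  # true engagement
true_mean = sum(population) / len(population)

# Each student's response probability grows with his or her engagement score:
respondents = [x for x in population if random.random() < x / 100]
sample_mean = sum(respondents) / len(respondents)

print(round(true_mean, 1))    # about 50
print(round(sample_mean, 1))  # noticeably above 50, at any sample size
```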

9. vfichera - November 09, 2009 at 07:38 pm

@ marklarson

You are indeed correct to raise as a central issue the necessity of ensuring a true random sampling. And thank you for your testimony that, indeed, administrations do not always choose that path.

I have always encountered a great deal of resistance and misunderstanding about the importance of sampling whenever I raise the issue. For example, the CHE has itself been extremely hostile toward me and, I assume, other "critics" who have noticed that, despite the assurances of the Chronicle's "Great Colleges to Work For" program that the survey is "anonymous" and drawn from a "random sample" at each participating campus, each campus's administration is in fact in complete control of the sampling and determines exactly which staff and faculty will receive the questionnaire. And part of the process is for the administration to personally "encourage" each staff member selected to participate by sending them letters to that effect!

When you know your identity can be known, and when everything is conducted on the campus's own computer system, well, as with the NSSE so with the GCTWF: "many are unlikely to report accurately."

Again, thank you for your comment. True "random sampling" (by a party without a conflict of interest) is the heart of the matter, after all.

10. jaysanderson - November 10, 2009 at 12:58 pm

How can the NSSE be deemed inaccurate? After all, its questions were validated with...(wait for it)...FOCUS GROUPS.

Well then, that eliminates all of my concerns.

11. chattahoochee - November 10, 2009 at 03:42 pm

evbiii, you touched on a point I was going to make - that is, we should not be making decisions based solely on the findings of one survey. We should use any findings, particularly from indirect measures, with other forms of analysis to make decisions regarding policy and programs in higher ed.

12. subcrea - November 10, 2009 at 04:03 pm

Applied research can have serious limitations and still offer great value. I welcome the results of Porter's upcoming massive time-diary study of student behavior, the correlation of his data with unambiguous measures of outcomes, and the report on effect sizes. However, I doubt he will be able to find funding for this on a broad scale.

Surveys such as NSSE are indeed blunt instruments, but I'd rather have a blunt instrument than no instrument. As we try to scale up assessments and make cross-institutional comparisons, the instruments we use will become increasingly blunt. So be it. If you don't like it, come up with something better that is organizationally and financially feasible. Good luck!
