July 31, 2014

Student-Survey Results: Too Useful to Keep Private

Although the conventional view in the United States is that elementary and secondary schools have serious quality problems while higher education is exemplary, the evidence suggests that such problems exist among colleges as well. As the report from the Secretary of Education's Commission on the Future of Higher Education, better known as the Spellings Commission, noted several years ago, "There are … disturbing signs that many students who do earn degrees have not actually mastered the reading, writing, and thinking skills we expect of college graduates. Over the past decade, literacy among college graduates has actually declined."

On a purely anecdotal level, the proportion of college graduates I have interviewed for employment in the past decade whom I'd consider fully qualified—able, for example, to think and write at a competent level—is astoundingly low, and that includes many graduates of so-called elite liberal-arts colleges. Such trends are especially troubling given the increased importance of higher education to the nation. In our knowledge-driven global economy, high-quality higher education is an important driver of economic competitiveness. Improving it is something we all have a stake in.

The challenges that I've described have many causes, including the reduced levels of government support for colleges. But a major contributing factor is that the customers of higher education—students, parents, and employers—have few true measures of quality on which to rely. Is a Harvard education really better than that from a typical flagship state university, or does Harvard just benefit from being able to enroll better students? Without measures of value added in higher education, that's difficult, if not impossible, to determine. And without the ability to measure an institution's education quality, customers won't be able to make the best choices.

Developing such measures of outcome or of value added is itself difficult, but an existing intermediate measure of quality could provide customers with significantly better information. The National Survey of Student Engagement, begun with support from the Pew Charitable Trusts, is designed to obtain, on an annual basis, information from more than 1,300 colleges about student participation in programs and activities that those institutions offer for learning and personal development. The latest version was just released, and the results provide an estimate of how undergraduates spend their time, what they gain from attending college, and their views about the quality of teaching that they've received. Even though the survey doesn't measure education outcomes, it measures the activities and practices that are associated with those outcomes. Indeed, the survey's own materials state, "Survey items on the National Survey of Student Engagement represent empirically confirmed 'good practices' in undergraduate education. That is, they reflect behaviors by students and institutions that are associated with desired outcomes of college."

Yet what is remarkable about the survey is that participating institutions generally do not release the results, so parents and students cannot compare one college's performance with another's. The administrators of the survey have agreements with participating institutions that prevent the reporting of the results for individual colleges. Thus, while colleges are able to see how they rank relative to all others involved in the survey, the public is not. Although some colleges post the results on their Web sites, and 450 release data for a USA Today site, that is not the same as aggregating all the results in one place.

Requiring all colleges to make such information public would pressure them to improve their undergraduate teaching. It would empower prospective students and their parents with solid information about colleges' educational quality and help them make better choices. To make that happen, the federal government should simply require that any institution receiving federal support—Pell Grants, student loans, National Science Foundation grants, and so on—make its results public on the Web site of the National Survey of Student Engagement in an open, interactive way.

To be sure, many colleges will complain that requiring such information to be made public will lead to all sorts of problems. They will claim that colleges won't participate. But if they want federal funds, they will probably participate. They will say that they already use the information internally as a benchmark to measure themselves against other institutions, so that making it public is not necessary. But that would be like the airline industry's saying that it doesn't need to publish on-time departure and arrival data, and that as long as carriers know how they compare with their competitors, they will improve. After all, what institution in any industry wants information made public about its performance?

Making the survey data public would certainly make life more challenging for faculty members and administrators at low-performing institutions or at those whose relative scores are going down. But competition and accountability drive improvement in performance, whether in the airline industry or in higher education.

Indeed, a growing number of organizations in our economy now have to live with customer-performance measures. It's time higher education did the same. Students, parents, employers, and society as a whole will be better off for it.

Robert D. Atkinson is president of the Information Technology and Innovation Foundation, a nonpartisan research and educational institute.

Comments

1. eacowan - November 17, 2009 at 08:28 am

This emphasis upon "student survey results" and "measures of outcome or of value added" shows the extent to which academe has lost its way amidst the prevailing mania for quantification. And the ultimate delusion is the notion that students are "customers of higher education" who are able to "evaluate" their professors and measure the value of goods received.

Nobody, it seems, ever mentions the fact that students, far from being "customers," are actually probationers who either learn the material presented to them, or not. Those who learn, pass; and those who do not learn, fail. That is all.

I have seen syllabi from various sources that are full of references to "outcomes," as though the material to be learned were not evident from the list of subjects and assignments included in the syllabus. This fixation on "outcomes" ranks with the kind of puffery found in most universities' "mission statements"...

2. gibbonst - November 17, 2009 at 08:39 am

I really don't believe that academe has lost its way. On the contrary, outcomes-based learning is an important part of any education. The folks who advocate these surveys are doing great work, and academe needs to recognize the importance of meeting its customers' needs ... parents and students deserve it!

3. grifflee - November 17, 2009 at 09:03 am

The problem with using NSSE measures is that they are indirect measures of learning: they measure characteristics associated with learning, but not learning per se. If such measures are reported publicly and if consequences are attached to results (funding, more applicants, etc.) institutions will have incentives to improve these associated characteristics rather than learning itself. For example, professors could be motivated to interact with students outside of class more frequently. That interaction may or may not lead to increased learning.

The entire accountability movement is weakened by the failure to develop measures of student learning that are as comprehensive, complex, and rich as the learning itself. Until such measures are widely available, incentives will always reward superficial performances and thereby direct efforts away from the real learning that is so difficult to achieve.

Many of us are working on better direct measures of learning and need only a little more time and funding. Please stay tuned.

Merilee Griffin

4. esselan - November 17, 2009 at 10:39 am

I have no problem making my college's student survey results public -- however, that means they will require contextualization. Enough explanatory information will have to be included for students and parents who have little or no background in statistics to understand the data they are seeing. If comparative data from other institutions are included, then those will have to be contextualized further so that users can understand how sampling factors play into the comparison (e.g., I can show them how my small, arts-focused institution compares with other four-year institutions -- but I also have to explain the effect that, say, the comparative lack of interest in sports has on the results). These aren't excuses -- I share these data on my campus all the time with students, faculty, and staff, and these are the questions they always ask me when they are introduced to the data, the things they need to understand.

It's not impossible, but it's also not as straightforward as Mr. Atkinson would have us believe.

5. intered - November 17, 2009 at 10:44 am

One part of my work is designing student-focused assessments and instruments that produce meaningful and, especially, actionable findings. Thus, I confess to a strong bias -- some may say a special interest -- based on 25 years of work as a measurement scientist. That said, I would share a few observations with respect to this topic:

1. The implication of the first comment above, and of countless reactions we hear from faculties, is one of fundamental disrespect for the character of students. Many instructors, especially the old-guard Mandarins, do not believe their students are sufficiently competent to pass objective judgment on anything (one wonders why they take money for teaching them). They describe their students as children who cannot distinguish good teaching from bad teaching, and cannot separate either from their childish preoccupation with their grade. How arrogant to think that one is the only person capable of sound, independent judgment! And how false.

2. The facts suggest that we would benefit from gathering more real-time metrics from students as they traverse the 19th-century maze that we call the higher-education process. A few of many reasons are:

a. Students can reliably distinguish good from bad teaching (conveying meaningful content; managing time effectively; providing adequate and constructive feedback on performance; assisting learners through difficult topics) but only if their judgment is secured at the right time with well-designed and valid instruments. Most Likert-scaled end-of-course surveys measure the wrong thing the wrong way.

b. Students' comments, properly taxonomized, aggregated and profiled, provide the highest quality information available to a department head for establishing mentoring and best-practice sharing programs to improve the quality of instruction. This option is only available to enlightened departments that believe we are neither divinely inspired nor infallible, and that we can improve what we do by applying a proper measure of continuous feedback.

c. Students know better than anyone when they are at risk and can provide invaluable information for retention management -- again, only if critical information is gathered the right way at the right time.

3. Contrary to the empirically groundless myth circulated by some faculty:

a. The R² between grades and instructor evaluations is low (0.22 or so, based on 18,000 cases in one study); a brief sketch of the computation appears after this list.

b. Students' comments on well-designed end-of-course surveys focus almost exclusively on Herzberg's motivating factors (quality of instruction, quality of content, quality of learning environment), where they are overwhelmingly positive (79%, based on 2.75 million comments spanning a decade). Negative comments tend to focus on specific behavioral deficiencies identified in the learning environment.

c. Students do generally take evaluations seriously and put their best (if rapidly deployed) judgment into them; when they fail to do this, it is almost always because the instructor failed to set expectations properly or, himself, conveyed disrespect for the process.
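
A minimal sketch of how an R² figure like the one in (a) can be computed from paired observations. The grade and rating data below are synthetic and purely illustrative -- they are not the 18,000 cases from the study cited above:

```python
# Illustrative only: synthetic grade/evaluation pairs, not real survey data.
import numpy as np

rng = np.random.default_rng(0)
n = 18_000  # sample size borrowed from the figure cited above, for flavor

# Hypothetical paired observations: each student's course grade (0-4 GPA
# scale) and that student's overall rating of the instructor (1-5 Likert).
grades = np.clip(rng.normal(3.0, 0.7, size=n), 0.0, 4.0)
ratings = np.clip(
    3.5 + 0.8 * (grades - 3.0) + rng.normal(0.0, 1.0, size=n), 1.0, 5.0
)

# Pearson correlation r between the two columns; R^2 = r**2 is the share of
# variance in ratings that a linear fit on grades would explain.
r = np.corrcoef(grades, ratings)[0, 1]
print(f"r = {r:.2f}, R^2 = {r * r:.2f}")
```

An R² near 0.22 would mean grades account for only about a fifth of the variation in evaluations, which is the substance of the point being made.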

Much more can be said, but the generalization is that higher education stands only to improve its processes by taking students more seriously and attaching more and better metrics to every facet of their relations with the institution (educational process, service, learning outcomes, learning impact, and so on).

Robert W. Tucker
President
InterEd, Inc.

6. smcdonald999 - November 17, 2009 at 01:14 pm

I'm looking forward to the day when "probationers" can start putting instructors on probation for not engaging students effectively. The droning professor in the lecture hall is a tired and inefficient model for imparting knowledge, no matter how learned the orator. The sooner we slay that dinosaur, along with the system of tenure that created it, the better.

Wake up, academia: you are the problem!

7. klblk - November 17, 2009 at 01:36 pm

The fundamental problem with this whole approach is that it ignores everything we've learned in the past 30 years about quality.

Approaches such as this (surveying students) for measuring and documenting quality -- across a wide range of manufacturing and service systems -- have been shown to decrease quality outcomes, ceteris paribus.

If the problem identified above is student achievement, then measuring student satisfaction tells us nothing about higher education's contribution to individual student improvement -- except student satisfaction.

8. intered - November 17, 2009 at 02:15 pm

To KLBLK and Others:

The reason virtually all of us own automobiles of Japanese engineering lineage, if not manufacture, is that the Japanese auto industry embraced process assessment while our auto industry was still measuring quality by waiting until it had outcomes -- too late for formative corrections. (Remember those "reliable," smooth-running, efficient U.S. autos of the '70s through the '90s?)

The point here is that we should collect, interpret, and act upon good metrics at each and every juncture, including doing the hard work it takes to operationalize students' goals at the time of matriculation so we can conduct goal-fulfillment assessment upon graduation and beyond.

Separately, the correlation between sustained valid measures of LSAT (learner satisfaction) and positive learning outcomes is high enough to justify rigorous LSAT assessment solely on the basis of cost-effectiveness and ROI. The correlation between LSAT and downstream impact (Level 3 -- the stuff we should all be focused on) is even higher.

Fun stuff, but the admission ticket is steep.

Robert W. Tucker
President
InterEd, Inc.

9. redweather - November 18, 2009 at 09:05 am

Mr. Tucker, I might buy into some of your claims but for the fact that the majority of students I see, day in and day out, semester after semester, are increasingly interested in one thing and one thing only: their grade. They view learning as little more than a quaint notion that "Mandarins" like me hang on to only because we are living in the past. Not receiving the grade they want is prima facie evidence that I am a bad teacher. It can't have anything to do with the fact that they didn't do the assigned reading (Quaint Mandarinesque Notion #1), or that the answers they wrote on exams were incomplete and sometimes incomprehensible (Quaint Mandarinesque Notion #2), or that the papers they turned in, replete with basic errors in grammar, sentence structure, and spelling, were also largely unintelligible (Quaint Mandarinesque Notion #3). If you can develop an assessment tool that takes all of the above into account while also measuring the many faults and failings that faculty bring with them into the classroom, then, to quote Walt Whitman, "I stop somewhere waiting for you."

10. klblk - November 18, 2009 at 09:48 am

Dear Robert W. Tucker,

Oddly enough, although I teach operations management in a top-20 business school and have lectured, researched, and published on quality management, I forget the part where the Japanese integrated "satisfaction of the part being processed or unit being assembled" into total quality management (or constituent practices such as SPC, the five whys, DFA/DFM, Pareto analysis, Ishikawa diagrams, Taguchi methods, etc.).

Maybe I've been wrong all along about what is meant by "the voice of the process"?

11. csmomaha - November 18, 2009 at 11:20 am

redweather addresses some important issues. The majority of students want to get by (and get an "A") while doing as little of the work as possible. As stated in the first paragraph of the essay, college graduates are increasingly less literate -- I have seniors who can't write a well-thought-out paragraph, let alone one free of spelling and grammar errors. How can this be the fault of the professor? Four years (or more) of college cannot remediate an inadequate basic education prior to college; I think our error is in continuing to tolerate and pass along these students. Anyone who can't read and write shouldn't be allowed to graduate from high school, let alone college. Alas, the system IS broken, but there are no easy solutions. If we admitted only students who are really prepared for college and willing to work once they are there, we would have far fewer colleges -- many of them would go broke for lack of students.

12. optimysticynic - November 24, 2009 at 08:31 pm

I find it interesting that Mr. Tucker carries the notion of student as customer to the extreme of thinking that what students say they want at the time of matriculation is, by definition, what we should be giving them. Most students have no clue why they are in college except for two reasons: their parents expect it, and they think they will make more money with a college degree. Meanwhile, we are tasked with teaching them to think critically, be engaged and informed citizens, and meet umpteen other mandates that appear and disappear on a regular basis. Knowing what students should learn and how they should learn it is EXACTLY our job -- not the students'. What Tucker is describing is graduate school or professional training: students come in wanting specific skill development, and we contract to give them exactly that. Freshmen coming to college are not in a position to be sophisticated about the specifics of their goals, with the exception of those who are already educated (the few).

Try asking a group of freshmen at the typical four-year public school of average rank why they are there, and see what they say and how it accords with your institution's mission statement and mandates. It's small wonder we have trouble communicating and engendering commitment; we share almost no goals.

I must also comment on the assumption that we are all talking about the same group of people. "Students" vary across and even within institutions so widely that generalizations will forever be in dispute.

13. intered - January 28, 2010 at 11:47 am

This is an old post but, looking back, I wonder if differences in our implicit visions of the student body account for some of our disagreements. I have no difficulty envisioning uninterested, unmotivated students for whom the grade is the only visible outcome. I have had these students as well, but they were always a distinct minority. I would be doing most students a disservice to paint them with this brush.

More important, are we all remembering that half of all college students are adults, with perhaps 80 percent of them working adults who have adult work and family lives? While the abilities of these adult students are distributed more or less the same as their younger counterparts', the clarity of purpose and motivations of these students are different. Better. They may not be showing up to satiate a burning theoretical passion (few are), but they show up to learn something, apply it to their lives, and improve their lot in life.

These adult students, especially the working adults who have developed adult engagements in the world, are good judges of teaching efficacy. To be specific, we have more than 8 million comments in our database, extracted and taxonomized over a period of 20 years from semi-structured, open-ended comments on end-of-course surveys at more than 50 universities. The first thing one learns when perusing these data is that adult students are critical consumers in the most positive sense: 80-90% of their comments are about the immediate learning environment (instruction, content, etc.); less than 10% of comments focus on hygiene factors (comfort, vending machines, etc.); and of comments made about faculty, 75-80% are positive, recognizing the contributions of the instructor to their goals. Even more important, when comments about instruction are negative, they are focused and actionable in just the way we would want them to be (e.g., not enough feedback on my work, missed an office appointment, etc.).
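
For readers curious what "taxonomizing" free-text comments might look like mechanically, here is a toy sketch of keyword-based bucketing. The category names, keywords, and sample comments are invented for illustration; a production taxonomy would be far richer than this.

```python
# Toy sketch: bucket free-text survey comments into invented categories and
# report the share of comments falling in each, like the percentages above.
from collections import Counter

TAXONOMY = {
    "instruction": ("feedback", "lecture", "explain", "instructor"),
    "content": ("material", "textbook", "topics", "syllabus"),
    "hygiene": ("parking", "vending", "temperature", "wifi"),
}

def categorize(comment: str) -> str:
    """Return the first category whose keywords appear in the comment."""
    text = comment.lower()
    for category, keywords in TAXONOMY.items():
        if any(word in text for word in keywords):
            return category
    return "other"

comments = [
    "Not enough feedback on my work.",
    "The textbook covered the topics well.",
    "The vending machines were always empty.",
]
counts = Counter(categorize(c) for c in comments)
for category, count in counts.items():
    print(f"{category}: {count / len(comments):.0%}")
```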

Perhaps better distinctions about the large differences in student audiences will lead to better approaches in interpreting and acting on student feedback.

Robert W Tucker
President
InterEd, Inc.
www.InterEd.com
