
Assessment in higher education: On the train or under it

October 2, 2011, 11:53 pm

Collecting Data

Would that collecting data were as fun as collecting butterflies...

For those of you who are responsible for collecting data for outside accrediting bodies, I would like to invite you to join in on a moment of bitching whining reflection.  For those of you faculty and administrators who don’t have primary responsibility for evaluating our work with students, our new programs, our curricula, and our productivity, you might as well read the following as an educational piece, because you won’t be able to avoid the assessment process much longer.

We know the public has long had a distrust of higher education, thinking we were mainly pointy-headed, poorly-dressed, over-educated, socially-awkward types who earn too much money to do too little work: teaching students and conducting research. This trend has worsened in recent years, for a variety of reasons (e.g., the Tea Party movement, the managerial trend in higher education, the advance of for-profit models of higher education and their abuse of federal funds, the increase in college tuition, the difficulty graduates have finding employment, etc.), and we are being asked for more and more proof of our accomplishments.

For those who aren’t in the loop on the many requests and requirements to continually collect and analyze data, let me tell you who wants to know…

  • University accrediting bodies now expect ongoing evaluation, feedback, and innovation, with clear demonstrations of how your findings inform program changes, in all departments, disciplines, and programs in the university.
  • Accrediting bodies for professional schools have long expected programs to meet specific, defined standards that they created, standards that relate to student learning outcomes, program staffing and design, curricula, etc. They, too, are now expecting a clearly defined set of measurable outcomes that are being assessed and used to make changes.
  • State legislators are asking for proof that the money is being well spent, so they want to know more about professors’ class sizes, teaching loads, research funding, publication rates, and service. If you think this trend is limited to Florida and Texas, you have another thing coming (to a state near you)!
  • Parents, students, and the general public are also now asking for proof that graduates with our degrees can get jobs, pass licensure exams, get into grad schools, etc.

In my experience, the burden of collecting these data has yet to be properly accounted for in university budgets or allocation of effort. If faculty and administrators are being asked to gather and analyze data for all of these diverse constituencies, and to develop mechanisms for providing feedback to administrators, faculty, staff, and students so that they can design changes that can then be evaluated, and so on, shouldn’t there be some recognition of that in terms of time and money? Instead, these required data collection/analysis/evaluation/revision/collection cycles become unfunded mandates that fall on the heads of all members of the university. Worse yet, not everyone has been trained in this kind of program evaluation/learning outcome research, so many people within the university are unprepared to meet this mandate.

Chairs of departments now have to work with faculty to identify measurable program outcomes and student learning outcomes each year, setting benchmarks towards meeting the goals and achieving the outcomes, and then they must report back on what they are doing to address any goals/outcomes that were not achieved. But even with these overarching plans in place, assessment isn’t a piece of cake. Good assessment takes the development of reliable tools that best measure your goals/outcomes; the establishment of processes for data collection, analysis, and feedback; training of those who will use the tools; preparation of those who will participate in the processes; the creation of databases for all of these data over time; and the identification of researchers who are trained and have time to do this work.

What about the challenge facing faculty who have to begin doing a different kind of assessment of their students’ learning and their own teaching? Academics have long been interested in gathering data on their pedagogy, as evidenced by the number of journals with titles like “Journal of Teaching in X,” “X Education,” and the like. But in most disciplines, save Education and a few other professional programs, scholarship on teaching isn’t very well respected. In fact, it is fine to write about innovations in one’s teaching and their effectiveness, but only as a secondary or tertiary research focus. If junior faculty members in most disciplines at an R1 were to make the scholarship of college pedagogy in the discipline the focus of their research, they would likely find themselves on the losing end of their tenure decisions. So research on one’s teaching is an extra burden, on top of one’s own research agenda.

Yet, despite the fact that publishing on teaching isn’t well respected, most professors now preparing tenure materials are being asked to prove that their teaching is effective and that students in their classes are learning. Instead of just turning in teaching evaluations, we are being asked for portfolios that detail faculty effort, student learning, and pedagogical sophistication. This is true despite the fact that many faculty had no formal training in pedagogical techniques, theories, and research in their graduate programs. Doctoral program directors in every discipline had better create courses on pedagogy in the discipline and on assessing student learning outcomes. Seriously. English, History, Political Science, Biology, Art History, Nursing… No matter what your regular focus or methodology might be–yes, lit crit and rhetorical analysis types, I am talking to you–we faculty need to have skills in quantitative and qualitative research methods to be able to prove our worth.

For now, though, we mid-level administrators have to prepare our faculty and get them on board with the data collection process. Faculty need to be knowledgeable about the internal and external concerns we are addressing with the data, updated on different approaches to assessment, and prepared to analyze their students’ learning. Individuals in central administration can help with this process by mobilizing (and subsidizing) folks in the university with the expertise to do this kind of assessment. Give buyouts to a few higher education researchers, and others who routinely do pedagogical research, to train other faculty. Perhaps we can also get creative about using data collection and assessment as a learning opportunity: We could use students in research methods, anthropology, sociology, and statistics classes to do data collection and analysis. Nothing says learning more than students proving their learning by assessing the learning of their peers.

I don’t think there is any going back on this trend of data collection and reporting. That argument is done. What we can do is make sure that the people in higher education, especially faculty, get to spell out our own goals and measures for assessment as much as possible. We also need to make sure that we use a variety of media strategies to contextualize the findings and shape the public discussion of the outcomes. Because one thing social scientists know about data is that it can be manipulated and misused, shaped by the agenda of the ones who are reporting it. We need to control this assessment train and ride it into the next decade before it runs off in a direction we don’t like… or just runs us over.

Not where any of us want to be...
