November 1, 2014

The Future of Wannabe U.

How the accountability regime leads us astray


I had a premonition of a metric future the other day as I listened to Alice Mitchell, a professor at Wannabe University, give an account of why she deserved tenure. (Her name is a pseudonym; Wannabe University is a flagship university where I conducted participant observation for more than six years.) Over the years, I've listened to innumerable assistant professors assess their chances of getting tenure. Usually they have worried: Did I publish enough in the right places?

In the 1970s and 1980s, my friends and I also talked about how much we had written and where it had appeared, but we discussed why our work was important, too. Alice didn't tell me about the topics of her research; instead she listed the number of articles she had written, where they had been submitted and accepted, the reputation of the journals, the data sets she was constructing, and how many articles she could milk from each data set. The only number she forgot to supply was the impact ranking of each journal to which she had submitted an article. An impact ranking is, of course, an estimate of the influence that articles in the journal might be expected to have on a field (as measured by citations) and is not to be confused with the NFL power rankings published weekly during the football season.

Alice's analysis reminded me that colleges and universities have transformed themselves from participants in an audit culture to accomplices in an accountability regime. The term "audit culture" refers to rituals of verification that measure whether and how institutions and individuals engage in self-policing, much as a diabetic pricks her finger to learn her blood-sugar level. Besotted with rituals that are characteristic of the corporate world, higher education has inaugurated an accountability regime—a politics of surveillance, control, and market management that disguises itself as value-neutral and scientific administration. In this emerging academic world, audits have consequences (for an individual, if you don't pass the tenure audit, you lose your job), honor resides in being No. 1—or, for an institution, at the very least in the top 25 of whatever group has been identified as yours—and, to quote the sociologist Troy Duster about my research, "every individual and unit strives and claims to be well above average." At Wan U., attempts to improve anything melt and puddle into a list of numbers.

Alice had mastered one requisite of the accountability regime. She had transformed herself into an auditable commodity comprising so many measurable skills. However, she had lost track of why she was being audited. Supposedly her bosses, up the bureaucratic chain, wanted to know what kind of contributions she might make over the years. Would she devote her life to answering questions that matter? Did her students find her lectures and seminars enthralling, or at least not too boring? Was she a good citizen, a team player happy to sit through interminable committee meetings dedicated to the common good? Did she realize that there is such a phenomenon as the common good—whose characteristics are captured by the metrics on research, teaching, and service encapsulated in the latest strategic plan? Those matters, Alice seemed to feel, could be captured in metrics like submissions to academic journals.

Unhappily, Alice is not the only person in higher education who has embraced commensuration, the process of attributing meaning to measurement. Annually, other job and tenure candidates list how many articles and books they have published, how many talks they have delivered (including how many to which they were invited, and by whom), how many students they have advised and taught. Now and again, senior professors, writing letters to evaluate a candidate's suitability to get or keep a job, provide their own lists. Sometimes they, too, are so intent on constructing them that they forget to discuss a candidate's intellectual contributions. Last year, when presenting a distinguished-research award, a top Wannabe administrator noted that the recipient had published well more than 100 articles. He never said why those articles mattered.

So, too, administrators elevate student evaluations of teaching, even if they don't know what those mean. Here's how a Wan U. vice provost explained the importance of scores on Student Evaluation of Teaching Instruments: When making decisions about tenure, he related, "we might be looking at two people with similar research records, but one is said to be a good teacher and the other, not. And all we have are numbers about teaching. And we don't know what the difference is between a [summary measure of] 7.3 and a 7.7 or an 8.2 and an 8.5."

The problem is that such numbers have no meaning. They cannot indicate the quality of a student's education. Nor can the many metrics that commonly appear in academic (strategic) plans, like student credit hours per full-time-equivalent faculty member, or the percentage of classes with more than 50 students. Those productivity measures (for they are indeed productivity measures) might as well apply to the assembly-line workers who fabricate the proverbial widget, for one cannot tell what the metrics have to do with the supposed purpose of institutions of higher education—to create and transmit knowledge. That purpose includes leading students to the possibility of a fuller life, an appreciation of the world around them, and expanded horizons.

I interpret many of the metrics in strategic plans as an intention to educate, much as buying a contract at a fitness club may be understood as an intention to exercise. However, most fitness clubs make their profit by taking money from customers who come once or twice, usually just after they have signed their contracts. (If those customers worked out regularly, the club would need to hire more staff members and buy more machines; there's no profit in that.) Most strategic plans proclaim their aim to increase the quality of education as described in a mission statement. But, like the fitness club's expensive cardio machines, a significant increase in faculty research, in the quality of student experiences (including learning), in the institution's service to its state, or in its standing among its peers may cost more than a university can afford to invest or would even dream of paying.

The very term "increase" implies measurement, as in such goals as an increase in student credit hours per full-time-equivalent faculty member, four- and six-year graduation rates, the number of master's and doctoral degrees per faculty member, the number of postdoctoral students per faculty member, the number of publications per faculty member, and, of course, "external research expenditures" ($) per faculty member—outside funds that each researcher brings in and spends. Such metrics are a speedup of the academic assembly line, not an intensification or improvement of student learning. Indeed, sometimes a boost in some measures, like an increase in the number of first-year students participating in "living and learning communities," may even detract from what students learn. (Wan U.'s pre-pharmacy living-and-learning community is so competitive that students keep track of one another's grades more than they help one another study. Last year one student turned off her roommate's alarm clock so that she would miss an exam and thus no longer compete for admission to the School of Pharmacy.)

Even metrics intended to indicate what students may have learned seem to have more to do with controlling faculty members than with gauging education. Take student-outcomes assessments, meant to be evaluations of whether courses have achieved their goals. They search for fault where earlier researchers would never have dreamed of looking. When parents in the 1950s asked why Johnny couldn't read, teachers may have responded that it was Johnny's fault; after all, they had prepared detailed lesson plans. Today student-outcomes assessment does not even try to discover whether Johnny attended class; instead it produces metrics about outcomes without considering Johnny's input.

Here's how one Wan U. professor explained it to me: "It's like the students are being processed the way you process hot dogs. You take raw material, and you put it on an assembly line. You check for defective hot dogs. Almost all the hot dogs are good. When one is defective, you ask how to change the process. You don't try to figure out what went wrong with the raw materials you were assembling. Not with this kind of continuous quality control." That kind of evaluation does not even pretend to ask questions about a student's preparation, class attendance, or study habits—in short, what the student is like as a student. The analogy to the processing of hot dogs (or TVs or cars) also reminds us that administrators are assuming control of the curriculum, for managers set the standards for the assembly line. They decide how work is to be done. Student-outcomes assessment announces the existence of an audit culture that has run amok and become an accountability regime.

The emergence of an accountability regime generates a list of questions: Will nothing halt the corporate university's insistence on subordinating knowledge to money? Will changes in higher education come to resemble the accounting processes now characteristic of health care and legal practice? And what do such alterations in key institutions like colleges and universities, which help establish the underpinnings of our culture, tell us about how contemporary society is changing?

For those of us working in higher education, the key question may be quite practical: Why aren't more professors resisting, even as administrative attempts to cope with the Great Recession make the growing strength of the accountability regime ever clearer?

 

Gaye Tuchman is a professor of sociology at the University of Connecticut. She is the author of Wannabe U: Inside the Corporate University (University of Chicago Press, 2009).

Comments

1. osholes - October 18, 2010 at 07:48 am

The focus on MPUs (minimum publishable units) has been driven by the quantitative assessment process. We count papers, compute rankings and indices, and seem not to care one whit about synthesis or overview. A professor of mine back in the 70s pointed out that there are only so many times you can cut a piece of cake before you are left with nothing but crumbs. I'm glad he didn't live long enough to see his prophecy come true.

2. richardtaborgreene - October 18, 2010 at 08:34 am

The $13 trillion the world just lost in 2008 and 2009 was lost not on a bus, nor under a seat at a concert, but by HARVARD MBAs stealing from us all, all day long, for years, and paying venal voters (ads) to vote into power congressmen who passed no laws against the currently popular forms of mass theft. A plutocracy buying off a dying democracy, to be sure, as Citibank memos gloated.

THAT is the product of metrics, just as body counts by McNamara, Ford's Harvard whiz kid, lost us the Vietnam War. Harvard's Kennedy School STILL requires 800 on GRE math tests---they are UNABLE TO LEARN. Evil people in, evil people publishing in journals as faculty, evil people out on Wall Street ruining billions of lives for the sake of their personal boats and sexual service firms. A pretty picture indeed---academic publishing at work. Its results. Input human sludge, get civilizational sludge output. Not a surprise to any computer scientist.

3. impossible_exchange - October 18, 2010 at 10:37 am

@richardtaborgreene: Not evil, not even amoral, simply insular. The "Harvard world" is self-justifying (and therefore idiotic), self-reflexive in nearly all things, and very nearly mad.

4. jungianscholar - October 18, 2010 at 12:40 pm

Unfortunately, there is much truth and wisdom in richardtaborgreene's comments. Failing to emphasize the liberal arts, including ethics and values, and focusing on constant metrics of efficiency, production, publication, and corporate as well as individual performance beyond all else can lead to a single-minded obsession with profits, cost eliminations, economies of scale, and other issues that sacrifice what is good, human, and beautiful, for that which is obscene, obsessive, and destructive to our collective society.

For so many years, Harvard's Business School led the world with its much-vaunted "case study method," where professors led naive and uninformed students through the dead detritus of failed, or sometimes very successful, organizations and then, like coroners, tried to make sense of what they saw in retrospect. Today, other schools (not many) actually engage with viable, living organizations, both for-profit and community-based nonprofit, to learn how to improve processes and organizations. Unfortunately, many business schools still focus on accounting and finance, and show students how to optimize short-term profits and results to appease stockholders. Students also learn to line their own pockets and develop a callous, jaded approach to life, walking around with GIANT I's painted on their chests to remind others that their shortsighted little world is "ALL ABOUT ME!" What pathetic excuses to come from our education system!

Learning institutions and processes that focus on "measurable outcomes, learning rubrics, teacher accountability behaviors, etc." are often pathetic applications of 19th-century manufacturing standards to postmodern learning processes and institutions. This is an example of misapplying the tools of one industry to another.

Can you imagine if Professors J.R.R. Tolkien, Rupert Sheldrake, Albert Einstein, and others were subject to this kind of nonsense?

Go seek higher education in Canada or Europe!

5. dkomito - October 18, 2010 at 02:51 pm

In 1945 René Guénon published The Reign of Quantity & the Signs of the Times. In it he reflected on the likely consequences for a civilization that replaced a broad concern for standards of quality with a broad concern for standards of quantity. We are reaping the consequences of what we have planted in all corners of our civilization.

6. amnirov - October 18, 2010 at 07:52 pm

Accountability sure beats the heck out of the bigoted, collusive bullying nonsense that used to be used to decide tenure back in the 1970s and 1980s... Good riddance to that garbage. Remember who betrayed their ideals and destroyed the world--boomers.

7. tallenc - October 18, 2010 at 08:24 pm

Excellent commentary, Professor Tuchman. Thank you.

8. a_voice - October 18, 2010 at 11:43 pm

Blaming Harvard for all that is wrong with education and the financial markets sounds childish. Accusing boomers of "destroying" the world is also pretty outrageous, and it ignores human history. Let's grow up, people. It's not the end of the world.

9. betterschools - October 18, 2010 at 11:51 pm

Ms. Tuchman,

Perhaps you can tell us how it is that you or your constituents, including those who pay you to do what you do, can determine how well you are doing your job? Is that an irrelevant question to you? If not, are your credentials evidence enough? Is it sufficient that you are telling us that you are an excellent teacher? Does it add a measure of validity if you tell us that someday, though perhaps not in your lifetime, your former students will realize the powerful effects of your erudition on their lives? Should we use any comparative measures at all to manage scarce resources, or do you feel that we are obligated to support equally the great and the miserable teachers, both of whom speak as you do and both of whom claim to be in touch with the highest purposes of teaching?

It may be churlish of me to ask, but might you provide a rationale that you would be willing to apply to senior knowledge-holders in other professions, of whose services you might avail yourself? Your physician perhaps?

10. impossible_exchange - October 19, 2010 at 02:55 am

responding to jungianscholar's point: "Learning institutions and processes that focus on 'measurable outcomes, learning rubrics, teacher accountability behaviors, etc.' are often pathetic applications of 19th-century manufacturing standards to postmodern learning processes and institutions. This is an example of misapplying the tools of one industry to another."
The final point is the one that interests me.
As someone who has worked in the "real" world of hiring and firing, weekly P&L meetings, million-dollar decisions, budgets, and the micro-scale production process, and who now studies poetry,
I can assure you that I have yet to meet anyone who talks about the university as a business who has one F-ing clue about how a business model applies to the university; most of these jokers couldn't get to page one of running a business.
The same goes for the nuts who insist on "quality" controls.
How the heck do you do quality control on a social interaction?
How do you measure the growth of a mind?
Hmm? How?
You cannot.
All you folks are trying to do is bring order to the world outside of your ordering regime, that is, the human world.
How is the university a business when we are all its customers and all its products?

11. busyslinky - October 19, 2010 at 06:23 am

Everything is measurable. Designing the right measures, applying them appropriately, and then adjusting them as necessary is critical to being able to manage any organization (school, corporation, nonprofit, etc.). I think the author points out what can go wrong with measures. Focusing only on outcomes, an 'end-of-pipe' approach, is something that has gotten many organizations into trouble. The author mentions we should also be considering whether the material is good (input measures), whether the process is good (process measures), and whether real feedback even exists.

Here's the dilemma: when you start measuring all these aspects and concerns, you are contributing further to the auditing/corporatization itself.

To be able to administer effectively you do need measurement. But, when is it too much?

12. sam_michalowski - October 19, 2010 at 09:55 am

Two issues are rarely discussed in the discourse framing current accountability and assessment movements in higher education: faith and trust in educational institutions. That we have been taught to mistrust educational institutions is not surprising. We have been mistrusting and dismantling every other existent social institution for a while now (most recently sports). Like primary and secondary education before it, post-secondary education was simply an unexplored frontier for this process.

13. davi2665 - October 19, 2010 at 10:12 am

Universities are wallowing in minutiae and trivia, and there is a metric, stoplight report, spreadsheet, or "deep dive" data set for every occasion and every facet of education, regardless of how insignificant it may be. The world of academic hospitals and medicine is even more inundated with trivia. These institutions must collect literally millions of data points to satisfy the ever-growing appetite for "metrics." Most of these metrics have little or nothing to do with excellence of patient care; indeed, mediocre hospitals can get excellent "metrics" ratings by jumping through the appropriate numbers hoops. With attention focused on garbage data and statistics, the fundamental issues that should be tracked and evaluated are lost in the sea of numbers. I agree with comments above that this is not accidental; it is deliberate obfuscation to distract people from looking at the pillaging and plundering of the system (banking, housing, social programs, etc.) carried out by the pathetic products of MBA and educational systems. In all of these "numbers," the outdated and seldom-referenced traits of integrity, honesty, and transparency are totally lost.

14. quiero_leer - October 19, 2010 at 10:34 am

So much of what passes for quantitative assessment is based on poorly framed, ill-conceived qualitative measures. Yes, I'm talking about the ubiquitous student evaluation. An old silverback at my uni told us about the institution of this bogus measure on our campus. Some four decades back, an administrative type was touting the student evaluation as the wave of the future. Another silverback, now gone to his eternal reward, asked, "Couldn't this be misused? Would this become a disciplinary tool or a means of determining whether a faculty member should get a raise or not?" Naturally, the administrative type assured him that such would never happen. Fast-forward to the present, where graduate teaching assistants carry the load and cower in fear of the entitled cabal, and where the sycophants and bean-counters soar past those who would hold their young charges accountable for doing the assigned work. Faculty who have earned tenure by dint of hard work and brilliant scholarship scramble to survive, while being forced to hand over class time to ever-more inane "programs" designed by ever-more incompetent "edjookaterz" worming their way into administrative positions. Developmental delay becomes profitable as our young charges are disempowered by a dazzling spectrum of programs ironically designed to "empower" them. We reap what we sow.

15. panacea - October 19, 2010 at 10:53 am

@busyslinky: re "everything is measurable." What utter nonsense. There are some things you simply cannot measure, and these are some of the most important things in life.

I refer you to the scene in "Dead Poets Society" where Robin Williams destroys the very idea that poetry is measurable. That scene hits on a very important point. The things that make us human, that make our society wondrous, those explorations of "us" as humans are not measurable in any sense of the term, and to attempt to do so diminishes the effort of the exploration. The very attempt to measure is what makes the effort doomed to failure.

Not everything in this world needs to function on a business model, nor should it. That is the point that we are increasingly failing to see in how we design and operate our systems in the United States.

Some systems should be inefficient . . . because that's what gives the freedom of inspiration the room to blossom.

16. betterschools - October 19, 2010 at 11:38 am

@panacea,

Your comment shows a sophomoric grasp of measurement philosophy, theory, and method. This lack of understanding is unprofessional for someone teaching in 2010, in the wake of 50 years of applicable measurement science.

While I agree that there is much more to the world than can be subsumed by a "business model," most of us think it is reasonable to expect that we can determine whether or not our students learned what we think we taught.

In this regard, if you are suggesting that what you teach is not measurable (you didn't state this explicitly, so I may be taking unwarranted liberties here), how is it that you (a) determined that your students learned it, so you could assess whether they passed or failed your course, and (b) assuming that you didn't blindly award the same grade to all of them, measured the differences in levels of learning among your students? Do you think the processes you applied under 'a' and 'b' are valid? If so, and if they are in fact valid, then you measured what you tell us cannot be measured. From that point, all one needs to do is repeat your process. Do you get this logic?

17. gahnett - October 19, 2010 at 12:24 pm

Seems to me that these responses and the article both support the idea that there are a lot more similarities between the NFL power rankings and impact ratings than the author assumes...

18. sibyl - October 19, 2010 at 01:42 pm

The faculty don't resist because we are complicit. We have successfully avoided any attempt to make learning visible in ways that are meaningful to us -- including ways that can account for student effort -- and we have happily traded away a focus on education for research support, reduced teaching loads, and tenure. (Contingent faculty, of course, are striving desperately for even the slightest opportunity to make the same bargain.)

If we wish to overturn the accountability regime we will have to present an alternative. Simply resisting will do no good, as the people who underwrite our salaries -- legislatures who provide direct appropriations and aid to students, families who eagerly grasp at US News and other rankings, and even foundations -- won't tolerate it.

19. goxewu - October 19, 2010 at 03:18 pm

I can't quite figure out with numerical precision if W. S. Merwin is a better poet than Sharon Olds, if Ms. Olds is better than Natasha Trethewey, or if Ms. Trethewey is better than John Ashbery. Can somebody with 50 years of experience in measurement science work this out in a quantitative metric that even a Dean or state legislator can understand?

Money's a little short, so I can't come to a retreat in Idaho to get the method...oops! "methodology," so it'll have to be e-mailable. And I hope that a twenty-five-buck consulting fee will suffice.

20. goxewu - October 19, 2010 at 04:01 pm

I can't quite figure out who's learned more about the art of poetry: John Ashbery, Sharon Olds, Natasha Trethewey, or W.S. Merwin. Could somebody with 50 years of experience in measurement science help me quantify this in a metric of the sort that state legislators love?

Money's short in the humanities, so I can't afford to come to a ski retreat in Idaho to obtain the method...oops! "methodology." So, could it be e-mailed? And is a twenty-five-buck consultant's fee sufficient?

21. goxewu - October 19, 2010 at 04:33 pm

Ooops! Sorry about the redundant post.

22. cfox53 - October 19, 2010 at 04:49 pm

If what we do as faculty, that is, teaching, is important, shouldn't we want to know if it's consequential?

23. betterschools - October 19, 2010 at 05:58 pm

@goxewu,

Too broke to pay attention? You need to if you're going to keep up in this discussion.

First, references to the legislature are yours, not mine. I don't care much about them in this context. To quote my post above, my focus goes to the fact that ". . . most of us think it is reasonable to expect that we can determine whether or not our students learned what we think we taught."

Second, yes, in fact, most any graduate-level measurement scientist can address the simple assessment issue you raised above (either version). That you think you have cleverly posed an imponderable reflects on your lack of understanding and little else. Once you see how easy it is, I am pretty sure you would agree (with a bit of a sheepish "Oh, I hadn't thought of that" on your lips).

Should I attempt to teach you two or three valid ways to ascertain if your students can engage critically on the topic you mention? I think not. You have a bad attitude. Perhaps some other measurement scientist will take pity, but I'm feeling less than charitable today. I've reached my daily quota for tolerating intellectual red-necks.

24. pierce_library40 - October 19, 2010 at 06:34 pm

I should think that one should begin by defining "learning."

If you can't define what learning is, or whether it exists in a particular situation, then you have to answer the question of why someone should pay you to produce it.

Otherwise, you put yourself in the position of the Emperor's tailors, producing a fabric that only the most refined can see, and, if you can't see it, well then, you're simply not refined enough.

25. goxewu - October 20, 2010 at 07:40 am

Re #23:

Sure, betterschools may "teach me" (or just mention) "two or three ways [my] students can engage critically on the topic." ("Engage critically on"--that kind of patois would get some red pencil in my class, but never mind.) That is, if betterschools can deign to deal with "intellectual rednecks" after all. (I'd also red-pencil the hyphen in "red-necks," and perhaps advise betterschools that it is a derogatory term for rural white Southerners and a synonym for "cracker.")

BTW, as the references to the legislature were indeed mine (I was addressing the general issue of legislature-friendly metrics; it isn't ALL about betterschools, you know) and not mentioned by betterschools, so did #20 not mention betterschools. But, as the old joke says, "If the foo s**ts..."

26. betterschools - October 21, 2010 at 02:48 pm

I'm feeling more generous today, goxewu. Integrate what you will find in a half-dozen measurement textbooks related to the constructs "consensual validation," "expert panels," "unobtrusive measures," "ranking & rating rubrics," and discriminant and convergent validity as these constructs apply to assessing social particulars. You won't be able to Google this any more than you can Google do-it-yourself brain surgery. It will take you a while to work through the graduate textbooks, but I assure you it will be a worthwhile venture. You might begin with the old Evaluation Research Handbooks and work forward from there. As you work through these sources, you will see many rich and, from your own intellectual and professional perspective, satisfying ways to assess the kinds of growth in understanding that you pose as non-assessable (you never did say how you manage to assess these things when you determine grades).

Re: your gratuitous critique of my expressions. I count at least a half-dozen errors in your posts, including not knowing how to express an ellipse. It would not have occurred to me to mention them. Blogs define a different language game than formal writing and its grammar. You seem like a red-neck to me because of the anti-intellectual pride you take in being ignorant of the measurement sciences.

27. goxewu - October 21, 2010 at 03:24 pm

Re #26:

1. betterschools's first paragraph rather fails to deliver the goods. He said in #23, "Should I attempt to teach you two or three valid ways to ascertain if your students can engage critically on the topic you mention?" I answered sure. But what I got is, "You might begin with the old Evaluation Research Handbooks and work forward from there." That's only one, and it's certainly not specific enough--merely pointing me to some handbooks--to qualify as one of "two or three valid ways to ascertain if [my] students can engage critically on the topic [I] mention." If that's what I'd get for a consultation fee, I'm cancelling my ski package in Idaho.

2. Three dots are an ellipse in my style book. But my critique of betterschools's expressions wasn't "gratuitous." I just thought that if betterschools wanted to use a racist epithet--the flip side of the N-word--he might want to spell it correctly. Especially since he's apparently going to keep on using it.

28. betterschools - October 21, 2010 at 04:45 pm

. . . ...

29. goxewu - October 22, 2010 at 09:36 am

Morse Code for "formatting"?

I wrote the comment, but didn't typeset it. Anyway, I was brought up on two spaces after a full stop (period), but every publication for which I've ever written has switched to one. (I'd mention a few, but that'd be bragging.) And those same publications accept ..., with whatever spacing, as an ellipse.

Some trade terms that sound impressive, but still nothing concrete in the how-to vein.

BTW, "consensual validation" yields 28,300 Google results, "expert panels" 164,000, "unobtrusive measures" 48,900, "discriminant" 36,400,000, and "convergent validity" 223,000. Even "do-it-yourself brain surgergy" elicits 32,100. Only the nicely alliterated "ranking & rating rubrics" comes up empty. That one must be proprietary knowledge out there in God's country.
