August 27, 2014

We Must Stop the Avalanche of Low-Quality Research


Michael Glenwood for The Chronicle


Everybody agrees that scientific research is indispensable to the nation's health, prosperity, and security. In the many discussions of the value of research, however, one rarely hears any mention of how much publication of the results is best. Indeed, for all the regret one hears in these hard times about the financing of research, we shouldn't forget that the last few decades have seen astounding growth in the sheer output of research findings and conclusions. Just consider the raw increase in the number of journals. Using Ulrich's Periodicals Directory, Michael Mabe shows that the number of "refereed academic/scholarly" publications grows at a rate of 3.26 percent per year (i.e., doubles about every 20 years). The main cause: the growth in the number of researchers.
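A quick check of that arithmetic, working only from the growth rate above: a constant annual growth rate r implies a doubling time of

t = ln 2 / ln(1 + r) = ln 2 / ln(1.0326) ≈ 21.6 years,

which is consistent with the "doubles about every 20 years" figure.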

Many people regard this upsurge as a sign of health. They emphasize the remarkable discoveries and breakthroughs of scientific research over the years; they note that in the Times Higher Education's ranking of research universities around the world, campuses in the United States fill six of the top 10 spots. More published output means more discovery, more knowledge, an ever-improving enterprise.

If only that were true.

While brilliant and progressive research continues apace here and there, the amount of redundant, inconsequential, and outright poor research has swelled in recent decades, filling countless pages in journals and monographs. Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further. In a 2009 article in Online Information Review, Péter Jacsó found that 40.6 percent of the articles published in the top science and social-science journals (the figures do not include the humanities) were cited in the period 2002 to 2006.

As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole. Not only does the uncited work itself require years of field, library, or laboratory research; it also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.

Among the primary effects:

Too much publication raises the refereeing load on leading practitioners—often beyond their capacity to cope. Recognized figures are besieged by journal and press editors who need authoritative judgments to take to their editorial boards. Foundations and government agencies need more and more people to serve on panels to review grant applications whose cumulative page counts keep rising. Departments need distinguished figures in a field to evaluate candidates for promotion whose research files have likewise swelled.

The productivity climate raises the demand on younger researchers. Once one graduate student in the sciences publishes three first-author papers before filing a dissertation, the bar rises for all the other graduate students.

The pace of publication accelerates, encouraging projects that don't require extensive, time-consuming inquiry and evidence gathering. For example, instead of efficiently combining multiple results into one paper, professors often put all their students' names on multiple papers, each of which contains part of the findings of just one of the students. One famous physicist has some 450 articles using such a strategy.

In addition, as more and more journals are initiated, especially the many new "international" journals created to serve the rapidly increasing number of English-language articles produced by academics in China, India, and Eastern Europe, libraries struggle to pay the notoriously high subscription costs. The financial strain has reached a critical point. From 1978 to 2001, libraries at the University of California at Los Angeles, for example, saw their subscription costs alone climb by 1,300 percent.

The amount of material one must read to conduct a reasonable review of a topic keeps growing. Younger scholars can't ignore any of it—they never know when a reviewer or an interviewer might have written one of the pieces they disregarded—and so they waste precious months reviewing a pool of articles that may lead nowhere.

Finally, the output of hard copy, not only print journals but also articles in electronic format downloaded and printed, requires enormous amounts of paper, energy, and space to produce, transport, handle, and store—an environmentally irresponsible practice.

Let us go on.

Experts asked to evaluate manuscripts, results, and promotion files give them less-careful scrutiny or pass the burden along to other, less-competent peers. We all know busy professors who ask Ph.D. students to do their reviewing for them. Questionable work finds its way more easily through the review process and enters into the domain of knowledge. Because of the accelerated pace, the impression spreads that anything more than a few years old is obsolete. Older literature isn't properly appreciated, or is needlessly rehashed in a newer, publishable version. Aspiring researchers are turned into publish-or-perish entrepreneurs, often becoming more or less cynical about the higher ideals of the pursuit of knowledge. They fashion pathways to speedier publication, cutting corners on methodology and turning to politicking and fawning strategies for acceptance.

Such outcomes run squarely against the goals of scientific inquiry. The surest guarantee of integrity, peer review, falls under a debilitating crush of findings, for peer review can handle only so much material without breaking down. More isn't better. At some point, quality gives way to quantity.

Academic publication has passed that point in most, if not all, disciplines—in some fields by a long shot. For example, Physica A publishes some 3,000 pages each year. Why? Senior physics professors have well-financed labs with five to 10 Ph.D.-student researchers. Since the latter increasingly need more publications to compete for academic jobs, the number of published pages keeps climbing. While publication rates are going up throughout academe, with unfortunate consequences, the productivity mandate hits especially hard in the sciences.

Only if the system of rewards is changed will the avalanche stop. We need policy makers and grant makers to focus not on money for current levels of publication, but rather on finding ways to increase high-quality work and curtail publication of low-quality work. If only some forward-looking university administrators initiated changes in hiring and promotion criteria and ordered their libraries to stop paying for low-cited journals, they would perform a national service. We need to get rid of administrators who reward faculty members on printed pages and downloads alone, deans and provosts "who can't read but can count," as the saying goes. Most of all, we need to understand that there is such a thing as overpublication, and that pushing thousands of researchers to issue mediocre, forgettable arguments and findings is a terrible misuse of human, as well as fiscal, capital.

Several fixes come to mind:

First, limit the number of papers to the best three, four, or five that a job or promotion candidate can submit. That would encourage more comprehensive and focused publishing.

Second, make more use of citation and journal "impact factors," from Thomson ISI. The scores measure the citation visibility of established journals and of researchers who publish in them. By that index, Nature and Science score about 30. Most major disciplinary journals, though, score 1 to 2, the vast majority score below 1, and some are hardly visible at all. If we add those scores to a researcher's publication record, the publications on a CV might look considerably different than a mere list does.
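For readers unfamiliar with how the score is computed, the standard two-year impact factor of a journal for a given year is, roughly,

impact factor (year Y) = citations received in year Y to items the journal published in years Y-1 and Y-2, divided by the number of citable items the journal published in Y-1 and Y-2.

A score of 2 thus means that the average recent article in that journal was cited about twice in a year.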

Third, change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal's Web site. The two versions would work as a package. That approach could be enhanced if university and other research libraries formed buying consortia, which would pressure publishers of journals more quickly and aggressively to pursue this third route. Some are already beginning to do so, but a nationally coordinated effort is needed.

There may well be other solutions, but what we surely need is a change in the academic culture that has given rise to the oversupply of journals. For the fact is that one article with a high citation rating should count more than 10 articles with negligible ratings. Moving to the model that Nature and Science use would have far-reaching and enormously beneficial effects.

Our suggestions would change evaluation practices in committee rooms, editorial offices, and library purchasing meetings. Hiring committees would favor candidates with high citation scores, not bulky publications. Libraries would drop journals that don't register impact. Journals would change practices so that the materials they publish would make meaningful contributions and have the needed, detailed backup available online. Finally, researchers themselves would devote more attention to fewer and better papers actually published, and more journals might be more discriminating.

Best of all, our suggested changes would allow academe to revert to its proper focus on quality research and rededicate itself to the sober pursuit of knowledge. And they would end the dispiriting paper chase that turns fledgling inquirers into careerists and established figures into overburdened grouches.

Mark Bauerlein is a professor of English at Emory University; Mohamed Gad-el-Hak is a professor of mechanical engineering at Virginia Commonwealth University; Wayne Grody, Bill McKelvey, and Stanley W. Trimble, respectively, are professors of medicine, management, and geography at the University of California at Los Angeles.

Comments

1. 22067030 - June 14, 2010 at 09:54 am

So Professor Bauerlein proposes to abolish Pareto's law. If we published less, then, of course, the weaker stuff would never see the light of day and waste our time. We would be back in the old days, when publications were not packed with pablum.

But if we take off the rose-colored glasses, we would see that all that has changed is the number of practitioners, and the technology that makes the expansion possible. Old abuses have been replaced by new ones. The reason why the old abuses are forgotten is that that more effective editor -- the textbook writer -- has selected the stuff we now remember and left everything else on the cutting floor, so that we forget it ever existed.

But the truth is, the more things change, the more they remain the same...

---GLM

2. mariadroujkova - June 14, 2010 at 10:23 am

Someone has to break the news, pronto: the internet exists. Moreover, social media exists.

How about this proposition: publish anything at all, online, in science blogs. Create and develop interest groups and informal, quick peer review networks based on blogging and wikis. Those papers that actually generated any interest at all through such free, public, open means can then be picked up for more rigorous and more formal scrutiny, and for further collective development. Those papers that did not generate interest can still serve as exercises for their writers, and references for their immediate circle of colleagues when the topic comes up in conversations.

We need more publications, not fewer - but more robust, instant, open, social mechanisms for sorting.

3. berkshire - June 14, 2010 at 11:29 am

Or perhaps look at the tenure and promotion requirements at most schools. If it is publish or perish, we encourage more efforts to publish, and with the expansion of journals that need manuscripts, how can there not be more research that is weakly prepared? Plus there seems to be an explosion of schools requiring research and publications as the primary means of determining tenure and promotion - so the cycle continues.

4. jabberwocky12 - June 14, 2010 at 12:29 pm

This article has so many things wrong with it, one doesn't know where to start, so I'll pick up on just three:

Impact factors? I never thought the day would come that someone actually suggested taking impact factors seriously. Nuff said.

Useless research? Some of the most important research published in science, when first published, was either seen as useless or plain wrong. Remember Einstein's papers? What a laugh that stuff was! What about George Boole, publishing that esoteric rubbish about binary algebra - I mean, honestly, whoever needs some rubbish like that?

Five or six pages? Why that arbitrary number? Why not 2 or 3? Or 8 or 9?

If the authors' own rules regarding useless writing applied, this would have been sent to the trash heap.

5. astridsheil - June 14, 2010 at 12:42 pm

I have never understood why the majority of advancement in academia is based on the sheer number of publications one has. I have seen so much schlock research published in my seven years as a college professor. I truly chafe at the notion that this "avalanche" gets us anywhere. If universities would create an equal two-tier track, one for research and one for teaching--and not require the teaching track to publish as much as the research track--then we might start to see a reduction in the amount of superfluous research. The teaching track would use a different measurement for productivity, such as the percentage of graduates who find employment in their chosen field within two years of graduation...this metric would certainly make a lot of administrators, legislators--and parents--much happier.

6. getwell - June 14, 2010 at 01:08 pm

The authors state, "...the number of "refereed academic/scholarly" publications grows at a rate of 3.26 percent per year (i.e., doubles about every 20 years). The main cause: the growth in the number of researchers."

I would suggest that the authors take a close look at the publishing industry, and one might discover that the "big players" promote the creation of more journals (at costly subscription rates) to off-set the Open Access phenomenon. In addition, "they" continue to develop complex and expensive specialized databases to package and control the content at high subscription costs...ask any Academic Librarian:)

Follow the money........

7. markbauerlein - June 14, 2010 at 01:38 pm

Your examples, jabberwocky, are Einstein and Boole. Can you give us some examples from the last 20 or 30 years (the time frame of our analysis) of material regarded as useless and now recognized as important?

8. ellenhunt - June 14, 2010 at 02:02 pm

Somewhat agree:

1. Pox of piffling papers - I once submitted a 120 page manuscript because it was all of a piece, separate findings about one thing that had an integrated set of conclusions. That is now 4 separate papers and 2 more unpublished. (Probably never to be published simply due to the time it takes to get them through.) With the internet, we no longer have to keep papers short to keep publishing costs down. I think we should encourage submissions that emulate the style of 100 years ago when a paper could be 100 pages or more. Sometimes it's the right thing. Part of that is that editors don't like long stuff. But neither do you like long stuff. So, buddy boys, we are blued and tattooed on the half-shell.

2. A plethora of pithy older papers uncited - I like old papers. I dig back through reviews and comb literature to find them and cite them. Most of them are better than the new ones. Sometimes we found out something new, but usually not. Yes, this rubbish for the rabble of "reviews" that rehash what was more clearly written in the first place is a plague upon our houses.


VERY disagree:

1. Impact factors? Are you brain damaged?
Did any of you review the algorithm that makes impact factors? Take a thorough look at exactly what it measures and get back to us. You will be astonished at how abominable it is. Impact factor is ridiculous to the point of blithering, blathering idiocy. The algorithm itself is a formula created by a cretin, inflated with pomp and circumstance, signifying next to nothing. It is rubbish.
To add to jabberwocky12's points, there are a set of papers out there that are just flat wrong which have high citation values because people have to reference them in their papers to explain why they are wrong. Then those same people write another paper citing themselves, ignoring everyone else, and get cited a bunch more times. It is one of THE MOST EFFECTIVE STRATEGIES for getting high citation counts.
I will also mention Mandelbrot. Mandelbrot is still bitter about his groundbreaking mathematics being rejected by all but the most obscure journals for so long. Most of the great papers did not appear in frontline journals.
In short, Daffy Duck could do better than your recommendation.

2. Short papers are a cure? That's ridiculous. Short papers are a major reason for a plethora of twittle. What next, see who can summarize their work in twitter? Sadly, some papers could be improved by cutting them to the length of a twitter post, but that is an editing problem. The major reason for sawing up publications into itty bitty chunks is page limitations. We have been schooled to think that is good writing when it's not. I would far prefer to read one paper containing the whole body of several years of work that puts it all together in 100 pages.
As I referred to above, if you go back into history, papers varied based on need. Some were relatively short. But most were longer than the 5-6 pages recommended here. Isaac Newton condensed his Tract on Fluxions (calculus) into some 48 pages. There are excellent papers on cancer from 100 years ago that ran 100 pages.

9. aolivez - June 14, 2010 at 04:43 pm

@ 6. getwell: ECONOMICS is certainly a factor. Growth in every sector is considered desirable by economistic thinking, which dominates research as much as anywhere else. "Publish or perish" is the academic articulation of marketplace competition.

10. markbauerlein - June 14, 2010 at 04:54 pm

Your rhetoric--"brain damaged," etc.--is a disservice in this space, ellenhunt. The goal of an article like this is to identify the problem clearly, then propose solutions. Constructive arguments and counter-arguments are welcome. In blogs it's best to speak as you would to someone across the table.

Note, too, that we didn't say only short papers. We suggested two versions, a short one for print publication and a long one for the web site.

And maria's suggestion--"We need more publications, not fewer - but more robust, instant, open, social mechanisms for sorting"-- has a key failing. You can't expand publication any further and sustain a "sorting" process. The bulk is becoming unmanageable.

11. lolabn - June 15, 2010 at 06:54 am

In much of Europe, at least in my field, people still think that publishing monographs is the way to go. Talk about wasted paper and resources! Maybe it would be easier to swallow if these tomes were to be published as a PDF but since hardly anyone here in Europe seems to know about Adobe Acrobat, I don't see that day coming soon.

12. tsb2010 - June 15, 2010 at 10:40 am

Very interesting article - and even more interesting comments, especially #2 and #8.
The authors of the article seem to completely gloss over the fact that much of the research is published because people want to get hired - so unless we change that, good luck getting "better" research. With 200+ applications for a tenure-track job, most everybody on the search committee turns into a bean-counter (it's easier to count quantity than quality). Besides, how do you compare article A and article B? With some luck, time will tell which one ended up being more useful for research (echo of #4).
As for the "useless" research... any research that is not fraudulent is not useless. It reveals something about Nature, and will, hopefully, be integrated into a "big picture" sometime in the future (1, 5, 10, 100 or 250 years). Instead of trying to concentrate on "big impact" (come on, really?) articles in Science&Co, we should concentrate on solid science - and on making sure that no sloppy research gets published.
The current system of journals is deeply flawed - it is simply too complicated to "get something out" to the rest of the world. Here the model proposed by commenter#2 seems like a fantastic idea. Just publish *everything* and work on better ways to connect the dots and to search (google?) for relevant research. The good stuff will rise to the top by itself (let time take care of that, as opposed to editors - who, we all know, are all humans, and cannot predict the future and the future needs).

13. markbauerlein - June 15, 2010 at 10:47 am

Our first recommendation, tsb, was aimed precisely at the incentive of hiring and promotion. And we don't believe that "any research that is not fraudulent is not useless." Research that does not make a meaningful contribution to knowledge is, indeed, useless--and worse than useless, for it takes time and labor away from other activity. That's the whole point. We have reached a stage in which the sheer volume of material is overwhelming the capacity for "good stuff" to "rise to the top."

14. tsb2010 - June 15, 2010 at 11:15 am

How do you define the "meaningful contribution to knowledge"? Who is there to say that knowing something more about the nature around us is NOT meaningful? For instance, knowing the expansion rate of the universe is for all practical purposes useless... should we not bother writing papers about it? Or ant navigation - why do we care how they orient themselves to the sun?

The fact is that experiments are being performed, using enormous resources (tax payer's money, researcher's time, animal lives, materials, energy) - the least one could expect is for the results to be published, so that at least somebody, sometime can make sense of it all. In fact, I would go so far as to say that even negative results should be published - they would help others to avoid the same mistakes in the future.

Yes, making sense of "it all" is hard - and it takes time and labor. But that doesn't mean that it should not be done. There's a big puzzle out there, and it seems extremely short-sighted to simply throw away the small puzzle pieces just because they are so small. As they say, somebody's noise is somebody else's data...

15. tsb2010 - June 15, 2010 at 11:17 am

PS. I should add that I'm talking about scientific research here (physics, biology, chemistry), not research in English

16. jabberwocky12 - June 15, 2010 at 12:53 pm

In response to comment 7, looking for more recent examples, (the last 20-30 years - the time frame of the analysis) of scientific research once regarded as useless, and now recognised as important:

Firstly, you're missing much of the point. Frequently, it takes many, many years for the great works to be appreciated - Boolean Algebra's value became apparent more than 70 years after George Boole's death. The same goes for many others (like Goddard's silly little rockets).

BTW - it's also a little cheeky of you, asking me to do your research for you.

But, since you ask, here are a couple of examples for you (I've excluded people like Robert Bakker (dinosaurs) and Lynn Margulis (whose paper was originally rejected by about 15 journals), because they fall just outside your 30-year limit):

- Warren and Marshall in the 1980s, arguing that bacteria caused ulcers. How could they have been so stupid when all the pharmaceutical companies in the world knew otherwise? I'm sure their 2005 Nobel Prize made the criticisms easier to swallow.
- Fernando Nottebohm's work on adult neurogenesis - "People in my own lab begged me to stop. I saw the pity in their eyes. They were saying, 'Fernando has lost it completely.'" Well, take a bow, Professor Nottebohm, for helping countless patients, and giving us all hope, especially those of us who may have 'lost it.'
- Stanley B. Prusiner's rubbish about prions (1982), in his own words, "set off a firestorm. Virologists were generally incredulous and some investigators working on scrapie and CJD were irate." Maybe he was just mad as a cow. Tell that to the 1997 Nobel Prize Committee.
- Warren S. Warren's work on MRI (1990s) was so stupid that his Princeton colleagues held a "roast," mocking his work with a bogus award, and his funding dried up. Well, I hope they roasted crow and humble pie, because they needed to eat it later.
- Miller et al (2007), who discovered that lap-dancers earn more money in tips when ovulating than when not - ok, I just threw that in to see if you're still paying attention (but, see below).

If you want current stuff, it's easy. Just look at any commentary by any "recognised" scientist where they describe someone else's work as "interesting" and "brave" and "adding to the debate." This is academic speak for "I don't have the guts to stake my reputation either way. If it's wrong, I can say I never supported it. If it's right, I can always say I recognised the germ of the potential very early on."

Of course, there will be useless stuff - but the point is that you won't always know the good from the bad for many, many years. (And yes, if you're not in the field, it is difficult to understand the value of discovering that lap-dancers earn more money in tips when ovulating (Miller et al. 2007), but sometimes you simply have to admit that you don't understand the particular field well enough. BTW - according to Google Scholar, that article already has 36 citations - that must have helped that journal's impact factor a tad. )

I hope these examples help you in your work.

17. pachy - June 15, 2010 at 02:56 pm

The article states:

"In the many discussions of the value of research, however, one rarely hears any mention of how much publication of the results is best."

This makes no sense. I'm not even sure it is a sentence.

18. markbauerlein - June 15, 2010 at 05:02 pm

The points by tsb and jabberwocky are well-reasoned, but I don't think they really address the points of the article.

To tsb, you rely too much on the principles of research and don't consider the practicalities. We believe it's fine for some research to end up overlooked and forgotten as long as the overall enterprise fosters significant research. It's a matter of degree--not too much ephemera and more than enough good work. We believe we've passed beyond that point, and that the sheer amount of insignificant work is actually hindering the production of significant work.

To jabberwocky, you cite cases we never raised--those in which researchers were denounced and mocked. Obviously, those figures touched a nerve in their fields, but we are talking about people whose work isn't responded to at all, not even in a negative way. It just gets produced and forgotten.

Now, I'm sure there are a few examples of overlooked work that suddenly was discovered and rightly appreciated. But once again, let's consider that in practical terms. How much insignificant material do you wish the system to produce? At what point do you believe that the steady increase in journal subscriptions and other costs stretches the system to a breaking point?

19. dogvomit - June 15, 2010 at 10:20 pm

These folks have too much time on their hands. Rather than insulting the hordes of productive individuals out there, maybe these authors should worry about their own productivity. This is one of the most ignorant articles I have read in the Chronicle in years.

20. dogvomit - June 15, 2010 at 10:25 pm

Consider this: Watson and Crick's paper describing DNA was less than a page long and revolutionized biology.

21. upallnight - June 15, 2010 at 10:48 pm

It seems that the practical problem would be sorting the useless research from the useful. Who will be the judge? The authors of this article would likely deem much of what is published in basic science as useless, as basic science identifies the nuances of whatever system is at hand. I agree with earlier comments. There are many instances in which basic science findings were not easily published or well-received. It's useful to be reminded of the case of Ignaz Semmelweis, whose work in the mid-1800s showed the lifesaving effects of handwashing by medical professionals. He ended up dying in a mental institution, after becoming the object of mockery.

We must acknowledge that findings that some might find "useless" today could in fact be found to be useful in the future.

22. enipeus - June 15, 2010 at 10:52 pm

I've read and greatly benefited from many papers that I've never had the opportunity to cite. Many top-notch papers are widely cited, but I'm not sure that citations alone are the mark of worthy research.

23. drwho - June 16, 2010 at 12:12 am

So we have an English professor and a professor of management as co-authors on an article that bashes science publishing, as well as no indication that the remaining authors, who appear to actually do science, have the faintest idea what they're talking about (Impact factors? Seriously?!).


24. boiler - June 16, 2010 at 12:17 am

I'm in the social sciences, not the hard kind, so maybe my views aren't well informed. But in my field, impact factors and citation numbers are a terribly unreliable guide to the quality of research. The highest impact factors tend to be associated with applied and interdisciplinary journals that can be cited by a lot of different people. Nothing wrong with such journals, but they don't include the core journals that publish the most substantive stuff in our discipline. Citation numbers, for their part, often have less to do with the quality of the work than the amount being done in that area. If you work in a field that lots of other people are working on, those people will cite you. If you work in an innovative or obscure area, that work will get very few citations, at least initially. Does that mean it's low-quality work? Not at all. It may be well-researched, theoretically innovative, and well-written, but none of that will get people to read and cite it if it's not related to their own fields.

I don't mean to suggest that impact factors and citation rates are meaningless statistics -- they reflect something real about journals and articles. But they're not the only way to judge the quality of research, and in some cases they're very poor ways. To restructure academic publishing around them, as the authors here suggest, would be a very serious mistake.

25. martythehound - June 16, 2010 at 01:50 am

The authors of this article begin with the assumption that there are two types of publications (and, by extension, research programs): "useful" (i.e. cited within 5 years post-publication), and "not useful/useless" (not cited within 5 years). This assumption is based entirely on a "utilitarian model" of thinking (i.e. the publication must be useful to others within a relatively brief (5 years) period of time to be of value).

The problem I have with this entire premise is that the authors seem to miss the fact that not all research is (nor should be) "applied research". In fact, university science programs not too long ago used to be refuges for research that is entirely *curiosity-driven*, with no application or usefulness in mind beyond describing or discovering something new to science. Much curiosity-driven research in biology, for example, is performed on non-model organisms in which the scientist may be the ONLY person in the world with an expertise. Publications arising from studying obscure organisms often take more than 5 years to be cited simply because there are few researchers studying those unusual organisms. Does this cause the research and resulting publication to have low value? I don't think so. As long as the research is original, well-designed, and shows that scientific knowledge has been moved forward (even by a small amount), I argue that it is of some value. How much value? Difficult to predict, in the short-term. However, just because an article is not referenced within a 5 year period is certainly no guarantee that it may not be of greater value in the future.

In my opinion, the authors' arguments are effective as long as they are limited to applied research fields such as biotechnology, medical science, industry, etc. The paradigm fails entirely, however, when applied to academics whose research interests are driven entirely by a pioneering sense of curiosity and a desire to go off the popular beat and path and venture into completely unknown territory simply because it is interesting and adds new knowledge to the scientific pantheon.

Harvard's Ed Wilson is fond of saying that if a young scientist really wants to make an impact, s/he should go to the uncrowded "backwaters" of science to study processes that few others care about. The truly pioneering researchers, like Ed, are driven by passion and insatiable curiosity, and most certainly not by journal impact factors.

26. trendisnotdestiny - June 16, 2010 at 03:10 am

The title of this article in some ways assumes that the authors have some expert knowledge of what low quality research looks like (as if they are neutral players in offering a fix)... That stated, this response attributes the best of intentions and motives as a means of grace and thanks for committing their ideas to this obdurate forum that is the Chronicle board...

First, we agree that publication quantity is not a fundamental academic heirloom to be coveted and replicated. Second, we also agree that the publication arms race is not sustainable nor is it practical for an informationally engorged ivory tower. Lastly, the authors are spot on in their comments about quantitative reductionism by bottom-line administrators; it is true that changes are necessary and there are so many wasteful efforts within and between...

However, your suggestions need some refinement.

1) "First, limit the number of papers to the best three, four, or five that a job or promotion candidate can submit."

Creating a reduced numeric standard for promotion doesn't really address the central issue in an academic world that is based on competition, individualism and financial justification. The central issue in a pay-to-play business model is INFLATION of PRODUCTIVITY. By limiting the papers up for consideration, you place even more pressure on candidates to get published in alignment with mainstream research streams, creating new interbred replications of the status quo... There is nothing to suggest that these four to five papers will result in your version of high quality research. In fact, I would argue that it would create a "more toxic academic politician-researcher" where the stakeholders are paid homage and the wannabees are turned into game-show or reality-series contestants: Survivor Tenure Stream

2) "Second, make more use of citation and journal "impact factors," from Thomson ISI."

As many have criticized here, less is more when it comes to impact factors... We actually have to work with PEOPLE in academe, people who do not want to be dehumanized by having their life's work and passion turned into a series of numbers for production or efficiency. Impact factors and citation stats are self-fulfilling prophecies that feed into the insecurities of the academic who wants to be lauded with acceptance and labels of rigor. They can be helpful as one source of information, but have ascended into a distorted level of competitive excellence... (As if the Michael Jordan of academia is going to walk down the street and have someone say, there goes the most cited and scholarly journal writer of the last generation, last year or last week.)

3) "Third, change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal's Web site."

Creating a new standard for length helps reviewers and committees (maybe), but doesn't necessarily translate into better research. Time is money (no doubt), but all of your suggestions have to do with efficiency of time. The scarcity of time will not be solved by these suggestions alone. The two-tiered idea has benefits and weaknesses, but is not some all-encompassing scholarly salve applied to the time-wounded academic.

The central issue involves changing how academics get paid (where their revenue sources originate, their stability, and their support from within and between). As long as 70-80% of funding comes from the corporate world, more is more becomes the mantra of the academic who is dependent upon competing amongst other academics for air space, income, research funds and stability... (not a great environment)
Let's not kid ourselves that reducing papers or pages will address the problems you fine authors present.

"Our suggestions would change evaluation practices in committee rooms, editorial offices, and library purchasing meetings. Hiring committees would favor candidates with high citation scores, not bulky publications. Libraries would drop journals that don't register impact. Journals would change practices so that the materials they publish would make meaningful contributions and have the needed, detailed backup available online. Finally, researchers themselves would devote more attention to fewer and better papers actually published, and more journals might be more discriminating."

Your suggestions, while seemingly well intentioned, are more hopes than realities. These comments above do not address the complexity of performance inflation in most segments of our financial reality with large institutions in this country at this time in our history. You can change the standards all you want, but there will be "a new normal" for this change that arises from the ashes of your intentions of high quality research. It might be best to focus on what you believe high and low quality research is.

27. busyslinky - June 16, 2010 at 03:43 am

Provocative article.

I do not agree with it and I agree with much of the criticism related to value determination and impact factor problems.

But more directly the authors state: "The main cause: the growth in the number of researchers."

After reading the article, I found that this cause was never truly addressed, and I didn't see it mentioned much in the comments. Is the conclusion that academia should be limiting the number of researchers or lessening the amount of research per researcher? Not a particularly good recommendation either way. Who decides what research gets done (discussed in much detail here), and who gets to do the research (not really discussed)?

I think we should be happy that so many new researchers are in society to help us better understand and evaluate research. Doing your own research and publishing it is something that is valuable not only from an outcome perspective, but also from a process perspective. Seeing others' research (poor and good) helps us all learn. I, like others here, do not mind the search process. The more information that exists, the more likely we will see diverse ideas and viewpoints.

I would be very hesitant to limit research due to problems of homogeneity that can be caused by the 'gatekeepers', both in terms of research and in terms of researchers. Let diversity and chaos reign. Evolutionarily speaking, it will make us all stronger in the end.

28. landrumkelly - June 16, 2010 at 05:33 am

"Even if read, many articles that are not cited by anyone would seem to contain little useful information."

"Seem"? "Seem"?? Seem to whom, and on what basis? How could you possibly make any such logical inference? Have you read these materials? Could you or anyone else possibly rule on their potential utility or worthiness a priori? No.

You are assuming that, if something is not cited, then it is not useful.

Verify that claim.

The potential usefulness of the findings is the key, and you have no way of assessing that. That does not keep you from making groundless inferences and sweeping generalizations about the value of the findings of various researchers. We face an information overload and have faced it for decades. Cutting off research funding does not solve the problem of how to deal with the vast quantity of information. You have offered no constructive suggestions for doing so, and how could you? How could anyone?


This article is based on some of the poorest methodology imaginable.

I will not be citing it.

Landrum Kelly, Ph.D.
Livingstone College
Salisbury, North Carolina

29. amnirov - June 16, 2010 at 06:33 am

Actually, 28, most journals go unread, uncited, unloved. People only really care about publishing their crud so they can make reappointment, tenure or promotion. And what the heck is wrong with that? I don't publish anything because I want it to be read. I don't care about readers. I publish for my own jollies. The journal system should re-evaluate its formal outlook and dispense with the fiction of readership and concede and celebrate and streamline production to reflect merely the formal vetting of academic papers.

I never read a journal anymore. I search for an article using an online database and read the PDF. I haven't held a physical journal in my hand for half a decade at least. And I don't care one toss where something was published or who wrote it, I only care if it agrees with something I'm arguing, or if it disagrees in such a way that I can ridicule it. One out of every twenty or thirty articles will argue for something new and interesting that I'm really interested in, and those are the ones that will get extensive quotations in my work, but those are so rare as to be useless for building any rules around.

30. mbelvadi - June 16, 2010 at 06:40 am

Thank you #27, for bringing out the problem with the article that was also bothering me as I read the other comments. Ultimately the article isn't so much about too much research publication as about too much research itself underneath the publications. That suggests that the deep agenda is to restrict academic freedom, that is, to somehow start to limit what areas of research faculty can actually conduct. The NSF and NIH are already pretty strong gatekeepers via their control over the most important funding sources for research, and there are numerous complaints backed by statistical data suggesting that those agencies tend to favor certain kinds of institutions over others.
What other mechanism do you propose to solve the problem of what you see as poor quality research, other than attacking the very last stage of it, the publication process? Trying to stop bad research from happening by restricting its access to traditional formal peer-reviewed journals seems like an inefficient way to go about it, to say the least, given the wide variety of options available through the Internet to go around such restrictions. Or was that not your point?

31. markbauerlein - June 16, 2010 at 06:57 am

Once again, the commenters here don't address the question of the avalanche of research publications in practical terms. #20 aligns productivity with quality--the fundamental error. Others regard the explosion of publication as a good in itself. Others ask, "Who's to decide what's useful and what's useless?"

These questions skirt the central crux: at what point does output overwhelm quality control? When does the volume of publication hinder cream from rising to the top?

32. busyslinky - June 16, 2010 at 07:34 am

"at what point does output overwhelm quality control?"

Many of us do not believe that we are even close to that point. Some of us have learned how to be quite efficient in identifying these things. With the explosion in research, there has also been an explosion in the technology to sort for the research we are interested in. We do not live in a unidimensional world of growth.

"When does the volume of publication hinder cream from rising to the top?"

We are not close to that volume. The hindering of the cream does not and will not occur. There are many ways to find this cream. Also, there are many more seeds where fruit will bear.

Maybe part of the issue here is that the basic premise is something we do not agree with. We don't believe there is a problem, so why do we need to offer solutions? I think that is what many of us are arguing.

33. x1234 - June 16, 2010 at 07:42 am

It would be interesting to get Mr. Bauerlein's take on publishing in the humanities where you find--to me at least--that the vast majority of published journal articles and monographs go unreviewed, uncited, etc. We have scholars who turn a pleasant journal article into a completely redundant monograph; you've got journals that hardly anybody ever really reads.

If the sciences are struggling under the pressures and practicalities of publishing, then what are the humanities going through?

34. tejackso - June 16, 2010 at 07:48 am

Hey you guys in the sciences, if it makes you feel any better; continual avalanches of redundant and just plain bad published research in the humanities have been such a problem for so long that nobody much talks about it any more. To tell the truth, I'm almost glad to hear that it's not just us...

35. x1234 - June 16, 2010 at 07:49 am

"Maybe part of the issue here is that the basic premise is something we do not agree with. We don't believe there is a problem, so why do we need to offer solutions? I think that is what many of us are arguing."

Yeah, but busyslinky, I wonder if you disagree with this article because publications are the currency on the job market and for tenure. To some, the prospect of any discipline publishing less creates anxieties about how one then gets a job or tenure.

36. tsb2010 - June 16, 2010 at 07:49 am

To the author (markbauerlein) - you write:
"We believe it's fine that some research to end up overlooked and forgotten as long as the overall enterprise fosters significant research"

Not to be disrespectful, but, as #23 already points out, how would an English professor be qualified to make such arguments about "significant research"? What is "significant"? This is the point that I and most other researchers on this board try to make (in vain, it seems). We are moving around in circles here, unfortunately. While your point is well taken with research on Shakespeare and the latest monograph or article on his 16th Sonnet, it is completely misguided (and dangerous) in the field of basic scientific research. None of the authors is doing basic research as far as I can tell, and they are thus not qualified to make such statements and demands. At the end of the day, all applied research is based on basic ("useless" you would say) research.

The basic problem with this article is that it has the attitude "I can't read all that's out there, thus we must reduce what's out there". Why not take the more positive approach and say: "I can't possibly read all that's out there; is there a way that we can organize this information?". Yes, it is hard. But we must do it, sooner or later.

37. harmindersingh - June 16, 2010 at 08:01 am

A crucial issue that has not been addressed in the article is the immense growth of higher education institutions around the world, especially in emerging economies but also in the developed world, in the last 2-3 decades.

These new institutions know that one way of improving their FT/BusinessWeek/USNews/etc ranking is to get their faculty to publish more. So, the faculty are asked to publish and they do so. Because there are only a few "high-impact" journals in every field, faculty members find it harder to publish. This leads to new journals being created. Since the goal is to publish in peer-reviewed journals and conferences, the creation of these new outlets meets the needs of the faculty of the new universities.

The academic world is no longer what it was 50-60 years ago, when scientific research was mostly restricted to the developed world and to fewer universities. The rapid rise in the demand for more and more faculty to be research-active has led to an increase in the supply of research outlets.

I think this issue is the **key** reason for the "avalanche" of research, low or high-quality. Hence, the suggested fixes might help only temporarily, as they do not address this issue. Personally, I sympathize with the authors and understand their pain, but resolving it seems impossible to me without a) reducing the pressure on faculty to publish, and b) finding some way of expanding what's considered to be "high-quality" without lowering the "quality".

38. x1234 - June 16, 2010 at 08:10 am

"While your point is well taken with research on Shakespeare and the latest monograph or article on his 16th Sonnet"


So Bauerlein is out of line commenting on a discipline he is not part of, but you feel comfortable making this statement? So to review: literature scholars cannot understand, much less comment on, the state of publishing in the sciences, but people in the sciences can derisively cast out scholarship on Shakespeare? And in the meanwhile, all research in sciences is vitally important? All of it, every last bit of it.

Good to know.


39. busyslinky - June 16, 2010 at 08:18 am

To x1234:

I am as cynical as the next person on the reasons why people do research. It is a currency for promotion and tenure, and yes that is one factor that is causing this growth in research. But, this factor has existed for a long time.

If this pressure results in knowledge (some may say it is poor quality knowledge, but knowledge nonetheless), does the motivation matter? Universities are supposed to be institutions that contribute to society by generating knowledge, and research does this. You provide rewards for meeting this goal.

It may sound trite and naive, but I believe that much of the research that is generated does supply new knowledge. Even rehashing old knowledge is valuable to reinforce the old knowledge.

I do not think that new knowledge should only be high impact, short term knowledge. I think encouragement through a reward system is acceptable. I do not think we have a knowledge overload.

I have gone through all promotion and tenure stages. I am as productive as ever. It is not because of promotion and tenure, but because I am interested in my world and would like to contribute to its advancement without having constraints put on me. So, don't put limits on our thoughts and our outlets, especially for such reasons as "a lot already exists." It will only do a disservice to our profession and our world.


40. tsb2010 - June 16, 2010 at 08:23 am

"And in the meanwhile, all research in sciences is vitally important?"
Vital = fundamentally concerned with or affecting life or living beings (Merriam-Webster)

According to your own wording, I dare say that scientific research is more "vital" than Shankespeare scholarship. Increasing our life span from 40 to 80 years in a century had little to do with Schakespeare, and everything to do with scientific research and its results.

41. x1234 - June 16, 2010 at 08:31 am


But the question is you feel comfortable saying that literature is not vitally important, right? You basically pulled rank on bauerlein and said that he had no business commenting on the sciences, but you feel comfortable commenting on literature studies?

42. x1234 - June 16, 2010 at 08:32 am


Also, its Shakespeare, not Schakespeare. It would seem that literature study could be vital, no?

43. tsb2010 - June 16, 2010 at 08:40 am

Low-blow with "Shakespeare" (happens when you type on a keyboard, no? especially with words that you do not write all the time). You missed the other typo: "Shankespeare". I must have touched a nerve...

According to the definition of "vital", no, it is not vitally important. Poor ol' William would be probably horrified to see what cottage industry his writing has produced. As they say - if you're not going to write yourself, you might as well comment on what other people have written. Or if you cannot create, you might as well criticize.

Please run a spell-check on my reply, why don't you? :)

44. tsb2010 - June 16, 2010 at 08:42 am

Also, please note that I was talking about "basic scientific research". I don't believe that Shakespeare scholarship qualifies.

45. x1234 - June 16, 2010 at 08:53 am

"As they say - if you're not going to write yourself, you might as well comment on what other people have written. Or if you cannot create, you might as well criticize."

So you're basically saying you have absolutely no knowledge of how English scholarship works. I gather that from the basic fact that you don't make the differentiation between criticism and scholarship. I also love how your model basically means that the humanities are basically worthless. The University should be science, because that's vital. The definition of vital says "affecting the life," and it would seem that literature never ever affects the lives of human beings. Heck, we decided to make that printing press for fun; it's not like the novel had anything to do with politics, ethics, science (gasp), or any of the vitally (there's that word again) important aspects of human existence. It seems you might want to close read (there I go throwing in useless jargon) the definition of vital and see that it doesn't just apply to increasing a human's life span.





46. tsb2010 - June 16, 2010 at 09:04 am

x1234: point well taken.

Doing the "close read" I would agree that art (literature included) is vital. One does not live by bread alone. What I didn't make clear in my posts is that I have nothing against art. The article we're commenting on is about "publishing" (in specialized literature, presumably).

What I talked about is "research on Shakespeare and the latest monograph or article on his 16th Sonnet" - and here I am not (yet) convinced that we need much more research than has already been done.

47. ak_ok - June 16, 2010 at 09:17 am

Mark et al.

Your intentions are good, methinks, but your dramatic one-sentence paragraphs are a substitute for rigorous analysis.

There is and always will be a mix of quality and quantity. Take journals in the social sciences for example. Top journals have stronger editorial boards, selecting the researchers with the best reputations (best = innovative, highest impact, most persuasive, etc.). They in turn ask the next level down of colleagues to review papers. In short, it's a pecking order, and the higher ranked journals have a more rigorous review process. Lower ranked journals have weaker boards and less rigorous review. Hence, they publish work that makes less of a contribution.

So the answer is, drum-roll...., journal lists. Different tiers of universities will have shorter or longer lists of a-level, b-level, c-level outlets. We will tend to read and cite only higher level journals than the journal we are targeting. But the practicalities of career requirements dictate publishing in a mix of journals.

What really irks me is the false dichotomy: top journals vs. junk journals, as if there is nothing in between. There is a continuous spectrum of journals, and where your school draws the line between a-, b-, and c-level journals is open to debate (and updates). If you have a transparent process for debating and updating those lines, there is no real problem.

Be fruitful and productive.

Peace.

48. janewales - June 16, 2010 at 09:24 am


The Shakespeare versus science debate that has suddenly broken out highlights a problem with the article's premise: once one begins to set up systems to limit/ control research, then someone has to decide what is and is not basic, vital, etc. And the decision-makers will act not only in their own proper realm, but also in areas in which they are happily, even aggressively, ignorant (tsb2010 can "believe" whatever s/he likes, but belief is not expertise). I would align myself with those who suggest that the more appropriate solutions lie in improving the sorting mechanisms that allow all of us to cope with an avalanche of published research. I did find a few of the article's suggestions intriguing; I like the idea of the short publication with the web-based supplement, for example, and have even suggested such a thing myself to a publisher I know when we were brainstorming about what the press could do to move with the times. I believe there was a discussion in the humanities, quite a while ago now (something to do with Yale is coming to mind), about limiting t and p files to a "best 5"-- that might be a promising avenue. But like some of the other commenters, I no longer "have" to publish, but I still do so. It has nothing to do, any more, with getting or keeping a job. I love to find things out-- and writing them up is, for me anyway, part of the process of discovery. I suppose I could then just keep most of the resulting essays to myself-- that does seem the logical endpoint of the original article.

49. abichel - June 16, 2010 at 09:35 am

The sheer volume of remarks on this matter proves the article's main point - academics like to hear themselves speak.

50. busyslinky - June 16, 2010 at 09:41 am

"The sheer volume of remarks on this matter proves the article's main point - academics like to hear themselves speak."

No. It is, "Academics like to see their words in print".

51. kittybware - June 16, 2010 at 09:42 am

The authors have made a suggestion that is not feasible considering the world of databases: "Libraries would drop journals that don't register impact"

Many libraries have already culled their print journals down to essential, well-known titles, so most unused print journals are already gone. As for online journals, they most often come to libraries in the form of a database -- a package of journal titles that is not customizable. You get 1,153 titles in the package, period. Oftentimes these 1,153 titles include many that are 'low impact'. If you want to see less exposure for these titles, you must ask the database providers to rethink their packages...and good luck with that!

Most research out there could be viewed as irrelevant to a particular individual or group...but one man's irrelevant article might be the article another person needs! IMHO this piece is a veiled attempt to promote further the anti-intellectualism overtaking America. We need more voices and venues, not fewer -- perhaps we also need better mechanisms for searching, collating, and evaluating all the information out there, but we don't need less opportunity for people to share their research and curiosity-satisfying explorations!

52. markbauerlein - June 16, 2010 at 09:44 am

Good comments going here, both for and against the article. To respond:

#32 doesn't believe that there is any problem. Try telling that, though, to librarians facing huge subscription costs and space pressures; or graduate students who find that publication requirements for a job keep climbing.

X1234 asks what I think of the problem in the humanities world. It's worse there (type "Professors on the Production Line" into Google for the full paper).

busyslinky and others warn against setting "limits" to research, but we have limits now and we've always had limits and we always will. The question is where you draw the line.

tsb makes the point that the problem can be addressed by having good "sorting" mechanisms. I agree, but are the existing ones effective today in handling the bulk?

Finally, ak_ok notes that there will always be a mix of good and bad research. Yes, but is the proportion getting out of hand? We believe it is.

53. maronmg - June 16, 2010 at 09:51 am

astridsheil #5. I could not agree with you more. Advancement in the world of academia should not be based on the "sheer number of publications" one has. In my opinion this is the #1 reason for the so-called "Avalanche of Low-Quality Research." I know of many professors in my institution who publish simply because they need to (to keep their jobs), but put in a half-hearted effort. Yes, I do agree that research and publications are important, but they should not be shoved down the throats of faculty. I am not one for research, and have been penalized for this at my institution. For me, I would rather spend time on what is most important, TEACHING, and on improving myself in this area. In reality, what are we being paid for: teaching or research? Ask any college student who they would rather have standing in front of them, an excellent teacher or an excellent researcher. I am sure that most students will choose the former. Hey, maybe I should do some "research" to prove my hypothesis :)
astridsheil, I really like your two-tier approach: one for research and one for teaching. Assess the researchers on their research, and assess the teachers on their teaching.

54. andyj - June 16, 2010 at 09:54 am

While no one would support taking up valuable journal space with poor science, the premise of this article is questionable. The authors cite a reduction in cited articles over two decades from 45% to 41%, hardly alarming in light of the rapid expansion of scientific journals. This is not strong evidence of an upswing in poor-quality research. Seems to me that the cup remains about half full. Let the reader decide what is valuable and what is not.

55. 7738373863 - June 16, 2010 at 09:58 am

Somehow, despite the volume of the present discussion, no one else has noted that much of the scientific research being published is sponsored research. That fact either makes the current volume of publication doubly pernicious, because there is a lack of serious evaluation and/or self-restraint operating at two levels--that of who gets funded and that of who gets published--or that fact casts doubt on the thesis that too much of current research is getting published. Just as the work of Einstein, Boole, and Mandelbrot may at one time have been dismissed, other work that was not dismissed was falsified (Popper) or resulted in the production of anomalies (Kuhn) that led to another approach to the problem, and sometimes that retheorizing or rethinking led to better results and/or a better understanding of the process or phenomenon in question. There is no way to assess the long-term impact or latent benefits of a scientific paper at the point of publication, and the current funding and evaluation systems force practitioners to keep an open mind regarding the impact of scientific research.

56. cleverclogs - June 16, 2010 at 10:06 am

I'm curious: how much research that is widely hailed as excellent when it comes out ends up being totally useless or, worse, detrimental? Knowing that would allow us to assess whether the problem is simply one of sorting or if there is also a problem with the peer-review process.

On a related note: if, as others have stated, once-ridiculed research later turns out to be useful, then it would seem dangerous to cut off publication for research that might, at the moment, seem ridiculous.

57. mucwp602 - June 16, 2010 at 10:44 am

I agree. Stop publishing Bauerlein's writings here. It is not scholarship and it is of very low quality.

58. tsb2010 - June 16, 2010 at 10:51 am

I think the article and some commenters want to implement what I would call "The Top-Down Approach":
One must limit the amount and "quality" of research that is published. Even more, one must even impose the format of the publication, impose a page limit (5? 6? 3.1415? pages), etc

As noted above, this involves more bureaucracy and somebody to play "God" and act as a gatekeeper.

Instead, I strongly suggest the "Sort Through the Junk" approach:
Publish anything that is not fraudulent (and have gatekeepers to check on that aspect alone). Data, after all, speaks for itself (it may take a while, but it does). Instead of implementing policies that artificially restrict research, fund research that addresses the "sorting question" (how to find data, ideas, hypotheses that are relevant for one's research).

59. physicsprof - June 16, 2010 at 10:55 am

Where does this notion that "the work of Einstein may at one time have been dismissed" come from? Please check your facts. It was 1905 when Einstein made his groundbreaking discoveries. Two of the subjects, Brownian motion and the photoelectric effect, were clean, non-controversial breakthroughs and were accepted immediately (the second one eventually won him his Nobel prize). The other two papers established special relativity. So for how long had they been ignored? Not at all! First of all, the works of Lorentz and Poincare had already laid down some foundation (worked out properties of the group of Lorentz transformations), so Einstein's work was not performed in a vacuum. Within two years other famous scientists (Nobel prize winner Planck, Minkowski, etc.) joined the field. Within a year experimentalists tried to disprove Einstein's energy-mass relation, and within three years they confirmed it. Note that the number of scientists a century ago was a tiny fraction of today's crowd, and by all standards the influence of Einstein's ideas was broad and quick.

60. willynilly - June 16, 2010 at 11:08 am

Nothing could be more humorous than to see Mark Bauerlein's name attached to this srory. The Chronicle will certainly win an award this year in the Comic Catagory for this piece of ----. Here is a man (Bauerlein) writing this drivel, while he himself spends every available hour combing through landfills and dumsters searching for the lowest of low quality research items. Once found, (and I must admit, he is quite good at it) he believes he can cleverly manipulate the item, and re-present it as remarkably astute evidence of the value, creativity and/or savior-like quality of some wacko scheme aligned with the extreme right wing movement. He particularly likes to put a most positive spin on the (low of the lowest) research findings of such notable intellectuals as Limbaugh, Beck, Hannity, Coulter and the world renowned research department at Fox News Network. All reasonable people such as moderate Republicams, most all Democrats and a good number of the members of long established third parties find Bauerlein and his buddies a real threat, in that they work hard to divide our nation by inflaming the lowest instincts of the undereducated, the poor, the angry and the disasssociated - the so-called Rove Formula for election success. Bauerleins' name as a co-writer in this piece, is no more than another effort to disguise the mans true motives in a cause intended to appear as a reasonable effort to improve the sea of research. The most insulting aspect of his work is that he believes that we readers are so inastute that we swallow his cool-aid. Whatever Bauerlein writes, singly or in concert with others, always has a distinctive odor - as evidenced at how many posters retch at all his writings. Notice above how many tines Bauerlein found it necessary to become defensive in response to a reader's post. How many of his co-authors found it necessary to add a defensive retort?

61. maronmg - June 16, 2010 at 11:11 am

Will you people please get your noses out of the air? I enjoyed Bauerlein's article above, on a topic that needed to be written about. In my opinion (I have no "research" to document this) it is a very informative, well written piece.

62. willynilly - June 16, 2010 at 11:14 am

P.S. Sorry to report this, but if Bauerlein chooses to praise you for the quality of your post; he is really announcing to the world that you are stupid. You took the hook, minus the presence of a worm.

63. tsb2010 - June 16, 2010 at 11:16 am

Good points from #59 - but the amount of "low-hanging fruit" has diminished - it is asymptotically harder to advance the field to the same degree as researchers did 100 years ago.

Accelerating a particle to 90% of the speed of light? Piece of cake. Going the extra few percent higher gets progressively harder - that's why we need CERN. Maybe research in general is not so dissimilar - when you start a new field (relativity theory) you are bound to make a larger contribution than when you enter it 100 years later. What about cancer research? HIV research? Progress is bound to be incremental - until (maybe) a BIG IDEA comes along. But since research doesn't happen in a vacuum, it is hard to predict which of the little "insignificant" steps along the way helped out...

64. frankschmidt - June 16, 2010 at 11:37 am

As a member of P&T committees, I have wrestled with the question of numbers vs. quality many times. As a researcher, my struggles to keep up with my reading make me yearn for a field with neither literature nor competition (either a backwater or a frontier, take your pick.) There are, however, a number of problems with the modest solution in the article.

Blind reliance on a single number is the best way to go bankrupt (see crisis, financial, involvement of Wall Street Quants in). Reliance on the number of published articles is foolish, but so is relying on "impact factor" or its gussied-up relative, the H-index.

The problem has no easy solution. Should big labs be allowed only a few publications? Should small labs be shut out because of their outsider status? Remember that refereeing is not blind - a submission from Harvard will get more credence than one from a small, obscure place.

Alas, this comes down to the same state as so many issues - we have to apply our best judgement with imperfect information. Too bad we are so lawyered up that we hesitate to do so.

65. recurver - June 16, 2010 at 11:41 am

This seems to be one of those "The kids today..." articles, lamenting new problems which are, and are not really, different from the problems of yesterday.

Yet several points remain to be clarified, or re-clarified.
First, remember, Shakespeare couldn't spell his own name, so why should any of us be so particular about it? ;-)

In response to #59: The truth remains that the facts of today are often not the facts of tomorrow. Thus, what is useful, fruitful, and widely considered citeable today might not be so useful in 100 years. We all can see and recognize that historical truth, or matters of concern, if you will, change from age to age, from generation to generation, century to century. In this, MB et al. are simply in error. There are no methods of today which will stop us from "killing" as useless research that the future will find to be worth reanimating as the core of some new area of study. The only solution is to increase publishing, to ensure a record of such work is created and maintained.

Next, several people have mentioned "floating cream" theories of publishing; in fact, it seems that our authors are indebted to such an idea. But what if merit is merely the fashion of the moment, without any relation to actual value or worth? (See previous point.) Then what? Remember that merit is the lie professors tell themselves when they order their graduate student to do something stupid and it fails (see phdcomics.com).

Finally, this sort of conversation is precisely the use of conversations across disciplines. It is unfortunate, however, that here we have an effort by those who should be the folks destabilizing the hierarchies of "fact" and merit that seems, instead, to be largely beholden to the mechanisms of legitimation of the "status quo." The risk here is that the argument is simply self-serving.

There has been a lot of talk about "short" forms of scholarly work. This is a coded call to differentiate between top-tier scholars (at Harvard, U of M, etc.) and mid-level scholars. Such differentiation is steeped in merit and is pure garbage. The greatest brain drain we face is that somewhere out in the vastness of humanity, most potential is snuffed out by the inequities of our social world. The only way to combat this tragedy is to broaden the net of all parts of what we do. We must recognize that there is no real merit difference between any of us, although there might be a commitment difference.

In short, broad inclusivity is more important than making it easy for Yale prof X to find just the right article and making sure it is short enough for them to easily get the info that they want.
Modern search engines have made research through article databases easier than ever, yet now we have calls for restricting the amount of those articles?
Why?

66. educationfrontlines - June 16, 2010 at 11:42 am

1. I question the value of citation indices for judging the value of science in some subdisciplines. In describing new species, and in particular in publishing a monograph revising a group, if you do your work well, you will quiet the field until, decades later, another collection of undescribed relatives surfaces. But if a revision is heavily cited, it is likely an indication that the author made many mistakes and these are corrections. Counting publications is sadly a sign of administrator beancounting, and leads to the least publishable unit, etc.

2. I am worried about the proliferation of new online journals and a new breed of cyber-editors that may not be careful "gatekeepers." I received an e-mail request to review a paper with the paper attached, and it was out of my field of study. I replied to that effect and received the response to go ahead and review it anyway. I sent back the names of several in the field who would...but after walking away from my computer to cool off.

3. The environmental concern about print is baseless. Print journals are carbon sequestration, using non-acid paper, coming from nearly 60% recycled material and storing carbon for 500 years, and not producing toxic wastes. Electronics now exceeds the carbon footprint of the whole airline industry (you are eating up energy reading this) and produces highly toxic waste metals after less than a decade of life. There is as of yet no effective archive function for orphaned publications, and the turnover of hardware and software is less than a decade. The technology burden on us is terrific. My paper books and journals are paying for your electronic environmental carbon sins.

4. These comment threads may be the electronic equivalent of a hallway chat, but I still appreciate it when folks take responsibility for what they say by identifying themselves by name. Does anyone publish anonymously?

John Richard Schrock

67. texasguy - June 16, 2010 at 11:59 am

A more objective way to evaluate the impact of research papers is to look at the number of times they are cited. In my personal experience, this has much more to do with the contents of the paper than with the venue where it was presented. Papers on "hot" topics presented at mediocre venues are cited much more than papers on more sedate topics published in the most prestigious journals.

This is not to say the solution is perfect. First, it takes at least two or three years to measure the impact of a paper. Second, it penalizes scholars working on niche topics while rewarding authors of quick and dirty papers on "hot" topics.

68. procrustes - June 16, 2010 at 12:35 pm

Aside from asking MB why he chose to take on science rather than cleaning his own Augean stables in English, a few observations:

1. We have a dysfunctional system in which publication is driven by the supply of articles and books rather than by reader demand.

2. Economic factors will force a change. Most of this publication is supported by institutional funds (library subscriptions/purchases, page charges/open access fees, etc.). That money is drying up. Like stock market and real estate bubbles, the publishing bubble can and has gone on far longer than any rational person would have predicted. But the reckoning will come eventually. Some analysts are already warning that the big publishers like Elsevier will face great pressure on their margins.

69. clumma - June 16, 2010 at 12:54 pm

@mariadroujkova Unfortunately, I cannot upvote your comment. -Carl

70. bdbailey - June 16, 2010 at 12:58 pm

Given the current system, there appears to be enough cream that gets tossed out with the trash to nullify the cream-rises-to-the-top argument. It ignores the fact that reviewers often reject research that they don't agree with, or that uses new methodologies that they simply do not understand. Given that reviewers are acknowledged "experts" in their fields, is there not an inherent conflict of interest when they are asked to review research which might challenge their own?

Of all the arguments here, perhaps open access has the most merit. Let's submit everything to Google. They will figure out how to organize and sort it.

71. abichel - June 16, 2010 at 01:01 pm

"The sheer volume of remarks on this matter proves the article's main point - academics like to hear themselves speak."

No. It is, "Academics like to see their words in print".

Same thing...speech is speech.

72. markbauerlein - June 16, 2010 at 01:01 pm

procrustes gets to a crucial factor in the problem: money. Recurver says, "The only solution is to increase publishing, to insure a record of such work is created and maintained." But publishing isn't free. It also burdens other researchers, and makes a "Sort through the Junk" approach all the more needed. What happens, though, if the junkpile is just too darn big?

73. dank48 - June 16, 2010 at 01:03 pm

I don't think there's any stopping avalanches, natural or otherwise, and I wouldn't suggest trying. There's no doubt that there's one hell of a lot of published research, most of it less than essential to civilization, but who's to say what should or shouldn't be published?

Theodore Sturgeon, defending science fiction from the charge that it's crud, pointed out that ninety percent of everything is crud. It may be a somewhat harsh comment on academic publishing, but no more so than on publishing as a whole. Most stuff that gets published, academic or otherwise, isn't worth the paper it's printed on, in the ultimate judgment of the world it's presented to. But you have to be willing to shovel dirt if you want to find gold.

As a generally rather libertarian person, I think it's a good thing that belling this cat is impossible, because I really don't think it should be done at all. What authority would have the right to decide what could and could not . . . you get the idea.

If we want to save costs, not to mention trees, electronic publishing is a natural. Journals great and small could be and should be distributed quicker, easier, and cheaper in any of various e-formats.

Still, Mark, it's quite a distinction to be personally attacked by someone capable of three distinct punctuation errors in less than one sentence: "Bauerleins' name as a co-writer in this piece, is no more than another effort to disguise the mans true motives . . ." Not to mention misspelling "Kool-Aid" and "times" later, although the latter was doubtless just a typo, and I'm an ungracious clod to mention it.

74. princessleia - June 16, 2010 at 01:13 pm

I was delighted to see this topic raised, and profoundly disappointed by the misplaced focus of the article on the supposed consequences of poor scholarship. I would much rather see a discussion of what constitutes poor scholarship. I believe there is in fact a vast body of useless and inane material that is passed off as meaningful. Science is the most solid, because it's easier to demonstrate that something is new and correct. It's the rest that can easily devolve into a slippery slope of insipid drivel, especially when some correlation between two parameters is found, which is often simply stating the obvious, and then all kinds of speculation on causes ensues with no substantive supporting data when there are dozens of possible relevant parameters. A lot of this makes the press, and those are the relatively good studies -- I shudder to think of the iceberg of junk beneath!

75. recurver - June 16, 2010 at 01:31 pm

Sorry for the sloppiness: posting a rather longwinded response just before a lunch date was clearly ill-advised.

76. x1234 - June 16, 2010 at 01:50 pm

"What authority would have the right to decide what could and could not . . . you get the idea."

Maybe I'm misreading the sentiment here, but in publishing, editors and anonymous peer-reviewers have the authority. Perhaps science publishing is radically different from humanities publishing at the editorial level, but the peer-review process necessitates a chosen few to be the arbiters of what is and is not worthy of publication. It's interesting that nobody (that I can remember) has mentioned how arbitrary the peer-review process can be. In the humanities, publishing seems (sometimes) to be a flip of a coin. I'm curious: is it not the same in the science fields?

77. dank48 - June 16, 2010 at 02:00 pm

Editors, publishers, reviewers, and so on have limited authority, according to their positions within whatever organizations they work for and in. I meant, what overarching [perhaps governmental] authority could decide who could and could not publish this or that or the other? The current situation is chaotic, huge, explosive and--this is my point--out of control.

I think it should be out of control, in the sense that publication is not something that should be under any one entity's control. Thanks, James Madison. Freedom of the press doesn't mean manageability, neatness, or comprehensibility. Good thing, imo.

78. tuxthepenguin - June 16, 2010 at 02:04 pm

"What authority would have the right to decide what could and could not . . . you get the idea."

Precisely. If universities think it is good to have faculty who are active in research, even if someone thinks the output is of low quality, how is that any of their business?

The thing about the low quality research is that, according to their own definition, it's rarely cited anyway, so it's not honest to claim it slows down other research.

That's a critical point, because without it, all they have is, "I think other people are doing things of no value." I think the topic of this article is important, but the authors have written it in such a way that it is of no value. Yet I have no business telling them not to publish it.

79. perspolis - June 16, 2010 at 02:27 pm

There certainly exists some truth in the data presented here. But the fact is knowledge expands exponentially and so does the population of students and experts needed to discover, learn and teach that knowledge. At the very least, what is proposed in the article can be considered an elitist approach by the 'old guard'. Higher education and research, much like the rest of the world, has marched on. Let's concentrate on what our world needs today and into the future and how to best communicate answers to these needs.

80. princeton67 - June 16, 2010 at 02:35 pm

1. With a Ph.D. in English, I offer to become the Supreme Arbitrator of what science research is fit to print.
2. Why speak only of "scientific" research? Look at the stuff (swearing not allowed) from the humanities and arts. Remember physicist Alan Sokal's "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity"?

81. gchaucer - June 16, 2010 at 02:37 pm


A point that seems to be important but has been overlooked is that each of these journals has an editor and a review board. If bad research is being published (or good research is excluded), they are the gatekeepers. In my experience in the social sciences, the quality of the review process and editorial decisions in academic journals varies widely. I don't believe that there is a paucity of problems in the world for researchers to tackle, so I disagree that the volume of publications is a problem. Rather, the problem is one of quality scholarship that addresses pertinent issues and questions.

We need more editors who are more transparent and more accountable to their disciplinary peers, and more well-prepared reviewers willing to invest more time in conducting thorough and fair reviews (even when a piece challenges their prevailing paradigm or the received view in a discipline). Editing and reviewing are among the most important -- and thankless -- contributions a scholar can make. The low level of recognition and weight given to this work in academic productivity measures makes no sense given its crucial role in sorting the solid from the weak scholarship. I suggest that improving the editorial and peer-review processes would do more to improve the quality of published scholarship than any of the suggestions offered by the authors. (Having said that, I do like the idea of adopting the practice of job/tenure applicants submitting what they think are their best half-dozen pieces rather than merely maximizing the numbers.) If quality rises through a reformed publication process, the volume level would be a welcome problem we could live with (and benefit from).

Patrick O

82. x1234 - June 16, 2010 at 02:39 pm

"Editors, publishers, reviewers, and so on have limited authority, according to their positions within whatever organizations they work for and in. I meant, what overarching [perhaps governmental] authority could decide who could and could not publish this or that or the other? The current situation is chaotic, huge, explosive and--this is my point--out of control."

I think I'm a little confused, because I don't remember the article calling for any governmental agency that would decide these things. If I read Bauerlein's comments correctly, I don't think he would advocate such a thing. As for editors, publishers, and reviewers, it's really difficult to maintain that their authority is limited when they decide what gets published. They're the whole ball game: get them to approve your monograph or article and you get published, and a job, and tenure. They also have a say in research funding. To argue that this is a limited authority isn't quite right.

83. mariadroujkova - June 16, 2010 at 02:44 pm

markbauerlein wrote: "And maria's suggestion--"We need more publications, not fewer - but more robust, instant, open, social mechanisms for sorting"-- has a key failing. You can't expand publication any further and sustain a "sorting" process. The bulk is becoming unmanageable."

Ah, but social media scales up indefinitely, in the way hierarchical structures do not. It's becoming unmanageable precisely because obsolete hierarchical review structures can't cope anymore. I don't feel I will do this area of expertise justice if I try to explain it in a comment. I can suggest some light reading, for example, "The Starfish and The Spider" (http://www.scribd.com/doc/10521739/The-Starfish-and-the-Spider) or "Here Comes Everybody" (http://www.shirky.com/herecomeseverybody/). Alternatively, as a case study, I suggest perusing the history of Wikipedia (from 20 articles the first year, when they used the traditional review methods, and on): http://en.wikipedia.org/wiki/History_of_Wikipedia

@clumma - thank you!

84. tsb2010 - June 16, 2010 at 02:51 pm

Just occurred to me that the title of the piece we're commenting on is:
"We Must Stop the Avalanche of Low-Quality Research"

Who is "we"? Why "must"? What is "Low-Quality Research"? These are ill-defined throughout the article (and MB's comments throughout). And I find it (as other have) ironic that all this is coming from an English professor.

God help us if the day comes when English professors are responsible for science policy.

PS. What about: "We Must Stop the Avalanche of Low-Quality Chronicle Articles" ?

85. ere591 - June 16, 2010 at 02:58 pm

I quickly googled the publication lists of all the authors of this essay. All of them have more than 500 'research/publication' citations, granted some of it is repetition of the same material. Question: How many of their publications are high quality, high impact, used fewer resources, etc., etc.? I think they should practice what they preach. Quality/junk is in the eyes of the beholder. Enough said.

86. juvenal - June 16, 2010 at 03:46 pm

What an uproar! The bellows of many oxen being gored. MOST entertaining.

87. markbauerlein - June 16, 2010 at 04:14 pm

The question "Who is to say what is good or bad?" is an odd one to ask in fields operating on peer review all the time. And if "Quality/junk is in the eyes of the beholder," as ere591 says, then the discipline has no claim to disciplinarity.

Also, tsb, I am only one author out of five, and I am part of the project because of past work I've done on productivity in the humanities.

tuxthepenguin says, "The thing about the low quality research is that, according to their own definition, it's rarely cited anyway, so it's not honest to claim it slows down other research." But this is to ignore the amount of time and labor that goes into peer review of "low quality research," not to mention the necessity of later researchers including it in literature reviews as they develop their own projects.

Finally, there is an excellent essay on citation impact entitled "Ranking Political Science Journals: Reputational and Citational Approaches," by Michael Giles (a colleague at Emory) and James Garand. It finds problems in measuring work by citation figures, but contrary to people here who dismiss it, it also finds value in the method.

88. dank48 - June 16, 2010 at 04:28 pm

X1234, there was no explicit call for governmental control of the avalanche. But as Tsb2010 asks, who is "We"? Who "must" stop the avalanche? Who decides what's "low-quality"?

Those editors et al. are not "czars," thank heaven. They're overworked, underpaid, and so forth individuals, each doing part of the whole job. Nobody's in charge, and imo that's a very good thing. The current situation is an avalanche, or a flood, but I think that's better than a desert. Consider the situation of biological sciences in the USSR when Lysenko was in control of what could and could not be done, just to take an extreme example.

89. davi2665 - June 16, 2010 at 04:57 pm

University tenure and promotion (and hiring) committees have abdicated their responsibility to evaluate the quality of research rather than its weight and volume. Too many researchers have mastered the least publishable unit (LPU) and flood the literature with garbage; even if one could assemble all of their droppings in the journals, the totality would still be useless. But it impresses the committees with the numbers. I believe that a promotion and tenure committee should give intense scrutiny to only 3-5 key papers of the candidate's choice, and should solicit outside, non-crony, detailed review by highly regarded leaders in the field, providing a modest stipend for the effort (which most academics find hard to resist).

Too often, "high impact" factors are a measure of whether the researcher has appropriate cronies on the editorial boards who can preferentially identify "supportive" or "sympathetic" reviewers.

Better yet, stop turning out Ph.D.s whose presence in a department has been to act as cheap labor for established laboratories, only to end up in a place where few or no significant discoveries emerge.

The academy's proposed solution to "too many researchers" and "too many useless publications" that are never read or referenced is to rally for yet more taxpayer money to support yet more research, a Malthusian model for endlessly expanding its own support. Perhaps having the institution pay a cost-share or matching component on funded research would force the universities to promote only the conduct of research that their best scholars believe could actually make a difference. As long as a university can endlessly demand more, more, more in extramural funding from every faculty member as a criterion for remaining there, the proliferation of mediocre researchers turning out tripe will continue.

90. jtradzilowski - June 16, 2010 at 05:10 pm

Interesting article, good debate (I publish a lot and I have worked as a journal editor). Similar problems exist in the social sciences and humanities. Citation counts are an imperfect measure of success but still useful. In some fields, not being cited usually means you've violated some ideological canons or you're on the outs with the "star" scholars. In these fields, the proliferation of journals has in some instances served to break an ideological stranglehold. The price is sometimes poor-quality research. I only wish quantity and quality of research were valued more. Major departments in my field are filled with dead wood who've published only a few articles but have checked all the right ideological boxes (post-modernism, whiteness theory, race, gender, etc.). Those who parrot the party line get cited endlessly; those who go in a different direction get punished. Fewer is not necessarily better. Incompetent hacks will always be popular in some quarters.

The biggest weakness is peer review. Too few take it seriously, too few do it well. There is no reward, and some editors treat serious and critical reviews as an irritation. ("Don't you know we have a journal to fill?") It can also be easily manipulated to enforce one's personal or ideological biases. (And let's not get started on the quality of book reviews.) Why not treat peer-reviewing as equivalent to publishing for purposes of tenure and promotion?

91. danfrog - June 16, 2010 at 05:20 pm

Well, the authors told an interesting story about the volume of research in the field; I just wish they had been a bit more thorough and careful, as a lot of their comments don't seem to match up with the actual practice of scientific publication. Specifically:

"Consider this tally from Science two decades ago: Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication."

First, why is this relevant? Are the authors assuming that the other 55% are junk? Why? As others have pointed out, important findings can go unrecognized for ages. A senior colleague of mine takes great pleasure in digging up obscure papers that presaged (and refuted) the fashionable new theory of the moment. Why is this bad?

Second, even if the assumption is true, what should we do about it? The obvious problem is that the reviewers and editors don't have a crystal ball and can't predict which papers will end up going uncited. We can only do our best to critique a paper and let the field do the rest.

(More in a moment, but I lost the first version of this and want to submit it lest I lose it again.)

Second, even if

92. danfrog - June 16, 2010 at 05:36 pm

(Sorry about the hanging phrase there.)

"For example...professors often put all their students' names on multiple papers, each of which contains part of the findings of just one of the students."

Professors are putting students' names on papers to which they did not contribute? Really? I've never heard of such a thing, and no specific examples are provided. ("Gift authorship" for senior authors is another matter, and it's probably how that prof got to 450.) Students who received authorship in such a way would be badly embarrassed when they had to explain to an interviewer what their contribution to that paper had been.

"Experts asked to evaluate manuscripts...pass the burden along to other, less-competent peers. We all know busy professors who ask Ph.D. students to do their reviewing for them."

Are the PhD students necessarily less competent? The buzz I've heard is that grad students, postdocs, etc. tend to be the best reviewers (see http://sciencecareers.sciencemag.org/career_development/previous_issues/articles/0980/peer_review_techniques_for_novices/ for an example). The expert should obviously give the student credit, but what's the problem here?

* "If only some forward-looking university administrators...ordered their libraries to stop paying for low-cited journals, they would perform a national service."

This is an unfortunate comment because it reveals the authors' ignorance of journal subscription practices. They are essentially sold like cable television--you have to buy a package and cannot limit yourself to the six channels or six journals you want (or if you can, it'll be a zillion times as expensive).

* "Second, make more use of citation and journal "impact factors," from Thomson ISI."

Another unfortunate comment, because it ignores the (spirited) discussion about ways of measuring productivity, overlooks the obvious and well-established problems with impact factors, and omits the many possible alternatives (such as the h-index, the g-index, etc., etc.). Mark, leaving aside your criticisms of other people's manners, why did you and your coauthors recommend the Thomson impact factors instead of something else? Of course, the authors seem to believe that the following is true:

* "For the fact is that one article with a high citation rating should count more than 10 articles with negligible ratings."

Lots of people would be happy to hear that, including Jan Hendrik Schon and Woo Suk Hwang. Those are extreme cases, of course, but I imagine that that attitude would also please Freud, B. F. Skinner, and any number of other people whose work is cited frequently but disparagingly.
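
(For readers who do not know the alternative metrics named just above, here is a minimal sketch, in Python and with a purely hypothetical citation record, of how the h-index and g-index are computed from per-paper citation counts. It illustrates only the standard definitions of those two numbers, not anything proposed by the article's authors.)

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    # Largest g such that the top g papers together have at least g*g citations.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

record = [45, 22, 9, 8, 4, 4, 1, 0, 0]   # hypothetical citation counts
print(h_index(record))   # 4: four papers with at least 4 citations each
print(g_index(record))   # 9: the top 9 papers together hold 93 >= 81 citations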

93. cunningham2 - June 16, 2010 at 07:14 pm

Truth is that any attempt at review/oversight/assessment inevitably shapes what it was merely meant to measure. Nothing we can do about that ironclad law.

In other words, if we make getting ahead contingent on some measure or other, people will shape what they do to increase their 'score'.

Citations? People already get together with their friends to cite each other's work at every opportunity to up the 'impact' factor, as well as citing their own continuously.

We act as if misbehaviour were an accidental correlate of competition that can be removed by some means or other. Regrettably, it is actually an integral part of any competition.

94. marka - June 16, 2010 at 07:51 pm

Hmm ... Definitely provocative article. Bravo for bringing it up.

There is now an information overload - the glut that the article addresses.

As a fair amount of research on the brain and how it works suggests, one theme is pertinent: one of the brain's most important functions is filtering information -- separating signal from noise, so to speak. It is impossible for the human brain to absorb all the information available -- and why should we want it to?

So far as I can tell, there is more & more noise, making it harder to detect meaningful signal. Again, plenty of research on this.

In academics, especially the sciences, we have always had a quality control mechanism in place: peer review. It hasn't always worked well, and as the article notes, it is falling apart out of sheer volume. Without some quality control, more & more noise ... .

The danger is not just for academics, but for the general public as well. Every day, someone is picking up a scintillating sound bite and publishing it in the general media. And people base real-world decision-making on what often turns out to be noise. In medicine this can literally be deadly. Many meta-analyses note that large amounts of 'academic' research are of 'low quality,' on what should have been obvious statistical grounds: e.g., small sample size, lack of randomness, lack of double-blinding, etc.

In short, we need, and have had, various quality-control processes in place, peer review being prominent. Sheer volume is threatening to break quality evaluation down. While some of you appear to welcome sifting thru the chaff to find a few kernels of wheat, many of the rest of us would prefer that the winnowing take place sooner rather than later. Anyone can 'publish' on the web: academic journals ought to have much higher standards. Those who want to wade thru the swamp to find hidden treasure can do so. But the general media should be able to rely upon some higher standard of quality before they pick research up and feed it to the general public.

In medicine, at least, this is quite literally life-threatening.
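
(To make the earlier point about small sample sizes concrete, here is a rough sketch of the standard normal-approximation power calculation for a two-sample comparison. The effect size, significance level, and group sizes below are illustrative assumptions, not figures drawn from any study mentioned in this thread.)

from math import sqrt, erf

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(n_per_group, d=0.5, z_crit=1.96):
    # Normal-approximation power of a two-sample test: effect size d
    # (Cohen's d), two-sided alpha = 0.05 (so z_crit = 1.96).
    return normal_cdf(d * sqrt(n_per_group / 2.0) - z_crit)

for n in (10, 20, 64, 100):
    print(f"n = {n:3d} per group -> power ~ {approx_power(n):.2f}")
# With 20 subjects per group there is only about a one-in-three chance of
# detecting a medium-sized effect, so many small "positive" studies are noise.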

95. dboyles - June 16, 2010 at 08:01 pm

The title of this article, "We Must Stop the Avalanche of Low-Quality Research," relates to the publication mill in the sciences. But doesn't that mill to a large extent mirror the projects funded in the sciences, that is, the decisions made for the most part at the federal level (NSF, DOD, DOE, NIH, et al.) as to what projects to fund in the first place? If so, perhaps the tacit implication of the article is that too much junk is being funded? Or that funding agencies secure their distinct territories by themselves counting publication as one measure of the productivity of their programs? Or that the plethora of new "niche" journal publishers fuels the problem the authors describe?

Not each of the sciences is created equal to the others, and the sheer resources required, which vary from one scientific discipline to another, may make the rate of publication production highly uneven across disciplines (cf. theoretical physics requiring multimillion dollar facilities for a single experiment versus counting prairie dog populations--which of these false dichotomies will generate the most junk publications?). Those outside that playing field are often oblivious to these nuances that control the rate of production within something as catch-all as "the sciences."

On another note, my scientific research requires regular use of the chemical literature dating back to the late 1800s. When it comes to making and breaking chemical bonds, the "easiest" and cheapest ways were figured out long ago, when chemistry was young. That literature doesn't go out of style in the chemical sciences just because it is "old."

I am in wholehearted agreement with the authors that there is plenty of "junk" being published--for whatever reason. But the questions remain: (1) what is 'quality' research, and (2) what 'knowledge' is NOT worth pursuing?

96. physicsprof - June 16, 2010 at 09:39 pm

"theoretical physics requiring multimillion dollar facilities for a single experiment..."

DBoyles, let's get terminology right: "theoretical physics" does not perform experiments and does not exploit multimillion dollar facilities (unless those are supercomputer facilities). Theoretical physicists work with pencils and computers. What you meant was "fundamental physics", which consists of experimental and theoretical physics, but one should not confuse the two parts.

97. wepstein - June 16, 2010 at 10:41 pm

For Bauerlein;

There is a considerable literature on publication bias that your article should have reviewed. One example, by Companario, suggests a considerable problem. Still and all, it is hard to deny the proliferation of trivia and nonsense that passes for scholarship. The difficulty is figuring out what to do about it without infringing on free publication. In consideration of all the perils that societies face, the publication overload may not be such a big deal.

98. oldassocprof - June 16, 2010 at 10:47 pm

This article is the usual narcissistic arm-waving. If these guys have such quality, how come no-one ever heard of them? I wonder what late-night pis*siness threw this unlikely bunch together.

99. wepstein - June 16, 2010 at 10:51 pm

Oops. The Companario article is at http://www3.interscience.wiley.com/cgi-bin/fulltext/57739/PDFSTART

100. trendisnotdestiny - June 16, 2010 at 11:18 pm

Anyone who reads 100 threads needs a sense of humor; so here is my contribution to high and low quality research:

I see high quality research as similar to the fertilization process. The odds (1 in 10 billion) are not in your favor, the winner of the zona pellucida race is just happy to get home, and there are billions of tiny little losses along the way. Once fertilization occurs, the blastocyst is submitted to a receptive journal (after a concerted call for scholarly insemination) on its path to revise-and-resubmit implantation, where the bundle is shaped and divides via writers' meiosis.

Next, the review process resembles gestation with all of the risks and changes taking place to textual body. The publication's birth culminates with the handing out of celebratory cigars, burping of family members and the showcasing of the exhausted scholar and their newborn article... There are many dangers during this time of review (reviewer alcohol syndrome, incoherent voluntary feedback (IVF), undergraduate driven high blood pressure and textual paternity struggles plus caesarian editing), but after nine months of feedback, a 1st author to an academically sanctioned healthy text is born... YOU can almost hear the academic say: "this is some high quality research, just look at my boy in the Journal of Applied Contemporary and Symbolic Texts for Clinicians & Interactive Behaviors Archived!"

You may ask: what about low-quality research? Well, I take the position that low quality scholarship is the same as high quality "research" without the actual fertilization. In other words, it feels pretty similar to scholarship but without the time and financial commitments. Also, the goals may be different, as there may be multiple textual submissions or just one monogamous journal. There is an avalanche of low quality research (what a lay audience would call mental masturbation) in and outside of academia. The research may be satisfying or filled with submission regret or writer's remorse... Certain publishing promiscuities may lead to journal editors issuing submission responses of rejection or concerns of STD's (Stop Typing Demands) for the persistent scholar in search of "a good researching time".

Oh! The impact factor is the newly published article's filled diaper communicating amount, prestige and memorable references of academic residue where smell and citation collide;

For all of you scholars worried about the number of articles, authors, fertilized and pregnant ideas, as well as the quality of the academic's submission healthcare, you might consider that as the tenure system continues down an aborted path not unlike world population (7.5 billion now and 9 billion soon), managing what constitutes high quality and low quality research is a loaded and futile exercise in embittered abstinence... Let's celebrate the quality that resonates and include the quality of research that differs from our experience.... After all, it's only research! Wear protection!

101. oldassocprof - June 17, 2010 at 12:13 am

Trendisnotdestiny: HOF'd

102. vkzoe - June 17, 2010 at 01:54 am

The goals may be different as there may be multiple textual submissions or just one monogamous journal.

103. crankycat - June 17, 2010 at 06:47 am

The central questions are then:
What is the definition of "useless"? And,
Who gets to decide?
Science is a construct of many small parts - no knowledge derived from properly constructed research is "useless". That's why I find this a non-starter, though it addresses some important questions.
I do agree that "publish or perish" pressure has led to a lot of LPUs (least publishable units) out there - investigators, especially those with limited personnel, don't wait until they have a full story to publish, because if you go a couple of years between publications it's seen as "nonproductive".

104. kiosk - June 17, 2010 at 10:12 am

I have read a lot of garbage, so tend to agree that there is a problem. Whether it's growing or shrinking, I have no idea, and I am not enthralled by the proposed solutions.

One thing that has always bothered me is that citations all get the same credit - some are made in passing and some are the foundation on which a paper rests. If authors could choose a few "foundational citations" I think it would be easier to identify quality, and more importantly to me, easier to track the key work in an area of research using ISI.
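
(A rough sketch of what the "foundational citations" idea above might look like in practice: each incoming citation carries a tag, and papers are scored by weighted rather than raw counts. The tag names, weights, and paper identifiers below are made up purely for illustration; nothing here is an existing system.)

from collections import defaultdict

# Hypothetical weights: one foundational citation counts for more than
# several passing mentions; a citation made only to refute counts against.
WEIGHTS = {"foundational": 3.0, "substantive": 1.0,
           "perfunctory": 0.2, "negative": -1.0}

# (cited paper, tag) pairs, as a citation-tagging interface might record them
citations = [
    ("paper_A", "foundational"), ("paper_A", "perfunctory"),
    ("paper_B", "perfunctory"), ("paper_B", "perfunctory"),
    ("paper_B", "perfunctory"), ("paper_C", "negative"),
]

scores = defaultdict(float)
for paper, tag in citations:
    scores[paper] += WEIGHTS[tag]

for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 1))
# paper_A  3.2  (one foundational citation plus a passing mention)
# paper_B  0.6  (three passing mentions)
# paper_C -1.0  (cited only to be disputed)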

105. markbauerlein - June 17, 2010 at 11:09 am

Yes, kiosk, we need to have some rating system for citations, with ratings that range from negative to positive and from substantive to perfunctory.

And, danfrog, I'm not sure that journal editors would go for having grad students do peer review.

106. inama - June 17, 2010 at 11:44 am

As an African American historian, I am familiar with Professor Bauerlein's work. He is a very good scholar and I respect his opinion. However, in my field, some of the best articles are not published in the Journal of American History or the Journal of African American History, but rather in very specialized journals. For example, some of the best articles about slavery in Georgia are published in the Georgia Historical Quarterly. In my specialization (Black people in Canada), some of the most important articles are not published in the Canadian Historical Review, but rather Acadiensis or the Journal of the Royal Nova Scotia Historical Society. Thank you all in advance for considering my comments.

107. physicsprof - June 17, 2010 at 12:03 pm

I am often for outsiders taking a critical look at something that experts might have long since adapted to and even stopped noticing (or evolved with the changes and never paid critical attention to them). But the above article only shows that the lack of knowledge does not necessarily guarantee success.

So in the sciences we have too many papers of questionable quality. Does this fact hinder scientific progress? The answer is a simple "no":

1) There has never been a good work overlooked because of the flood of papers, rather than because of the dominating school of thought, existing priorities, or simply fashion, i.e. "valid" scientific trends. If it is argued otherwise I would like to hear specific examples.
2) There has never been an instance where a large number of second-rate papers formed a dominating school of thought or defined scientific priorities (quite the contrary, "me too" scientists always follow trends). If it is argued otherwise I would like to hear specific examples.
3) Science is not English or contemporary history; the mere existence of a vast literature (with little original content) does not overburden the researcher, for the simple reason that in science one does not have to read everything. The problem of too many papers does not exist now, thanks to efficient search engines.

108. markbauerlein - June 17, 2010 at 02:06 pm

Second-rate work does "hinder scientific progress" because of all the time and labor it takes to manage it.

109. oldassocprof - June 17, 2010 at 02:13 pm

107, in my field, Sociology, it can be argued that much (not all) of social constructionism or feminist research is essentially Lysenkoistic (subverts reality in the interests of ideology). There's a groundswell of anti-intellectualism in these areas that (some years at least) can block out more materialist points of view (sociobiology, views that take economic constraints into consideration, etc.). This is so bad that my field can almost be called the "Church of Sociology" in the hands of many researchers and teachers. I won't even get into postmodernism, which is almost self-mocking in the hands of many. There are many good things that can be salvaged from symbolic interactionism (originally a materialist theory, BTW) or even postmodernism (yes, items in culture are imitated blindly unless something stops this imitation), but many writings are pretty irresponsible in my mind.

Still, I don't believe in saying that there's too much research. I just say that unfounded points of view can persist for a long time, pushing more valid things to the margins. So might citation indices.

110. sahara - June 17, 2010 at 02:32 pm

Wow, 109 comments so far, and most (not all) of you are so wordy and pompous that you can't make your point in one well-crafted paragraph...

By the way, 57, Mr. Bauerlein has the same right to put his writings here that you do.

111. physicsprof - June 17, 2010 at 02:42 pm

#108, markbauerlein, can we descend from the realms of beliefs to the grounds of evidence and examples?

112. davidstodolsky - June 17, 2010 at 02:49 pm

Internet technology offers a solution to the problems:

Extended abstract (5 min. read):

Stodolsky, D. S. (2002). Computer-network based democracy: Scientific communication as a basis for governance. Proceedings of the 3rd International Workshop on Knowledge Management in e-Government, 7, 127-137.

http://dss.secureid.org/stories/storyReader$14


Comprehensive:

Stodolsky, D. S. (1995). Consensus Journals: Invitational journals based upon peer review. The Information Society, 11(4).

http://dss.secureid.org/stories/storyReader$19

113. dormanp - June 17, 2010 at 02:49 pm

I think we need to step back and consider the larger picture. Research communities were once much smaller, and personal networks established rankings. Publication was a necessary form of communication, but authors were not generally evaluated by their communities at such a distance. Everyone knew everyone else within a couple of degrees of separation or so.

Over time communities got larger and larger. This was due both to the increase in the number of researchers and the breaking down of barriers between national research communities in some fields. We then entered a period of parallel systems, which is still the case. At the center of each field there is a personal network, as before, but on the periphery many researchers maintain membership primarily through publication. An example of this dual system is the promotion process in which committees look at both publications and letters from the candidate's relevant network.

This dual system is under stress. First, it is true that the size of the community relative to the feasible size of personal networks has become too great in nearly every field for the second to serve a regulatory role over the first. The second factor is that political and administrative decision-makers -- education departments and ministries, provosts, etc. -- have come to believe that personal networks are arbitrary, self-reproducing and no longer reward intellectual value. They want quantitative metrics for ranking scholars and departments, and publication data provide this alternative.

Bauerlein et al. would like to replace the sheer number of publications (which has generated congestion externalities) with citation metrics, among other proposals. The deeper question is whether any quantitative algorithm can successfully substitute for the sort of comprehensive evaluation that personal networks used to perform.

I have thoughts about how to cope with this situation, but I've gone on for too long already, so I'll hang onto them for now. I don't romanticize the good old days of the good old boys, but I also think that research inquiry cannot be assessed only through inflexible, algorithmic processes. For instance, the virtue of the "three best paper" rule is not that the metrics for three papers provide a better ordering than those for a larger number, but that a smaller number of papers can be assessed qualitatively, in context, by a few evaluators.

114. akafka - June 17, 2010 at 03:04 pm

(Posted on behalf of James Peterson)

Mark,

I read your commentary "We Must Stop the Avalanche of Low-Quality Research" this morning. I'm disappointed that there is no mention whatsoever of the oversupply of PhDs that was and is seminal in creating the avalanche. In that sense, your comment regarding "Deans who can't read but can count" resonates. The need for cheap TAs drove these decision makers to open the throttle up to more graduate students. The result is that we have a highly educated, underemployed or unemployed community of trained scientists. Hey, at least I'm not standing behind the likes of Rush Limbaugh outside your office window carrying a torch and a pitchfork. People who write from power seldom have what it takes to see the whole picture, so as a powerless postdoc on his way to being a better dad I thought I'd give you a bit of sobering perspective. Paradoxically, the lack of empowerment on both sides of tenure is what causes low quality, whatever the size of the paper mountain. Currently postdocs often have no health benefits, or benefits inferior to those of their faculty colleagues. They often have no formal performance review process, a protection that even colleagues in physical plant or clerical jobs enjoy.

I once saw an interview of one of the pioneers of the role of angiogenesis in cancer. I believe it was Judah Folkman. He humbly admitted that over 90 percent of what we try doesn’t work like we anticipate. OK. If that’s true, what does that mean for nine out of 10 of our graduate and postdoc researchers?

Here’s what it means:

1. Long hours. Those with a glimmer of hope typically work 80-hour weeks. So you don't have a life. You defer your humanity. Thus enters the stereotype that our entertainment industry does such a great job of lampooning: dysfunctional workaholics with no relationship skills who haven't a clue how to communicate with, interact with, and manage people. The systemic result is a powerful class of people who select for people whose "momma don't dance and daddy don't rock and roll." Let's see now. Who invented the Post-it?

2. These days, as a grad student or postdoc you don't compete for promotions. You compete for the sweetheart projects. The result is way too many "Smithers" and a near-complete lack of freedom to express all but the most naïve (and thus malleable) questions. The mature questions that provoke meaningful free argument are sometimes crushed. Not in all labs. I've worked for fine gentlemen both good and great. I've also worked with people who are basically afraid of discourse. Frankly, I agree with much of what you say, but the "academe" system you want to conserve leaves the door wide open to petty, tyrannical, primal prejudices.

3. Have you heard the pin drop in some of these conferences when it comes time for question and answer? That's because if you ask the wrong questions and piss the wrong person off (say, the guy or gal who has a lot of business with grant committees), you are done. You have no voice. Even more terrifying is asking the naïve question (see number 2, above). Yet isn't this why we are supposed to be in postdoctoral "training"?

4. The dissertation "committee" is a myth. Your adviser basically has absolute power over your future. His or her review on paper may be one thing; what it's like on the phone, or while eating pizza with his or her peers during grant review in Bethesda, may be quite another. I believe I have a clue.

Thanks for your article, but you’re missing a HUGE piece of the picture. I understand your conservatism but the definition of insanity is to try the same thing over and over again when history has shown just how well it works in terms of the SUSTAINABLE health of society. I always ask conservatives “What is it about the past you really want to conserve?” I don’t think our past worked very well. I hope the future will work better.

Sincerely,
James J. Peterson, Ph.D.

University of Florida

Florida Ctr. for Renewable Chemicals and Fuels

 

115. dsamuels - June 17, 2010 at 03:56 pm

Professor Bauerlein et al:

Unfortunately you repeat a falsehood that has become folk wisdom: the notion that the vast majority of published papers go completely uncited. The original article in Science that made this claim, to which I believe you refer, included literally everything in the "denominator": book reviews, obituaries, errata, letters to the editor, editorials, and other marginalia. So, no surprise that the picture looks so bleak. An ISI representative tried to clear up this error, but apparently to no avail; see David Pendlebury, "Science, Citation, and Funding," Science 251: 1410-1411. When the ISI calculates impact scores, by contrast, it includes only review essays and research articles in the denominator. Counting only those items does not, of course, raise the average number of citations per article by dozens - but still, repeating this error does no one a service. (I like Jacso's research generally but don't know what he's counting in the article you cite.)
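A toy calculation (all numbers hypothetical) shows how much the composition of the denominator matters:

    # Hypothetical counts, purely for illustration: the "uncited" share changes
    # depending on whether marginalia are included in the denominator.
    research_articles         = 600
    uncited_research_articles = 120
    marginalia                = 400   # book reviews, obituaries, errata, letters (rarely cited)
    uncited_marginalia        = 380

    articles_only = uncited_research_articles / research_articles
    everything    = (uncited_research_articles + uncited_marginalia) / (research_articles + marginalia)
    print(round(articles_only, 2), round(everything, 2))   # 0.2 vs. 0.5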

There's another problem with the logic of your piece. In my discipline, political science, actually about 20-25% of articles in the ISI list for our discipline go uncited by other articles after 5 years. Is this bad? Maybe, maybe not - because in political science most "high impact" research comes out of books. Moreover, research I've done shows that ISI-indexed articles themselves receive many more citations in books than they do in other articles - and that when all is said and done, less than 5% of all articles receive zero citations in books OR articles after five years. Because the correlation between the number of cites in articles and the number of cites in books is weak, impact factors discriminate against journals that publish articles that are cited more often in books than in other articles (in my discipline this happens with international relations journals and area-studies journals, for example). Impact factors are worse measures of quality than nearly everyone thinks, particularly in disciplines where books offer important contributions to the production of knowledge.



116. oldassocprof - June 17, 2010 at 05:19 pm

Right, 115. There's a probable meta-hard-positivistic undergirding to Bauerlein et al. "Research" and "publication" are probably linked to some sort of additive corpus, where knowledge increases incrementally, untroubled by fads (yes) or paradigm issues. Gee, if only we could get rid of the "bad" stuff, then real science could shine the "truth" through. I find it extremely troubling that an English professor is the lead author on this.

117. oldassocprof - June 17, 2010 at 05:35 pm

I will say this, though: I think Mark Bauerlein's video about his book, The Dumbest Generation, was absolutely right on! I'm going to buy this book. He's talking about my students!

118. physicsprof - June 17, 2010 at 05:45 pm

Sorry, OldAssocProf, can't resist... but is the book about students or about their professors?

119. swt2010 - June 17, 2010 at 05:54 pm

As a co-author of the "avalanche" paper, I'm gratified by the responses, some of which have been quite constructive. Perhaps you will be interested to know that we have a much longer and better documented paper on the same subject appearing in the fall issue of ACADEMIC QUESTIONS. In fact, this longer paper addresses many of the points brought up in the commentary. We had hoped that this short, "popular" version in the CHRONICLE would give us feedback useful for the long paper, but an unfortunate delay in its acceptance and appearance in the CHRONICLE does not give us that luxury. Speaking for my co-authors, I can assure you that we welcome your constructive criticism. Our suggestions for amelioration are just that: suggestions. If anyone out there has better ideas for solving this severe problem, we all want to know about them.

Collegially, Stan Trimble

120. bfrank1 - June 17, 2010 at 05:59 pm

The comments themselves seem to be the proof of the proposition. Enough already!

121. petersonjimmyjoe - June 17, 2010 at 06:02 pm

Dear Mark,
I have a very different perspective. See next post.

122. markbauerlein - June 17, 2010 at 06:13 pm

To dsamuels, we only say that "the amount of redundant, inconsequential, and outright poor research has swelled in recent decades."

And I'm not the "lead author" of the piece. The names are in alphabetical order.

123. markbauerlein - June 17, 2010 at 06:16 pm

I hope you like the book, though, oldassocprof. And to physicsprof: there is a chapter entitled "The Betrayal of the Mentors" that puts the blame on the elders.

124. petersonjimmyjoe - June 17, 2010 at 07:12 pm

Dear Mark,
I read your commentary this morning. I'm disappointed that there is no mention whatsoever of the oversupply of PhDs that was and is seminal in creating this avalanche. In one sense, your comment "Deans who can't read but can count" resonates. The need for cheap TAs drove these decision makers to open the throttle up to more graduate students. The result is a disgraceful pool of highly educated, underemployed or unemployed perpetual trainees. Hey, at least we're not standing behind the likes of Rush Limbaugh outside your office window, pitchfork and torch in hand. Most of you write from power. It's disappointing how few of you understand the whole picture. As a powerless postdoc on his way to being a better dad, I would like to give you some sobering perspective. The lack of empowerment on both the tenured and "trainee" sides of the paper mountain is what causes poor-quality research. An extremely centralized power structure places the brain squarely on one side, and the hands on the other. Currently, postdocs often have either no health, vacation, and retirement benefits at all, or token benefits that are inferior to those of faculty. We most often have no formal performance review process, a protection even our colleagues in physical plant and clerical staff enjoy.
I once saw a PBS interview of one of the pioneers of the role of angiogenesis in cancer. My best recollection is that it was Judah Folkman. He humbly admitted that over 90% of what we try doesn't work out like we anticipated. OK. If that's true, WHAT DOES THAT MEAN FOR 9 OUT OF 10 GRAD STUDENTS OR POSTDOCS?
Here's what it means:
1. Long hours. Those with a glimmer of hope will work 80 hours a week or more. Let's be clear that this ceased to be me a long time ago. After my first postdoc I cranked out something over 1,500 job applications. I'll never forget what one of two interviewers said to me: "I understand taxes are really high here and it's hard to make a living on this salary, but frankly, postdocs are a dime a dozen." So, if you have the right pedigree and the right project, you don't have a life for a while. You defer your humanity. Enter the stereotype that our entertainment industry does such a great job of lampooning: dysfunctional workaholics with dusty relationship skills who have no clue how to communicate with, interact with, and manage people. The systemic result is a powerful caste of people who select for scientists just like them - people whose "mammas don't dance" and whose "daddies don't rock and roll." Let's see. Who was it who invented the Post-it?
2. These days as a postdoc or grad student you don't compete for promotions. You compete for sweetheart projects. The result is way too many "Smithers" and a near-complete lack of freedom to express all but the most naive (thus, malleable) questions. The mature questions that provoke meaningful free argument are sometimes crushed. I've worked for fine people both great and good. I've also worked for people whose actions consistently demonstrate that they find discourse inconvenient and uncomfortable. Mark, I agree with much of what you say about chicken *&^% pubs that satisfy bean counters, but a regression to a more centralized "academe" frankly leaves the garage door open to petty, tyrannical, and sometimes primal prejudice. I've heard powerful people write off an entire competing institution. That is just nuts.
3. Have you heard a pin drop during the question and answer session after some of these presentations at professional meetings? That's because if you piss the wrong scientist off (say, the person who has a lot of business with grant committees) you are simply hosed. Even the tenured bulls can be pulled down by that nonsense. I've watched it happen. Also somewhat terrifying is asking the naive question after one of these "educational" presentations. (See number 2, above). Yet, isn't this tacitly expected of most of the audience who tend to be grad students and postdocs in "training"?
4. A dissertation "committee" is a myth. Your advisor basically has absolute power over your future. His or her review on paper may be one thing. On the phone, or while having pizza during a break from reviewing grants in Bethesda, it might be something else again.
5. I worked for the airline industry and the engineering industry. Those industries treat people fairly. From a purely professional business point of view, here's what a disgrace my CENTRAL QUESTION meant for me: I got to look the Dean of Nursing square in the eye while she told me certain courses I took to hedge my bets as an undergraduate have expired. Funny thing. I don't remember there ever being an expiration date associated with my bachelor's degree. I explained that I scored near the top of my class in our physiology and anatomy courses here (there were well over 400 students per section), if a rigorous curriculum was the issue. Nonetheless, I'll have to pony up more than $300 for a basic microbiology course even though my career has routinely required advanced concepts in microbiology and microscopy. I'll have to retake Human Health and Development after I've performed medical research for over 15 years of my career. (Insert Beatles musical sound bite: see how they "count"?)
6. And while I'm contemplating how incompetent I have been, I can at least go to the beach and take a well-deserved chill pill... ah... resting assured, as the tar balls wash over my toes, that the "stellar" and "successful candidates" have everything under control, and without my "unintelligible" data or my "useless information".

Thanks for your article, but you're missing a HUGE piece of the picture. I understand your conservatism, but the definition of insanity is to try the same thing again and again when history has shown how abysmally it works in terms of a SUSTAINABLE society. I always ask conservatives, "What exactly is it about our past you want to conserve?" I don't think our past worked very well. I hope the future will work better.
Sincerely,
James J. Peterson, PhD
University of Florida Center for Renewable Fuels and Chemicals

125. theboatashore - June 18, 2010 at 03:24 am

Thanks for the article. While I would tend to disagree with some of the points you make, especially the emphasis you place on impact factors, I appreciate your contribution to what I think is an important conversation. And with more than 100 comments, it's definitely an interesting conversation :)

Without re-hashing what most people have been saying, my own preference would be to go the other route entirely. Move beyond the elitism of academic publication and provide platforms for anyone to publish anything, e.g., academic blogging. I believe that over time, better-quality work will "rise" and poor-quality work will fade off the radar.

Anyway, thanks again for an interesting and thought provoking read. Don't let the haters keep you down :)

126. gahnett - June 18, 2010 at 12:42 pm

I agree with theboatashore. Let there be more and more until no one has time to read everything. That way, we'll be forced to find papers through other selection criteria, which is what's happening now anyway. Sure, there's waste, but if we were really conscientious about this problem, we'd do something about the population problem... like limiting baby-making - perish the thought.

127. sim34 - June 18, 2010 at 12:53 pm

A Very Communist Idea!!! Only communists were saying that too much knowledge and too many intellectuals are a bad idea!

128. psiwavefunction - June 18, 2010 at 10:40 pm

As someone from a field where the highest impact factor is less than 2, I can only say a big passionate FUCK YOU! to the authors of this article. If you claim that highly-cited, random, often-questionable clinical observations are better "quality" than long hard work in underfunded and underpublicised fields, you should fuck right off and never write another word about academic matters, as you are just a source of intellectual pollution. Our work does not get cited as much simply because there are several orders of magnitude fewer people working in the field, and you cannot in any sane frame of mind claim that the work is therefore unworthy of publication.

That said, there indeed is a LOT of shitty research out there that mostly adds confusion and takes up space. But a shitload of this bad research is in none other than high IF journals! Nature, Science and PNAS* are as full of crap as The Nowherestan Journal of Obscurology, but their crap is obfuscated by hype. Hype does not suddenly make the research higher quality. In fact, more often than not, hype contributes to errors getting overlooked even more than usual, and enables crappier-than-usual work to get through because it sounds cool.

Note also that it may well be that older (and by now more obscure/less cited) research is higher quality than modern stuff, as there was more time and money (purchasing power, that is), and less pressure to churn out crap at the rate of X papers a year. Also, people back then actually had the time, desire and capability to understand the equipment they used, unlike now, where nearly everything is done through one Expensive Shiny Black Box or another. The older research is still very useful, and is not IN ANY WAY a hindrance to modern research. In fact, a great hindrance to modern research is the failure to read the less-cited works. Only someone completely torn away from reality could argue otherwise.

I'd argue, admittedly from a quite naive and inexperienced position, that perhaps it may do good to avoid overfunding certain things. At the moment, it seriously seems to me that LESS funding in fields like biomed/cancer research, climate research, human genetics, etc., may actually stimulate better work, as it will discourage those who are just publishing to get in on the feast. That way, some underfunded-yet-highly-relevant-and-important fields can get the funding they deserve (also in modest amounts), and there'd be less of a free-for-all money-grab situation in the way. Counterintuitively, overinvestment actually causes quality problems even out in industry**, where the capitalist system is much more efficient and streamlined than the clusterfuck we have in academia.

But then again, I am very glad I only need to learn how to survive in this mess, and not actually have to fix it!

*Remember the Caterpillar = Insect + Onychophoran paper?
** There are cases where raw products take over the market due to overinvestment leading to them being incredibly cheap relative to the alternatives. Perhaps overfunded research may well work in a similar way, but with the addition of the usual academic political mess.

"Libraries would drop journals that don't register impact"
To someone whose field requires frequent use of low impact journals, that would be disastrous. Vast swaths of knowledge are already getting lost between the cracks of spotty digitisation - cutting access to non-glamorous journals would be a MASSIVE waste of hard-earned knowledge.

I think an important step would be to teach basic lit research skills and hang up the following wonderful quote in every research lab out there:
"Two months in the lab can save two hours in the library." - source unknown

In some 'slower' fields (like cell biol), the time wasted can be even longer...

-Psi-
skepticwonder.fieldofscience.com

129. gahnett - June 19, 2010 at 12:12 am

I love the Chronicle's editors for allowing Psi's diatribe.

sim34: What's a very communist idea?

Don't we currently have too many intellectuals and too much knowledge?
The problem is not having enough good intellectuals and efficiently managed knowledge.

130. markbauerlein - June 19, 2010 at 01:32 pm

Of course, in highly-specialized fields with small numbers of inquirers, the citation factor would be adjusted accordingly. That said, I invite psiwavefunction to have the courage to provide his or her real name if he or she wants to talk this way.

131. nnnwww - June 19, 2010 at 03:13 pm

The following article may be of interest: Cole, J. R., & Cole, S. (1972, October 27). The Ortega hypothesis. Science, 178, 368-375. The authors argue, based on citation analysis of physics articles, "that only a few scientists contribute to scientific progress" (p. 368) and ask "whether it is possible that the number of scientists could be reduced without affecting the rate of advance" (p. 372).

132. runwithscissors - June 20, 2010 at 09:16 am

With regard to the question of the cream rising to the top, I don't really understand the problem with sifting through the information 'avalanche'.

In the space of about 20 minutes I can find a lot/most/all of the top-cited articles for any keyword I choose. Another 25 minutes reading the abstracts. Perhaps 2-3 hours reading through and annotating the articles I actually need to read and archiving them digitally in a relevant folder. Where, therefore, is the avalanche problem? In my Research Methods 101 course they taught me how to do this. Perhaps the problem is not the volume of information but researchers' poor secondary research skills in finding what they need.

133. runwithscissors - June 20, 2010 at 09:32 am

There are also some practical suggestions for alleviating the problems identified in the article:

1. Give more power to managing editors to reject manuscripts before the review stage. Not everything is worthy of peer review, and a well-versed managing editor can triage manuscripts and stop wasting everybody's time.

2. Journals can grade their articles - I've seen this in a handful of journals, where papers earmarked as excellent appear first, like front-page headlines, followed by regular articles, discussion articles and forum pieces. By differentiating the excellent from the pedestrian based upon editorial review, this helps the cream rise to the top.

134. markbauerlein - June 20, 2010 at 10:19 am

Good point about "secondary research skills," runwithscissors. I think, though, that you underestimate the enormous labor and money that go into producing and publishing this research. Peer review, among other things, is hugely time-consuming.

135. eukaryote - June 20, 2010 at 05:25 pm

This is an argument by the mainstream in support of the mainstream. Even if citation numbers were a mark of validity, the argument presumes a direction or utility to scientific research that depends on popularity. As though we were voting on reality or goodness. But science does not create reality. It describes reality. An uncited physics paper that described a route to a destructive weapon would be of far more consequence to life on earth than a much-cited geology paper that led to the current oil spill. We have lost sight of what science does for us in the race to patent, save lives and create technology --> science creates a description of nature, nothing more. It allows us to see ourselves and our place in nature. The more science, the better, and even if it goes uncited, it will not go unread.

136. mikerw - June 21, 2010 at 09:57 am

"First, limit the number of papers to the best three, four, or five that a job or promotion candidate can submit."
Not a flawless idea, but not unreasonable and possibly helpful.


"Second, make more use of citation and journal "impact factors," from Thomson ISI. "

No, no, no. Impact factors, as Thomson ISI warns, are for evaluating journals, not authors. Nature does not have a high impact factor because everything in it is great. It has a high impact factor because the top handful of articles in it receive a huge number of citations. It's a poor proxy for the quality of an individual article.
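To see why, consider a toy distribution (numbers purely hypothetical): a few blockbuster papers can pull the journal-wide average far above what a typical article in the same journal receives.

    # Hypothetical citation counts for one journal year: 3 blockbusters, 70 ordinary papers.
    citations = [250, 180, 90] + [12, 8, 5, 3, 2, 1, 0] * 10
    mean_cites = sum(citations) / len(citations)            # roughly what an impact factor reflects
    median_cites = sorted(citations)[len(citations) // 2]   # roughly what a typical paper gets
    print(round(mean_cites, 1), median_cites)                # ~11.4 vs. 3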

"Third, change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal's Web site. "
A terrible idea. Nature and Science articles are often close to worthless. They sometimes act almost as press releases, with the meat of the research forced into some other journal. Yes, some of them are highly cited because they announce breakthroughs, but that doesn't mean they really communicate those breakthroughs - that's left for the follow-up article.

137. dblobaum - June 21, 2010 at 02:36 pm

The print legacy bundle that is a journal is the main problem. Libraries ought to be able to buy only the articles that their patrons actually want to read. Then there would be an economic cost to the publisher for including a poor-quality paper in its journal.

138. mikerw - June 21, 2010 at 04:41 pm

A bit of amusement can be had by checking the impact factors of some of the journals some of the article's authors have been publishing in.

One of the authors' publications for 2009, for example, appeared in journals with the following impact factors: 0.99, 0.63, 0.60, and 1.50. They may be very good articles - I in no way mean to disparage that author's research - but it's amusing to see someone signing the above article while publishing in such journals.

139. xmalvolio - June 21, 2010 at 05:10 pm

Bauerlein et al. offer a well-intended patch for academic publishing's spurious functioning. (Even _Nature_ and _Science_ publish ditzy papers; when confronted, the former says "contact the authors".) The problem they address can only be a symptom of a deeper problem, in my view: the accelerating balkanization of the disciplines and sub-disciplines, and so on ad infinitum. Biology is the poster child.

There's lots more money to be made in biology and life sciences than in English; look at the companies advertising in the above mags and related (pharma and reagent suppliers). Attention is paid to those offering findings that can be rapidly commercialized. Most of the rationale and advert/public relations pulp focuses on these as prospective cures for diseases. But they can't be.

Consider that, contrary to perceptions, we do not know how biological systems function, e.g., exactly how, in animals including humans, is organism-wide development controlled? Moreover, we don't have a definition of life, e.g., what is life? Glaringly, there is no theoretical basis for describing life that scales across life-science disciplines, e.g., molec biol, genetics, cell sci, dev, ecol, evo, psych and soc. Worse, sub-disciplines of biology are largely ignorant of each other's jargon, methods, and journals.

Lastly, how can an anachronistic biologist referee a manuscript aiming to address these issues? Send it to a history of sci journal or a philosophy journal and, were it by some serendipitous fall of fate published, expect it to be read by toilers in biology?

Yes, and pigs do have wings (video at 11:00)!

There is more at work here than the sliver of skin (philosophy) you have lifted (apologies to Shakespeare).

140. darcy310 - June 21, 2010 at 06:34 pm

I am a scientist in the field of cell biology. Primo, citation indices are a manufactured way for the beancounters (primarily) to keep tabs on how many people are citing papers. They are a useful measure for researchers as well; but, as a scientist, I have to agree with other comments that no one can determine in advance what is important and what isn't. Of course there's junk out there, in every field, driven by the publish-or-perish mentality. BUT only time, and MORE research, will separate the wheat from the chaff. (If you think you or anyone else can, hubris is rearing its ugly head.) I also agree that most deans and administrators count but don't read - and are a huge source of the problem.

Let's not forget about the fad of "science du jour". Some fields are hot, some are not. Today's fashion will many times be tomorrow's forgotten child. Someone can be toiling away for ten years, alone in a field they are starting to build, and most people won't proceed in that direction because of the curse of the citation index. It's the lemming effect in reverse - and deleterious to how real science is done.

A couple of years ago, I was talking to a colleague I respect very much. He was saying how he'd had to drop the line of research he started in, because although he considers it important, he wasn't being cited, and the administration would have held that against him if he hadn't moved to a "higher impact" line of research. How sad. Opinion, and money, controlling scientific direction - nothing new, but not to our collective advantage. I agree with publishing a lot of it on the web, and letting it be sorted out by time.

141. geraldus - June 23, 2010 at 05:05 am

There's no writing genre more boring than a Ph.D. thesis. A small fraction of doctoral dissertations get imaginatively rewritten as books and stimulate thought among the general reading public. When useful knowledge filters down and promotes social enervation, the scholar has achieved something.

142. brembs - June 23, 2010 at 07:02 am

I never had to read past the title and the author affiliations. The authors are not scientists, or they would know that scientific discoveries are like orgasms: there are no bad ones.
Incompetent drivel from people who try to talk about something they don't understand. A clear example of the Dunning-Kruger effect.

143. vindolanda - June 23, 2010 at 10:35 am

There are too many colleges and universities for the available educable base. If half were closed, and those who would have attended were taught some usable skill that might get them a job, then the consequent 'slimming down' of the system would solve the problem of excess matter to judge.

144. mikerw - June 23, 2010 at 12:29 pm

I wanted to add to my comment. The authors say that the problem is too many low-quality papers, and their definition of a low-quality paper is one that doesn't get cited, or at least doesn't get cited quickly. Part of their proposed solution is to place more importance on journal impact factor in things like tenure decisions.

In the comments section, one of the authors (comment 87) appears to be suggesting that peer review is a good tool for judging importance. This doesn't match up with reality. The evidence shows that, although Science and Nature are good at attracting and accepting enough "important" (that is, highly cited) papers to give them high IFs, they also accept many "unimportant" (that is, rarely cited) papers. Every journal is the same way. The averages are different for different journals, but they all publish papers of widely varying "importance" (judged by the number and speed of citations).

This failure to identify "importance" doesn't mean peer review isn't working. Peer review's main purposes are to evaluate whether researchers' claims are reasonable given the data and whether researchers have gathered all the data they should before making such claims. Judging importance is low on the list of things peer review is meant to do.

145. myemail568 - June 23, 2010 at 02:25 pm

I was hoping to read an article that suggests we all do research for the wrong reasons... to publish worthless papers, to get tenure, keep our job, maintain a lab, pay the bills.... NOT SCIENTIFIC DISCOVERY. Clearly a new direction in the organization of federally funded research needs to emerge that focuses on novel discovery, applied results and useful solutions to problems.
~Former Low-Quality Researcher :o)

146. 22087840 - June 23, 2010 at 07:45 pm

Backtrack: Proxy Problems

147. jimcoxva - June 24, 2010 at 01:34 am

The authors make much of the fact that only a subset of published articles are subsequently cited. They see this as evidence that too many are published. I see it as a healthy winnowing process.

When people publish "inconsequential" research it may spare others the trouble of going down blind alleys and dead ends. So "inconsequential" research may serve the greater good, even if inconsequential papers are not cited.

I think that the quality issue sorts itself out over time. Thus, it seems to me that the authors are making much ado about a non-issue.

148. golfnut - June 24, 2010 at 02:52 am

I agree about the problem of numbers of low-quality papers. But the journal impact factor (JIF) is part of the problem, not the solution.

I understand that the JIF was developed to aid the identification of groups of authors not represented by journals in the Citation Index. They were identified by association with a journal in which authors tended to reference papers in that same journal (i.e., a clique). Hence the JIF is the ratio of citations to papers a journal published in the last two or three years to the total number of papers it published in the same period. Any relation to the quality of the journal is purely accidental. The JIF is most strongly correlated with discipline: sciences that employ frequent meta-analyses, especially the medical and biological sciences, tend to score highly; disciplines in which papers have a long life and few references, especially mathematics, score very low.
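For concreteness, here is a minimal sketch (hypothetical numbers only) of the standard two-year arithmetic as Thomson ISI applies it: citations received in a given year to items a journal published in the two preceding years, divided by the number of citable items it published in those years.

    # All figures are made up, purely to illustrate the ratio.
    citations_2009_to_2007_2008_items = 210   # citations received in 2009 to 2007-2008 items
    citable_items_2007 = 60
    citable_items_2008 = 65

    jif_2009 = citations_2009_to_2007_2008_items / (citable_items_2007 + citable_items_2008)
    print(round(jif_2009, 2))   # 1.68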

In my discipline, physics, we have had a proliferation of minimum-publishable-unit papers that reference as many papers as possible from the last few years in the same journal. This has the effect of increasing the JIF for the journal. Over the last few years the JIF has typically doubled, and the journals are full of crappy papers. The motivation for this behaviour is a personal assessment scheme that sums the JIFs for all papers published by a scientist.

We must discourage the use of the JIF for assessing quality of papers, and most certainly discourage the use of JIF for measuring the quality of science and scientists.

149. rambo - June 24, 2010 at 06:51 am

Good-quality research is at www.heritage.org, www.aei.org, www.cato.org, www.hoover.org.

For the left-wing liberals, at www.brookings.org, www.centerforamericanprogress.org, www.csis.org, www.ceip.org

For most campus research, using a single anti-Bush or anti-Republican statement makes the researchers less smart and less intelligent...

150. jmonroe6400 - June 24, 2010 at 08:36 am

I basically agree with the article, mostly for the reason that in many (not all) fields, the publication game has helped to trivialize scholarship. Even the good articles are often mostly technical accomplishments of a highly specialized nature... which would be wonderful in a journal of pure mathematics, for example, but godawful in a journal of literature studies. In the latter case the game is exposed for all to see: it's entirely about credentialing, with no real point [of accumulating useful knowledge] beyond that. I doubt the quality of many departments in the humanities would suffer if tenure decisions were based on a collective opinion about a person's ability, without resort to the rather lame expedient of referencing the amount of time wasted trying to get articles of marginal interest into the journals of their field.

On the other hand, it is hard to argue with human nature: journal articles show the sweat. The present system may be the best of which we are capable.

151. trendisnotdestiny - June 24, 2010 at 09:37 am

rambo

Any movement away from free-market capitalism gets labelled leftist, as if a balance between individual and collective interests couldn't be American... This is such complete drivel.

Listing left-wing and conservative think tanks in some order totally misses the direction our whole political system has taken to the right in the last 30 years... Moving back toward the center does not mean that we are becoming a socialist or communist country, as the tea-baggers claim; rather, it acknowledges that we as a country have not prepared our population for the inevitable changes that are going to occur in the next two decades as capital de-leverages and de-couples its interests from labor (no more Social Security, no more entitlements, you are on your own).

However, to keep people ignorant of these changes, to blame them for their lack of information, and to extort profitable asymmetries that result from this transition only creates anger and more violence... I will say this again on this website: the bloated welfare state is only bloated because our industry and government have been in partnership to extract whatever resources they could from the middle class as a structural EXTERNALITY... To miss this is to participate in the divide-and-conquer notions between left and right, Keynes-Friedman and communist-capitalist, that deteriorate conversation among ordinary people who are uninformed of their interests... To miss this is to get caught up in the distraction of rich people getting richer while we in academia sat back and watched!

152. erschuur - June 24, 2010 at 01:50 pm

Hallelujah! The tsunami of (to be polite) poor-quality research articles is, in my opinion, impeding knowledge advancement and innovation. It's an artifact of academe, where these papers are seen as the work product. I spend a lot of time and money sifting through (often expensive) journal articles only to come up empty-handed in my search for a particular type of information.

153. rhancuff - June 24, 2010 at 02:05 pm

Rambo, that post was funny. Especially the bit about Heritage. You may now return to your Free Republic echo chamber.

154. ellenhunt - June 24, 2010 at 03:33 pm

Mark - You dared to go Emily-Post-al on me, your virtual chin aquiver with indignation at my overly accurate description of the impact factor. In doing so, you hid your own sophomoric laziness in pursuing the research I asked you to do. My criticism is most apropos, Mark, since if you do not know what you are using as the basis for your proposal, then you have failed the test of whether or not you are a scholar at all.

I award you an F- on your response to my post #8.
* The F is for not addressing the core matter of what an impact factor actually is.
* The F- is for papering over your slipshod laziness with that tool of fakes everywhere, high dudgeon.

I began this discussion with the respect accorded a person one does not know. I leave this discussion holding you in quite low regard after you proved yourself unworthy of respect.

155. mottgreene - June 24, 2010 at 05:18 pm

Citations are like purchases, and are a form of market pricing. Value is established by the separation of the desired from the scorned. To imagine a pre-separation of good and bad is like the modernist dream of central planning - it imagines that intellectual markets can function without the price data conferred, in this case, by citation. But intellectual (reputation, funding) markets require this price data to assign value. So editors (mea culpa) do what they can via refereeing, but the market data (read, accept/reject, cite/don't cite) is actually the dynamic process of consensus formation. The problem the authors raise is indeed serious. Certainly oversupply of writers (too many PhDs) is a problem, but it does sort out in its own messy way: people who write bad papers publish fewer of them, are funded less, and fall out of the citation array sooner. There are still too many people turning the cranks of too many sausage machines, but the good research does not generally get buried. Empirical examples of good research buried by bad are pretty hard to come by. Glad to see examples if anyone has any...

156. frankgado - June 25, 2010 at 01:03 am

Good article, Mark et al. No, I don't agree with every tittle, but you do recognize the problems.

Might we safely conclude that the worm in the apple is lust for power, money, and prestige among the faculty? I became a scholar and teacher in order to have the time and means to indulge my intellectual curiosities, not for the sake of silly titles or salaries marginally higher than those received by the goof-offs among my colleagues.

Anyone who doesn't smell the corruption in academe before the tenure process kicks in is too stupid to matter.

157. chaussures1 - June 25, 2010 at 09:40 am

<Comment removed by moderator>

158. markbauerlein - June 25, 2010 at 10:59 am

No, not "Emily Post-al," to use your coinage, ellenhunt, just that whenever people start throwing out terms such as "brain-damaged" and "blathering idiocy" I stop reading. Hardly an approach consistent "with the respect accorded a person one does not know." I receive a lot of emails from young people over my last book and sometimes they contain a fair share of similar denunciations, not to mention the occasional four-letter and 12-letter words. I always respond, and the first thing I say is that they should realize that when they speak this way nobody is going to pay attention to their otherwise salient points.

159. jsfarns - June 25, 2010 at 12:16 pm

One fly in the ointment: if we eliminate the low-end journals publishing lower-quality research, we will ultimately put more pressure on the prestige journals. I recently placed a paper with the top journal in my field, and then was informed that because of their backlog it will take 18 months before publication is possible. The crazy part is that the paper will get a book citation before it's even published.

I bumped into a colleague yesterday, and when I asked how her summer was going she beamed that she'd already cranked out two papers in June and was hoping to get another one out before the end of the month. YIKES! And I suspect that half the people in my department will engage in this sort of binge writing this summer.

Excuse the alliteration, but professors should have better prompts for putting pen to paper than promotion. Passion, perhaps?

160. whatmineeyeshaveseen - June 25, 2010 at 03:12 pm

The comments section to this article proves yet again how difficult it is to have a civil conversation about anything online. Some of the criticisms raised were completely valid, but the contempt and vitriol with which they were expressed are completely inexcusable. It has long been known that a fancy degree certainly doesn't give a person character, but it's a shame when that truth is exposed so blatantly for all to see.

The authors did not set out to write a scientific article; they were only writing an op-ed for a newspaper for lay readership about a serious problem that is often discussed in academic circles and to which few, if any, have proposed serious solutions. It is always a brave thing to propose any kind of solution to a problem, because critics will immediately savage it. For that alone, the authors won my respect.

We can have a debate about whether or not the glut of publications really does constitute a problem. It might come down to something as simple as a question of noise. Some people function highly when surrounded by noise and are adept at picking out of the chaos the information they need. Others find the noise deafening and inimical to serious thought and labor.

We are unquestionably living in a bigger and bigger world where more and more people have something (potentially important) to say. However, I would argue that there is a certain limit to what can be heard, even for those who are very adept at working in chaotic, stressful environments and sorting quickly through vast amounts of information. But in order for knowledge to build on itself efficiently, it has to proceed in something like a conversation. And the more people join a conversation, the less it becomes a conversation, and the more it becomes an exercise of responding to x here, and y there, and z over there. The conversation becomes fragmented and disjointed. One loses the sense of who one is talking to, about what, and why. Without the sense of relation, one begins to feel as if one is floating in intellectual space, and the only thing that remains tangible is one's own survival as an academic, which only accelerates the non-relational aspects of publicizing one's research. Many are familiar with Dunbar's number - the optimal number, proposed by Robin Dunbar, of people with whom any given person can maintain a stable social relationship is around 150. Indeed, recent research in human evolution seems to show that there is something like an optimal size for a human community, and that once we expand vastly beyond that, all kinds of inefficiencies spring up. And large, inefficient systems are certainly more vulnerable to collapse. So when the authors say the status quo is unsustainable, I am inclined to believe them. This of course has all sorts of unsavory political implications (Whose voices will be heard? I hope mine!), but I think it's an important reality that cannot be ignored.

In any case, whether there comes to be a consensus on this issue or not, people will find ways to deal with the overabundance of information, some of it of higher quality or utility than other parts. I will be watching to see how the process unfolds.

161. ellenhunt - June 25, 2010 at 03:24 pm

Mark - Your preciousness would be obnoxious in an 8-year-old. A man with the wits god gave celery would draw a distinction between my (very accurate) assessment of the algorithm you propose and aspersions cast on himself.

If you wish to continue to be so sophomoric in your use of complete unknowns like "impact factor" you will find yourself the completely deserving recipient of dismissal, if not contempt.

162. markbauerlein - June 25, 2010 at 07:40 pm

whatmineeyeshaveseen makes the point for me here, ellenhunt. As I've said before, people should speak in blog comments as if they were sitting across the table from one another. It is all too easy to toss insults ("wits god gave celery") from the safe harbor of virtual space. If you ever encounter me at a meeting or a campus visit, I expect you to repeat the "contempt" to me in person.

163. voltairein08 - June 25, 2010 at 08:21 pm

The authors' argument is incoherent. On the one hand, they argue that "anything more than a few years old is obsolete. Older literature isn't properly appreciated, or is needlessly rehashed in a newer, publishable version." On the other hand, they promote a science-inspired system of pure quantification of citations, premised on the assumption that the academic community's judgment is authoritative.

Which is it?

164. cosmopolite - June 25, 2010 at 09:54 pm


My university publishes every year a booklet listing all publications by staff in the preceding year. The more line entries, the greater the bragging rights. I maintain that this booklet is a pristine example of how publish-or-perish is a lifelong thing nowadays.

The problem is that administrators and colleagues deem those who do not publish as mentally inferior, and as deadwood who are not earning their salaries.

No progress is possible unless there is a sea change in hiring and promotion practices.

Web of Science citation counts should be included in every CV. I predict that the humanities will resist this. In my discipline, citation counts are a good measure of worth.

I welcome more thought about journal impact factors.

The publication avalanche is encouraged by a high rate of creation of new journals by commercial publishers, and higher expectations by administrators. If I do not publish 4 articles every 6 years, I am in the poo with my nation's Ministry of Education. If I do not publish 5 articles every 5 years, I lose my "academically qualified" status. If I and enough of my colleagues become non-AQ, my unit loses its USA accreditation.

The Ministry of Education in my country explicitly discourages giving importance to citation counts.

Rewarding citations and impact factors will be fiercely resisted by those who have done well under the present counting regime.


165. richardtaborgreene - June 27, 2010 at 07:20 am

A PIONEERING PIECE OF DISTILLED STUPIDITY

1) science of the artificial--herbert simon--let's keep growth linear (wow what an insight)

2) impact (whose, when, where, how measured, by what stakeholder, for how long---what a colossally communist central-planning mindset--these guys are spies for a dead Soviet Union)

3) citation index---let's distill popularity into a metric then it will appear to be more than mere popularity---stupid stupid stupid

4) None of the authors are scientists---that is APPARENT

5) are academics so foolish that they would not proliferate something else if current avenues of proliferation are blocked??? are wall street types the only ones smart enough to work around new rules?????? Are the authors of this tripe the ONLY smart academics alive???????

6) INSULTING---who told these guys they could publish such crap and come away without their repute for dinner conversation in tatters?????

166. rambo - June 27, 2010 at 08:20 am

trendisnotdestiny and rhancuff, thank you very much. How about a study on why and how liberal Democrats can be rich and wealthy when they are anti-capitalism and anti-business, and still claim to be righteous about it? Or maybe a huge name directory of left-wing liberals who have or had children in private schools while voting or arguing against school vouchers, school choice, charter schools, etc.....


167. trendisnotdestiny - June 27, 2010 at 10:12 am

rambo,

Yes, hypocrisy exists: on the left/on the right with me, with you.
Yes, the left exhibits a lot of faux sanctimony about social justice as it replicates the perverse self-interested behaviors of the dominant culture. Bravo. You have made the obvious just a bit more visible! The field thanks you so much for your hard work in capturing something so simplistic that a caveman could do it (sorry, Geico):

RAMBO POST # 149
"good quality research is at www.heritage.org, www.aei.org, www.cato.org, www.hoover.org.

For the left-wing liberals, at www.brookings.org, www.centerforamericanprogress.org, www.csis.org, www.ceip.org

for most campus research, using a single anti-Bush or anti-Republican statement make the researchers less smart and less intelligent..."

RESPONSE

Rambo, you are better than this... You list 8 of the most powerful US think tanks, you give them categories (left and right),
and you make this assertion about most campus research... Not to mention that you characterize those who would criticize you as anti-capitalism or anti-business, as if criticality were a membership at your country club.

This is called reductionism. The irony of this article's title and this response is like bringing chloroform to a yoga sit-in.

168. wbrought - June 27, 2010 at 05:00 pm

As a long-time reviewer of scientific manuscripts I have some insight into this topic. Most of what is submitted to well-known journals (at least for my review) is poor science at best. I submit that many (most) reviewers do not spend adequate time on their task. I usually pull relevant current research on the manuscript's topic, as well as the articles the authors cite as most pertinent, before offering an opinion.

At most peer-reviewed journals 2-3 reviewers participate in the reviewing process. Their opinions are routinely shared with co-reviewers by the journal. I often wonder what manuscript my colleagues read and how much they actually know about the topic.

I might also add that the likelihood of publication increases as the prominence of the senior contributing author increases - regardless of the science involved. On multilingual teams, senior authors are often only translators and editors (and a powerful force toward publication).

I find "data-mining" research most offensive. In many cases the "authors" had NO interaction with the population they have analyzed. In some cases they are not even from the same country and have no experience with the system from which the data comes.

The system must change. A better (cheaper and more accessible) method must be established.

Addendum: Closed journals that offer me access to a manuscript critical to a sick patient's well-being for only $53 must go. What about an iTunes-like system offering .pdf copies at a reasonable price (99 cents sounds good)?

169. osholes - June 28, 2010 at 09:19 am

Regarding impact factors, see Nature, 17 June 2010. It includes this quote: "You should never use the journal impact factor to evaluate research performance for an article or for an individual - that is a mortal sin."

170. vangelv - June 28, 2010 at 11:47 am

First, we need more publishing, not less. Second, in the era of the internet there is no excuse for not being fully transparent. All of the data and the methodology should be accessible to anyone who wishes to look at them. Third, the AGW debate has shown how the peer review process is broken. Having small cliques control most of the publishing in a particular area is not beneficial and can easily lead to abuse. Better to have a smaller role for reviewers and allow full transparency and feedback to dominate the publishing process.

171. feroze - June 29, 2010 at 12:48 am

This was an interesting article, and it obviously touched a nerve on both sides. I was once an MS student at a university that only had a thesis option because they wanted graduate students to churn out papers and get the university name recognition in the journals. Our professors were adept at the art of playing the game - asking us to cite papers of their friends, submitting papers to conferences where their buddies were on the chair or editorial board, etc.

Ultimately, human beings will always game the system, no matter how you set it up. If you count a person's importance in a field by the number of papers he publishes, then you will get papers. Some of them will not be worth the paper they are printed on. This is similar to the proliferation of patents in high-tech industries, mainly as a form of defense.

Also, some of the other suggestions put forth by the authors have their own problems. As other people have pointed out, it is difficult to judge whether a paper is important, or what its future value is. Usually, time takes care of this.

I find the suggestion given by commenters of having separate research and teaching tracks interesting. It would also be interesting if, as part of citations, the systems also recorded whether the citation was made for a good reason or a bad one. If citations for bad reasons were visible as such, then people would not write papers just to get citations. Over time, the quality of citations would increase.

However, at the risk of repeating myself - no system is perfect. Like the problem of evaluating employees in companies, the task of evaluating publications and scientific research is complex, and not easily solvable.

172. ideasrcheap - June 29, 2010 at 05:29 pm

Balderdash, I say! I've made my career on "useless" science. You can't have everyone discovering things of massive import, or the pressure to fabricate would go through the roof and the whole enterprise would come to a grinding halt. You need to have us serfs out there rooting through the withered tubers in the fields surrounding the castles of the nobles (Nobels?). Who knows, one of us might find the buried treasure or the mold that turns out to be penicillin and saves a million lives. Science is a CHEAP enterprise compared to, say, the military or petrochemical exploration. It creates jobs (even publishing jobs!) and gives us something to be proud of. Are "science blogs" the answer? Hardly! You see, the point isn't just to force people to fill the library shelves and online databases with minutiae that no one cares about, but to subject the work of scientists to PEER REVIEW. The system has problems to be sure, but too many scientific publications ain't one.

173. davidscottlewis - June 30, 2010 at 12:29 am

Although we may all question at least a point or two in this piece, I generally agree.

I am on the side of being a consumer, not a producer, of research information. From the consumer's perspective, I find that I generally get the best results from the Web of Science (Thomson Reuters), then Scopus, then Inspec, then Google Scholar. Yes, in this specific order.

I do NOT believe it is pure coincidence that the Web of Science covers fewer subject-specific sources than the others, Scopus fewer than Inspec, with Google Scholar covering all sorts of junk. (Some of what I'm branding as "junk" in Google Scholar isn't bad for background info, but it's hardly scholarly.)

So I have to say that as a consumer of a lot of research information across several disciplines (ranging from renewable energy to computer security), the Web of Science is the best source hands down. FYI, I also use Scopus, Inspec and Google Scholar, and run e-mail SDIs (WoK, Inspec, Google Scholar) or RSS feeds (Scopus) on all of them.

First place: Web of Science
Second place: Scopus
Third place: Inspec
Fourth place: Google Scholar

My opinion based upon my own needs as a consumer of research papers.

However, for the money, Google Scholar is tough to beat: Free isn't a bad thing. And Inspec Direct is RELATIVELY cheap for individuals this year: Unlimited access to the entire database, unlimited SDIs. GREAT deal for USD 320.

175. jdsblog - June 30, 2010 at 02:57 am

True! Nowadays there are a lot of free search engines available, but research still should not be approached too shallowly.

176. algirdas - July 10, 2010 at 11:52 am

Point of true irony: I do not have the time to analyze, ignore the trivia, note the significant, and distill the commentary into a manageable (4-5 page?) article for further digestion. We seem to agree that there are many pieces of research that should not even be considered for publication, let alone enter the writing stage. Thus the first place for limiting the deluge is at the professor (senior author) stage. We can all name examples of previous-generation professors and mentors who subscribed to this tenet. The experienced researcher makes this judgment, and the student learns from it for his or her own career. The second place to stop excessive, unworthy work is at the editor's office. The editor should have the experience and the gall to return the manuscript to the author with this judgment, politely stated.

Many of the other points made by Bauerlein et al. should also be championed and promoted to those making the various decisions: stop paper-counting by grant and tenure-and-promotion committees; publish the cream of the results concisely and place excessive or less important results in a supplementary database; do not fragment results; investigate in a fashion that establishes a solid scholarly contribution which makes one proud to be a scientist (we all know what that means); and ask applicants for grants and positions to cite only their five top papers.

A commentary on the impact factor can be found in a piece by the Nobelist R. Ernst, "The Follies of Citation Indices and Academic Ranking Lists: A Brief Commentary to 'Bibliometrics as Weapons of Mass Citation,'" Chimia, 2010, 64, 90, doi:10.2533/chimia.2010.90.

177. kulkarniankur - July 11, 2010 at 01:38 am

The root cause of overpublication is the financial autonomy of universities. It is this that makes universities dependent on research grants, which puts pressure on untenured faculty to win grants. And because grant proposals are so numerous and so competitive, enumerative criteria such as the number of publications become decisive.

The fact is that "quality" (of anything, including research) is an intangible property and cannot be mapped onto anything enumerative - neither number of papers nor number of citations.

The quality of a paper is hard (maybe impossible) to assess objectively - it has to be a subjective matter. The best we can hope for, if we want better-quality research, is that there be no incentive for a researcher to voluntarily dilute what he perceives as the quality of his papers. For this, we need to unlink researcher rewards from measured performance and leave researchers with only their internal motivations to produce research. It may be OK to have punitive responses for underperformance, but there should certainly be no reward for showing higher output.

Indeed, for this, universities need to become more frugal in their ways of functioning, and researchers need to become more modest in their lifestyles.

178. blakpete - July 13, 2010 at 07:50 pm

I don't know these "et al." commentators, whether they are learned or otherwise; however, it disturbs me that they should start their paper with the pronoun "everybody." Such use of generalities is unprofessional and arrogant. Why make such an outrageous presumption, that everybody is sufficiently erudite to agree or understand?

Frankly, this pronoun is often used by self-absorbed teens, and it deters me from wanting to read any further.

I'm not young enough to know everything!
