Readers of The Chronicle who took their general-education requirements to heart may recall that Ishmael, Melville’s persistent narrator in Moby-Dick, spends much of his 600-page allotment on an epistemological voyage. Driven by the conviction of infinite mind and by the belief that descriptions of reality matter, he struggles mightily to know the whale through the principal 19th-century avenues of knowledge — empirical, rational, and existential. Ultimately he alone is saved from the wreckage of the Pequod because he resigns himself to the essential opacity of the universe.
At one point on this journey, he visits some museums and galleries to learn about the nature of the whale through its representations in the visual arts. In Chapters 55 and 56, he walks us through “Monstrous Pictures of Whales” and “Less Erroneous Pictures of Whales” in such a way that we emerge no more sure than Ishmael of the reality behind the representations.
In trying to understand the swirling numbers that purport to represent the realities of American higher education, we don’t have to resign ourselves, as did Ishmael, to uncertainty. But it has become increasingly difficult to sort the monstrous from the less erroneous in these matters. On any given day, one can find a half-dozen different statistics concerning the same phenomenon in the news media and the trade press, and on the Web. The recent battles over measures of high-school graduation rates, with claims ranging from 68 percent to 82 percent, illustrate the case. The winner of those battles will be the organization with the most aggressive public-relations operation and mastery of the ganglia of the Internet, not necessarily the organization with the most unassailable data and methodology.
Statistics on higher-education issues get thrown around even more casually. For example, we frequently see such statements as: Forty percent of entering four-year-college students take remedial courses. Twenty-five percent of entering four-year-college students and 50 percent of entering community-college students do not return for a second year. Out of every 100 ninth graders, only 18 will wind up with a bachelor’s or associate degree 10 years later.
None of those assertions can be supported by any national data that have been rigorously reviewed by the U.S. Department of Education’s National Center for Education Statistics. Yet assumptions based on those numbers have become totems of belief: Remediation is rampant, the college dropout rate is appalling, and the whole educational system is a failure. Unfortunately, policy will surely follow the propaganda.
It is counterproductive to make decisions based on assumptions derived from unexamined numbers. Yet that is what we in higher education do when we fail to question statistical assertions, when we fail to triangulate — that is, to find other sources and types of evidence to affirm or contradict those assertions. We have been gulled by a propaganda of numbers that has shaped how we think about our enterprise. It is our responsibility to exercise due diligence in generating, interpreting, and responding to statistical assertions, particularly those from unofficial sources. If we don’t, the propaganda of numbers will turn into tyranny.
The French sociologist Jacques Ellul’s classic Propagandes (1965) points out that there are many kinds of propaganda, some of which are commonplace and unnoticed in democratic societies. I propose that one type of “democratic” propaganda involves numbers influencing decisions and choices in the same way that Harold Lasswell and colleagues (in their 1968 Language of Politics: Studies in Quantitative Semantics) noted of words: The numbers become “verdict and sentence, statute, ordinance and rule” — even when they are lies.
Why does this matter in the world of higher education? The answer comes in two propositions:
First, the world has gone quantitative, and statistics sell as well as, if not better than, anecdotes — provided one wins the race to place one’s preferred numbers in the preferred media. In an age when bloggers command six-figure audiences and the home pages of organizations are visited by millions, what attracts attention is a banner headline with statistics. When enough bloggers and Web sites repeat the numbers, the statistics are assumed to be true (“accurate representations of reality” is a better phrase). When published and broadcast in the news media, the numbers acquire the sanctity of icons. The longer those numbers and statistics go unchallenged, the more accurate they are perceived to be, and the more often organizations mechanically repeat them as part of their policy liturgy. The icons rigidify, excluding other evidence that might suggest alternative policies.
Second, statistics are often presented in ways that most people don’t understand. Context and definitions of the statistical universe — who, exactly, is being counted — are often nowhere to be found. We get percentages or ratios, but we are not sure of what. Inundated with information from the mass media, we don’t remember specific statistics; we retain only “general impressions.” Those general impressions create conditions under which groups will either act or (more likely in the world of higher-education policy) passively support the actions of others. Academics think they are immune to the effects of swirling numbers, but it is precisely because of the range of sources from which academics extract information that they are most vulnerable to messages from unofficial sources. In other words, there isn’t much triangulation going on.
Let’s get the distinction between official and unofficial statistics clear: Official statistics are, first and foremost, the responsibility of federal agencies — which happen to possess and deploy the extensive resources necessary to produce them. In the world of higher education, the principal official source is the National Center for Education Statistics, supported by the Census Bureau, Bureau of Labor Statistics, and National Science Foundation.
Three points about official statistics are critical:
1. They are impartial, but not value-free. I was one of the builders and editors of the databases of NCES’s longitudinal studies, and when I constructed a variable — for example, for college-attendance patterns — my judgments about the sequences and combinations of institutions attended by students unquestionably shaped the information conveyed by that variable. Of course, I had to defend the variable before a review panel.
2. All official databases pass through a rigorous review process (NCES calls it “adjudication”), governed by publicly accessible statistical standards, before the data are released.
3. When federal agencies like NCES release official databases, nothing is hidden. Every variable presented in code books includes a description of how that variable was constructed. “Public release” versions of the databases are available online, and restricted versions are available by license to researchers through their institutions. Every official analysis using NCES data is accompanied by a technical appendix that describes the nature of the data set, its construction, and its limitations.
Unofficial statistics are those generated by organizations, associations, think tanks, and interest groups outside the statistical-review processes of federal agencies. In presentations of unofficial statistics, it is extraordinarily rare to find rigorous reviews; public standards; disclosure of limitations; and full, transparent, accessible maps of variables.
Once issued, of course, official statistics can be used by anyone to promote distinct views and interpretations of social, economic, and educational realities. In fact, one of the principal sources of unofficial statistics lies in the torture of official statistics. For example, if I want to tell a bad story about the proportion of entering college students who earn bachelor’s degrees within six years — that is, show that the proportion is low — and I use NCES’s “Beginning Postsecondary Students Longitudinal Study” to do so, I will include in my denominator beginning students of all ages (even though I know that older beginning students do not finish degrees at anywhere near the same rate as traditional-age students) and students who never set foot in a bachelor’s-degree-granting institution (for example, students who started and finished in cosmetology schools and then said goodbye to postsecondary schooling). The data source is official. But my unofficial presentation and analysis would do little more than stir up passion.
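To make the arithmetic of that kind of torture concrete, here is a minimal sketch in Python. Every count in it is hypothetical, invented purely to illustrate how padding the denominator depresses the rate; none of these figures come from the “Beginning Postsecondary Students Longitudinal Study” itself.

```python
# Hypothetical cohort illustrating how denominator padding lowers an
# apparent six-year bachelor's-degree completion rate.
# All counts below are invented for illustration only.

traditional_age_starters = 1000   # began at bachelor's-granting institutions
older_starters = 300              # beginning students aged 24 or older
non_ba_starters = 200             # never set foot in a bachelor's-granting school

ba_traditional = 600              # hypothetical completions within six years
ba_older = 60

# Focused rate: traditional-age students at bachelor's-granting institutions
focused_rate = ba_traditional / traditional_age_starters

# "Tortured" rate: pad the denominator with everyone
padded_denominator = traditional_age_starters + older_starters + non_ba_starters
tortured_rate = (ba_traditional + ba_older) / padded_denominator

print(f"Focused rate:  {focused_rate:.0%}")   # 60%
print(f"Tortured rate: {tortured_rate:.0%}")  # 44%
```

The padded figure is not false arithmetic; it answers a different question than the one the headline implies, which is the essence of the manipulation.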
Two related cases illustrate the triumph of unofficial statistics and how they can misrepresent reality. Both examples concern college-graduation rates. They come from two very different time periods and corresponding media environments — pre-Internet-era and Internet-age — which condition their provenance and their perceived authority.
Case No. 1: College-graduation rates and the birth of the Student Right-to-Know and Campus Security Act (1990). Most readers of The Chronicle know that any institution of higher education wishing to qualify for federal financial aid for its students must submit its graduation rates to the Education Department, which then compiles those statistics and makes them available to the public. But that requirement is relatively recent. Before 1990 the reporting of institutional graduation rates was both sporadic and unsystematic, with one notable exception: Every year the National Collegiate Athletic Association reported its version of graduation rates for both student-athletes and other students at the institutions attended by the athletes. The NCAA data are unofficial statistics, but by their claim on prime space in the print media, they became the spur of public policy, eventually leading to passage of the Student Right-to-Know Act.
To track the influence of pre-Internet-era unofficial statistics, let’s examine how they were reported in three major newspapers and how the one-sidedness of the statistics left public policy with limited choices.
Using Internet search-and-retrieve tools, I examined all of the articles in three newspapers dealing with college-graduation rates from 1987 through 1992, the period bracketing the introduction, debate, and passage of what was subsequently called the Student Right-to-Know Act. In USA Today, 80 percent of those articles were in the sports section, as were 76 percent in the Los Angeles Times and 65 percent in The Washington Post. In all three newspapers, the adjectives used most frequently to describe graduation rates were “poor,” “low,” and “dismal.” Other descriptors included “scandalous,” “atrocious,” “miserable,” “pathetic,” “sagging,” “pitiful,” and, for contrast, “pretty high.”
What specific percentages for graduation rates were cited in those articles to support those dim judgments? Both USA Today and the Los Angeles Times published percentages that ran the gamut — for example, 22, 26.6, 42, 47.9, 89, and 100 percent (the last for a women’s basketball team). The Washington Post, on the other hand, reported ranges such as “less than 20 percent” or “35-40 percent.” When statistics like those are spread out over six years of reporting, members of the public don’t remember specific numbers, but they do take away a general impression. At that time, the general impression, reinforced by anecdotes in the writing of the former Olympian Harry Edwards and the college-sports critic Murray Sperber, was reflected in the introduction in Congress of what was originally called “The Student-Athlete Right-to-Know Act,” in April 1989. The principal sponsors were two former all-star college and professional basketball players, Sen. Bill Bradley and Rep. Tom McMillen.
It makes perverted sense: The major print media treated college-graduation rates as a matter for the sports pages; NCES data showed that 1.2 percent of students who attended four-year colleges were on varsity teams in major sports; and reporters’ and readers’ perceptions of college graduation rates (“dismal,” “miserable,” etc.) were based on graduation rates of that group, particularly if the varsity teams in question were from high-profile NCAA Division I universities. The public perception was then canonized by a legislative process led by authoritative figures, which ultimately required the Education Department to produce official data under a definition called the “Congressional Methodology.” An awfully small tail came to wag a very big dog.
And we continue to live with the consequences: a graduation-rate formula that, for traditional-age students (who still make up 80 percent of entering students), excludes from its denominator (a) the 18 percent who happen to enter in a term other than the fall term; and (b) a somewhat overlapping 18 percent who enter part time. Furthermore, the numerator counts only those students who receive their degrees from the same institutions in which they began, thus excluding the 15 percent of bachelor’s-degree recipients who transfer from community colleges, and an additional 20 percent who start in a different four-year institution.
Add those populations up and one finds that roughly half of traditional-age undergraduates are excluded from the Education Department’s calculation of graduation rates. How do we know the size of the omitted population? From the transcript records included in NCES’s longitudinal studies. Transcripts may have some problems, but they don’t lie. As for beginning students who are 24 or older, the most recent NCES “Beginning Postsecondary Students Longitudinal Study” shows 51 percent starting out exclusively part time, and they aren’t counted, either.
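For readers who want to see how those percentages combine, here is a back-of-the-envelope sketch. The component percentages are the ones quoted above; the two overlap figures are my own assumptions, inserted only so the arithmetic illustrates how the pieces can land near the “roughly half” in the text.

```python
# Rough tally of traditional-age students invisible to the federal
# graduation-rate formula. Component percentages are from the text;
# both overlap values are assumptions, for illustration only.

# Excluded from the denominator (never counted as entrants):
non_fall = 0.18
part_time = 0.18
overlap_entry = 0.10          # assumed overlap: both non-fall and part time
excluded_from_denominator = non_fall + part_time - overlap_entry   # ~26%

# Lost from the numerator (their degrees are credited to no institution):
cc_transfer = 0.15            # bachelor's recipients who began in community colleges
four_year_transfer = 0.20     # began at a different four-year institution
lost_from_numerator = cc_transfer + four_year_transfer             # ~35%

# The two groups overlap with each other as well; assume a modest joint
# overlap so the combined exclusion can be read as one share.
overlap_joint = 0.11          # assumption
total_excluded = excluded_from_denominator + lost_from_numerator - overlap_joint
print(f"Combined exclusion: about {total_excluded:.0%}")           # ~50%
```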
If our “official” data — formulated in response to a Congressional mandate — exclude half of the entering students in higher education, those data don’t mean much. But they gain a public stage, particularly through U.S. News & World Report’s annual college-ranking issue; then the propagandistic multiplier effect sets in, through repetitions of those rankings in both traditional news media and now on the Internet.
The battle over proposals to track all postsecondary students — no matter how many institutions or in how many states they attend — that has generated both op-ed and legislative heat over the past two years reflects a realization that the propaganda of numbers in the 1987-92 period sent policy down the wrong road. The recent report of Education Secretary Margaret Spellings’s Commission on the Future of Higher Education recommended corrective action, but it will be a long time before we crawl our way back to the fork in the road and capture numbers that are better reflections of reality.
Case No. 2: Losses in the education pipeline. This is a trickier story. It involves more directly the role of the Internet in the propaganda of numbers, and a case in which one unofficial organization repackaged data from a second unofficial organization (data that, in turn, came from five other sources), creating a monstrous — and false — picture of American education.
In April 2004, the National Center for Public Policy and Higher Education, a respected independent consulting group, issued a policy alert claiming that “out of [every] 100 ninth graders,” 67 graduate on time from high school, 38 immediately enter postsecondary education, 26 persist to their second year of college, and only 18 wind up with a bachelor’s or associate degree 10 years later. Strangely derived from state-level data originally prepared by the National Center for Higher Education Management Systems (an organization for which I have great respect on other counts), the numbers made a stunning and dismal statement about our educational enterprise.
The distribution of this chain of numbers was swift and widespread. And the more those “leak in the education pipeline” numbers were repeated, both in the news media and on the Web sites of esteemed organizations, the more rigid and unerasable the claim became. Most telling, the numbers were picked up by someone on the White House staff and placed in both a presidential speech and an accompanying news release in May 2004, thus rendering them true in the public eye. The figures wound up in Congressional testimony within a month, were sanctified in a national report on accountability in higher education sponsored by the State Higher Education Executive Officers (chaired by two former governors), and linger to this day on the “fact sheets” and Web sites of yet other respected organizations, such as the Alliance for Excellent Education.
No one remembers the specific sequence of numbers, only the general message of system failure. Policy makers at every level assumed that if the president said it, and other distinguished bodies repeated it, it must be so. By the gravitas of the provenance chain, an unofficial statistic became quasi-official, and the “pipeline leaks” message has become part of the liturgy of condemnation.
But these unofficial statistics are flat-out hokum. There has never been a national longitudinal study of ninth graders, as the presentation of the numbers implied. But if there had been such a study, where would you go to confirm its bottom-line claim, that 18 percent of those ninth graders earn a bachelor’s or associate degree 10 years later — that is, by roughly age 24 or 25? There are two official sources against which to check the numbers, and both of them passed through adjudication governed by public standards: the Census Bureau’s “Current Population Survey” and NCES’s grade-cohort longitudinal studies, the most recent of which followed eighth graders from spring 1988 through December 2000.
For a few years now, the Census Bureau has been showing that 28 to 29 percent of the 25-29 age group holds at least a bachelor’s degree. While its population survey does not report associate-degree attainment separately, we can estimate it (using the annual degree-completion data reported to NCES) at another 5 to 6 percent. So, according to Census Bureau numbers, roughly 33 percent to 35 percent of 25-to-29-year-olds have earned associate or bachelor’s degrees. As for the NCES eighth graders, the center’s longitudinal study — which grounds its information in students’ high-school and college transcripts — shows that the proportion earning bachelor’s or associate degrees by age 26 or 27 was 35 percent.
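The cross-check is simple enough to write down. A sketch of the arithmetic, using only the ranges just quoted:

```python
# Census-based estimate of degree attainment among 25-to-29-year-olds,
# combining the bachelor's share reported by the Census Bureau with the
# associate-degree share estimated from NCES degree-completion data.

bachelors_share = (0.28, 0.29)   # at least a bachelor's degree
associate_share = (0.05, 0.06)   # estimated associate-degree holders

low = bachelors_share[0] + associate_share[0]
high = bachelors_share[1] + associate_share[1]
print(f"Degree holders, ages 25-29: {low:.0%} to {high:.0%}")  # 33% to 35%
```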
When you have two official sets of data producing almost the same figure from completely different sources, something has to be right. Against that percentage, the unofficial assertion of an 18-percent national degree-completion rate looks highly suspicious. If the difference were only two or three percentage points, we would not need to be concerned. But 18 percent versus 33-35 percent? At 35 percent, we can say that while we’re not doing as well as we could, we are within range of hope and can begin to look carefully along the roads that students follow through adolescence and young adulthood to find the best points for changing the signals, traffic signs, and junctions. At 18 percent, and with passive endorsement from the major nodes of power in our society and reinforcement wherever one Googles, we throw up our policy hands in lamentations, point our policy fingers in accusations, and frenetically set out in 40 different directions, of which maybe four will benefit some of our grandchildren.
Looking at such a dismal prospect, we have to exercise due diligence: drill down and ask how the National Center for Public Policy and Higher Education came up with a series of numbers presented as if they were the result of a longitudinal study, rout those numbers once and for all, and engage in the serious business of improving students’ progress from high school through higher education.
The architects of the pipeline sequence at the National Center for Higher Education Management Systems described it to me as “a combination of metrics and sources,” with changing reference dates. Unfortunately, that approach requires championship statistical gymnastics. There is no way a “combination of metrics and sources” with different temporal reference points would pass NCES adjudication standards — or the standards of any other federal agency. Some of the NCHEMS data, which sat quietly on a subsidiary URL, could have been legitimately used by state agencies. But when another organization aggregates those data and carries them onto the national stage, the stakes become higher, and more accountability is required.
First, let’s examine the differences between what the public-policy center’s (unofficial) numbers say and what NCES’s transcript-based longitudinal study (official) says.
The education “pipeline” sequence ends with students’ earning bachelor’s degrees in 2002, six years after they graduated from high school. If we work backward from the center’s longitudinal claim, we can translate its sequence to college entrance in 1996 and ninth-grade status in 1992. The original NCHEMS calculation of high-school graduation was for the 1998-2002 period. But you can’t have the same people graduating from high school and college in the same year, so what the policy center’s presentation does, in effect, is to project a 2002 high-school-graduation rate backward in time. That is what I mean by “statistical gymnastics.” The purveyors of this sequence assume you won’t notice that their version starts in 1992, because they want to discredit the official NCES longitudinal study, the “NELS:88/2000,” as old and irrelevant. Well, the students in that study were ninth graders in 1989, and their final educational status was marked in December 2000. Does anyone living on this planet truly believe that the world of U.S. education collapsed between 1989 and 1992? That the educational attainment of traditional-age students who were 26 or 27 in December 2000 was radically different from that of students aged 24 or 25 in the spring of 2002? In the propaganda game in democratic societies, you cannot fool all of the people even some of the time.
Maybe it was for this reason that, in the fall of 2005, roughly 18 months after the National Center for Public Policy and Higher Education issued its policy alert and its statistics gained dominance in the news media and among policy makers, the National Center for Higher Education Management Systems (from which the policy center drew its state-level data) put up a “frequently asked questions” notice on its Web site that basically said: “Whoops! These analyses were not meant to imply a longitudinal history of ninth graders.” But no matter how contorted the accompanying explanations, it was too late to change public perception of the statistical message. The deed had been done. In an instantaneous multimedia environment, with the World Wide Web at its core, once the initial statistical statement was duplicated, it was sealed in the propaganda chain and became impossible to erase. A FAQ buried by links on any Web site will not change the message.
The primary reference point for refuting the national “pipeline leak” is the official “NELS:88/2000” longitudinal study, based on a national stratified sample of eighth graders in 1988 representing 2.9 million students whose subsequent educational history is documented on high-school and college transcripts. We are looking at the same students for 12 years, and what we find is that out of 100 eighth graders, 78 graduated from high school on time, 53 entered postsecondary education (whether in summer, fall, or spring terms), 48 persisted from the first to the second year (whether in the same institution or another one), and 35 wound up with associate or bachelor’s degrees.
In the unofficial account, on the other hand, every calculation behind the putatively longitudinal statement ends in 2002, and no two students are the same. We have high-school graduates in 2002, college entrants in fall 2002, and first-to-second-year retention in 2001-2 (but only at the same institution). It is obvious that the same people cannot have reached all those milestones in the same year, so the opening gambit of the policy center’s 2004 policy alert, “out of [every] 100 ninth graders,” is delusional. In the category of degree completion, the “pipeline leak” calculations first change the base-year reference dates for beginning postsecondary students to 1999 (for “associate-degree seeking”) and 1996 (for “bachelor’s-degree seeking”), then combine them by attainment in 2002, but only if students earned their degrees in the same institutions in which they had begun. The results then weight each college degree by the percentage of first-time, full-time students in two-year colleges (associate) and four-year institutions (bachelor’s). In the most grievous example of what happens, that confabulation takes all the community-college transfer students, labels them “associate-degree seeking,” and then doesn’t count those who earned bachelor’s degrees at all.
If the reader has some difficulty following this gibberish, the reader is forgiven. In words commonly used in NCES adjudication proceedings, it “doesn’t pass the laugh test.”
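For those who want the impossibility in black and white, one can simply line up the reference year behind each metric and ask whether a single cohort could have produced them all. A sketch (the year assignments follow the description in the preceding paragraphs):

```python
# Reference years behind each "pipeline" metric, as described above.
# A genuine longitudinal study would track one cohort across all of
# these milestones; here each figure comes from a different one.

pipeline_metrics = {
    "high-school graduation": 2002,          # class of 2002
    "college entry": 2002,                   # fall 2002 entrants
    "first-to-second-year retention": 2001,  # 2001-2, same institution only
    "associate-degree seekers began": 1999,
    "bachelor's-degree seekers began": 1996,
    "degrees attained": 2002,
}

# No one graduates from high school, enters college, and persists to a
# second year all in the same calendar year -- let alone having also
# begun college in 1996 or 1999.
years = set(pipeline_metrics.values())
print(f"Distinct cohort reference years: {sorted(years)}")
print("Single longitudinal cohort?", "yes" if len(years) == 1 else "no")
```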
It took a year to move any contrary educational-pipeline data from NCES into the communication chain, and even then the data got very little play (a story in The Washington Post in April 2005 and one in The Atlantic Monthly in November 2005). The unofficial statistical indictment had become so ensconced in the public’s perception that it could not be dislodged. And once data appear on an organization’s Web site, particularly with a multicolor bar chart, that group generally will not admit that it was wrong and take the numbers back. At the same time, federal agencies, despite the high quality and transparency of their data, generally do not engage in public-relations campaigns to counter such misrepresentations. It’s remarkable, but you don’t see federal agencies out there defending their own work.
Let this excursion into the propaganda of numbers be both a defense and a challenge to you, Chronicle readers. You are participants in a dynamic landscape of persuasion in a democratic society. Look carefully across that landscape and you will see parties asserting influence through the selection and/or manipulation of symbols — in the cases we have reviewed, statistical symbols. Those purveyors of unofficial data create a limited field of vision, from which follow policies with unintended, unpleasing, or irrelevant consequences. In the end, the excessive negativity that colors most presentations of unofficial data alienates the public from the higher-education enterprise.
So we are back to due diligence — and to Ishmael’s visit to the gallery of monstrous and less-erroneous pictures. The point, then and now, is that descriptions of reality matter. Whether the statistics presented to you about higher education are official, derived from official data, or unofficial, make sure you know how they were produced, check them against other numbers whenever possible, and challenge whenever justifiable. Make that a matter of breathing in and breathing out.
Clifford Adelman is a senior associate at the Institute for Higher Education Policy. He recently left the U.S. Department of Education after 27 years as a senior research analyst. He is the author, most recently, of the department’s “The Toolbox Revisited: Paths to Degree Completion From High School Through College” (2006).
The Chronicle Review, Volume 53, Issue 8, Page B6. http://chronicle.com