The Economic Future of Liberal Arts Colleges
By: Alison Byerly, Professor at Middlebury College, Visiting Scholar at MIT; and Incoming President of Lafayette College (July 2013)
Last week’s AAC&U session on “The Economic Future of Liberal Arts Colleges” has already been described in Inside Higher Ed (IHE) as a bombshell that stunned the sizable audience into “uncomfortable silence.” The centerpiece of the session was a presentation by Charles Blaich and Kathy Wise of a comparison of data about college expenses from the Delta Cost Project with data about learning outcomes from the Wabash National Study. They posed two questions: Are some institutions more effective than others at offering a high-quality academic experience with fewer resources? And, more generally, how does the benefit of a high-quality environment balance against its cost? The answer was summed up by IHE in the title of its article on the session: “Not Getting What They Pay For.”
The smoking gun in this argument was a scatter plot mapping the educational cost per student of forty-five unidentified schools against their scores on the National Survey of Student Engagement (NSSE). NSSE surveys students about their own experience of a variety of high-impact educational practices: frequent interaction with faculty, a high level of expectation and challenge, interactional diversity, and opportunities for reflective thinking. The presenters suggested that there is a correlation between high price and high-impact practices, but that it is “extraordinarily weak.” The more expensive schools score higher, but not as much higher as one might expect given the cost differential; they achieve relatively little gain for their increased expenditures. Charles Blaich, in presenting the data, noted that each two-percentage-point increase came at the cost of an additional five million dollars. Within one selected band of schools, very similar NSSE scores were seen at two institutions with very different per-student costs: $9,225 at one institution and $53,521 at the other. The less expensive schools, he concluded, offer “more bang for the buck.”
The audience raised a number of questions about the data. For example, did the cost per student account for different graduation rates? (It did not.) Did the study attempt to account for educational activities, such as research, that are not directly related to instructional cost? (No.) And are the NSSE parameters the best measure of “value”? One questioner noted that the less expensive schools keep costs down by using fewer tenured faculty, who are researchers as well as teachers, and using instead more adjuncts, who are paid less simply to teach. When asked whether he thought it was important to the intellectual enterprise that faculty at the high-end institutions do research as well as teaching, Blaich responded that “research is a good, yes,” but that most liberal-arts colleges identify themselves as teaching institutions. It seemed clear at the time, and in conversations afterwards, that what many listeners found most problematic about the presentation was not simply the idea that a less expensive school might be very effective according to the terms of the NSSE survey, but rather the use of the limited NSSE parameters as a simple proxy for “value.”
The session did not offer any crystal-ball predictions about the economic future of liberal arts colleges. The presenters, who have not yet published their data, seemed confident that their chart offers clear evidence that some educational environments are more expensive than they need to be, and it certainly seems likely that circulation of this information will provide grist for current public debates about the high cost of elite liberal arts colleges. But as Nate Silver points out in The Signal and the Noise, data are most effective in making predictions when you have multiple data sets that can be aggregated and cross-checked. The data presented in this session were fascinating, but the analysis (admittedly a preliminary one) seemed somewhat reductionist in its reliance on a single set of measures. While NSSE is currently the most popular assessment tool being used by colleges and universities, recent studies have questioned its validity as a measure of student success, noting the “lack of any statistically significant relationship between NSSE scores and either grades or retention rates.” Those of us who teach and work at liberal arts colleges are not likely to be persuaded, without an argument, that aspects of the liberal arts college environment not captured by the NSSE data are irrelevant to the value of the experience.
We all recognize the urgency of the need to reexamine our economic model and to focus our resources in ways that best serve our core mission. But if the argument is to be framed in terms of economic value, we cannot afford to dismiss the idea that the reputation or prestige of particular institutions is part of the “value” that leads students to prefer diplomas from one college over another. Some of this value results from long-term investments in the academic enterprise that are not visible in student responses to a general survey about their own level of engagement in particular activities and practices. This study is an important intervention in a critical debate, and it deserves the attention it is likely to receive. But we should be wary of allowing it to dominate the “value proposition” discussion.