On (not) accounting for “intensities” in student learning

Like most liberal arts and sciences faculty at most American colleges and universities right now, I am asked, seemingly constantly, to justify and account for what I do in languages and metrics acceptable to administrators and, indirectly, to external constituencies like state legislators and members of governing boards. Typically, the preferred measurements are quantitative and extensive: number of students enrolled at the university, number of students enrolled in course sections, number of students enrolled in a major. Assessment of student learning is sometimes more nuanced than that, but it still revolves around collective measurements and around drawing out generalizations regarding what is happening in the classroom (not to mention being keyed to pre-determined priorities which may or may not have been articulated by teaching faculty).

As someone who works in the qualitative and interpretive areas of the social sciences, I am troubled by the biases in how my “success” or “failure”, or my field’s worth to the university, is most commonly measured. In one sense I am frustrated because, by these measures, I often end up lacking, and this, of course, fosters anxiety about my position at the university. Most terms I have at least one upper-division course that falls short of enrollment targets, and I then have to spend time and energy justifying the offering. Similarly, while, historically, the geography faculty teach a significant number of students as part of the general education curriculum and play an important service function for a number of other programs, we tend not to attract a large number of majors, and there are very few students who would report that they decided to attend Western specifically for the geography department. So, yes, on one level, I have a material interest in how the university defines what matters and what doesn’t.

On the other hand, I am troubled by the nature of these choices because I think that there are crucial aspects of the college experience for students, in particular, that are missed by the focus on numbers and generalization.

To provide one example: I have a student this term in my introductory cultural geography course who clearly found our discussion of the body, sex, and gender to be revelatory, maybe even life-changing. I base this judgment on what I’ve seen from this individual in their writing and in their responses to routine learning assessments in class.

Maybe this student will take this experience to mean that they should major in geography or, at least, that they should take more geography courses. As much as I would love for either of these eventualities to come true, my experience tells me that the former is highly unlikely and the latter, while more likely, will largely depend on what the student’s program of study ends up being rather than on what they find natively interesting. The salient point here is that this student’s experience is unlikely to be captured by two of the most commonly referenced measures of success or worth at my university, namely, number of majors and course enrollments.

Furthermore, this student’s experience in my class may, from an institutional perspective, actually benefit programs other than my own. Maybe they become a gender studies minor or choose to focus their studies in their major on the body. Maybe they connect what they learned in my class to a class in some other department and for whatever reason choose that field as their major. Maybe this student defines their future education, job and career paths around these kinds of topics. Or maybe this experience simply enriches their understanding of who they are and what they do in the world. I’d be happy with any of these outcomes, but given the way success and worth are counted at my university, faculty are actually given an incentive to compete with each other over students like this rather than to encourage the student to pursue their interests and passions in ways that make the most sense or appeal to them.

There is a chance that some part of this student’s experience could be captured by assessment of departmental learning outcomes, but as chance would have it, we were looking at an outcome this year that prompted me to pick a different area of the course for my contribution. In any case, even if that had not been the case, what I am writing about here is not whether or what students learn, but what’s meaningful about that learning. This student doesn’t stand out for how well they learned what I intended, but for the intensity of their response to the material.

Students are affected in different ways by what they do in their classes. Because so many enroll in my courses without really understanding what they are going to be learning, I frequently have students who report some kind of transformative experience as a result of having taken a course. Sometimes this stops at, “Wow, I had no idea that this is what geographers do,” but in other cases, more rare, but still notable, the response is more profound. I also often have students leave my courses having discovered a love of comics or a new appreciation of film. Needless to say, our departmental learning outcomes aren’t designed to anticipate these kinds of individual responses to material.

More to the point, no one in university administration is asking me or my colleagues to try to gauge these kinds of intensities, or to “count” these qualitative aspects of student learning when demonstrating what we do and why we matter. Fill slots. Acquire majors. Demonstrate what students, in aggregate, are learning. These, and especially the first two, are what drive decisions about faculty lines and non-tenure-track hires. I’m not going to suggest that this will be true for everyone, but I would not be surprised if, for many students at schools like mine (smaller state schools with an undergraduate teaching focus and, nominally at least, a liberal arts mission), the most significant course or courses they take, measured over the course of their lifetimes, are just as likely to come from general education offerings as from the more specialized coursework in their majors. Service like that to the university is seemingly discounted by every measure that matters in terms of material resource allocation.

Fundamentally, students are no longer being treated as students, but as tuition checks, and higher education has been reduced to a product. Departmental faculty are valued according to how much product they produce in the form of degrees conferred. Any other reason or value for what a university does is treated as frippery by just about anyone with immediate power to shape the institution.

What’s in a number (or, trying to understand the minds of university administrators)

After seemingly retreating from obsessive micro-managing of student numbers, certain administrators at my university have recently returned to routinely flagging “low enrolling” sections term-by-term while now also pointing to numbers of majors when citing decisions about faculty lines and other forms of program support.

I, like most faculty I know, understand the need for sound financial management and also the value in periodically assessing faculty lines and the distribution of university resources. As part of that, I accept both that not all programs will be viewed equally by administrators, and that there may be a variety of bases for determining the value of programs. As faculty, you find ways to deal with administrative priorities in whatever way you can.

However, it is difficult to understand these kinds of judgments where the logic of administrative assessment and decision making cannot be followed or is not articulated to faculty. I can think of a few reasons why there might be an intense focus on “low enrollments” and “program size” on the part of administrators on my campus right now, but I don’t actually know which, if any, of these might be the correct answer to why the metrics that are being valorized are, in fact, being valorized.

To begin my guesses: the Oregon University System is being dissolved. The larger institutions, the University of Oregon, Portland State, and Oregon State, have already been freed to establish independent boards. For smaller campuses like mine, future governance is still to be determined. Our president has submitted a proposal to the legislature for an independent board.

It is possible that the obsession with enrollment minutiae has to do with demonstrating “strong management” to members of the legislature to bolster the case for independence, or is more practically concerned with scraping up nickels and dimes, particularly by reducing adjunct hires, but also by not renewing tenure-track lines, in order to finance the prospective board.

Second, after growing from 4,889 in 2006 to 6,233 in 2010, enrollments at Western have essentially settled to around 6,100. It is possible that the concern with micro-enrollments is a reaction to this macro-level contraction and leveling off (my data is from the Fall enrollment reports to OUS). The idea being, I suppose, to rationalize course numbers to what appears to be the popping of an enrollment “bubble” since 2010.

Third, and similar to the first, there have been a number of personnel changes in upper-level administration in the last year, particularly on the academic side of the university. What may be happening here is an attempt to establish some kind of “hard” management style or administrative united front. We have a strong union and a tradition of faculty control over curriculum. Focusing in on term-by-term enrollments and number of majors could be a strategy to divide faculty in competition over students.

A fourth possibility is that the concerns here are not so much about the present, but are about laying groundwork for future hiring decisions, that is, if administrators are on the record now expressing concern about “low enrolling courses” and “small programs”, then later, when decisions are made about faculty lines and adjunct sections on the basis of these metrics, no one can claim ignorance regarding the criteria for allocating faculty resources.

To extend this speculation further, maybe the targeting of section enrollments and rumbling about numbers of majors are expressions of an un-articulated plan for re-shaping the university, one imagines around a few select professional, pre-professional, or simply “practical” programs with currently high enrollments, while gradually downsizing, by attrition, many of the traditional liberal arts and sciences to supporting roles, with no majors and only a few upper-division courses as deemed necessary for the full programs that do remain.

This last point seems the most likely, not just because it imparts a clear logic to the imperatives at work here, but also because it seems consistent with recent actions on hiring. However, as noted, nothing like this reasoning has been directly or clearly shared with faculty.

For me, and many of my faculty colleagues, one of the biggest sources of frustration in this process is not knowing why we have to engage in the exercise of cutting or defending all sub-twelve sections every term. Assuming that the goal, or, okay, let’s say, “outcome”, is one that faculty can at least sympathize with, I am sure that many, maybe most, would be willing to reassess our course schedules and make whatever practical adjustments we can to maximize enrollment each term.

But right now no one particularly understands why every section of every course is expected to meet the same enrollment threshold every term, regardless of program, purpose or enrollment history (I’ll just note here that there are sections of courses on the schedule that are routinely capped at fewer than twelve). There are, perhaps, vestigial OUS directives being responded to, but that hardly explains the current level of intensity over course enrollments (and I don’t think it explains the attention being paid to number of majors at all), except, perhaps, as part of my first guess, where dutifully fulfilling system mandates, even as the system is being dismantled, will be looked at favorably in making the case for an independent board.

Of course, if I were in upper-level administration and I had a plan to change hiring practices to favor programs that met certain metrics for section enrollments and program size, or an even more radical plan for remaking the university around those numbers, I probably wouldn’t want to share that with faculty in a direct way either. That is, if the desired outcome is contraction of most of the academic programs at the university, I would not expect most faculty to embrace that outcome.

In an immediate political sense, the irony of the unspoken rationale or rationales behind the focus on both section enrollments and numbers of majors is that, currently, for faculty to make even the most minor of curricular changes requires detailed explanations (“we must have a culture of evidence!”) and an assessment plan for determining if the stated purposes, excuse me, “outcomes”, are met.

Right now faculty have no idea what the administrative outcomes are, what goals are meant to be achieved, by imposing an enrollment threshold of twelve and insisting that majors be of a certain size, or how those outcomes are to be assessed (and, it should be noted, the exact number for size of major has not been articulated).

Without that information these imperatives seem arbitrary. It isn’t difficult to think of other ways of measuring departmental health or vitality or value – whatever term you like – besides section-by-section enrollments or number of majors. Why not total section enrollments? Why not student credit hours? Why not annual enrollments? At a small school like mine, why look at the departmental scale, why not Division or College? Why look at number of majors instead of student contact hours? Why employ the same measurements or standards for all departments when different departments serve different primary functions, e.g., there are departments that are primarily service departments and there are departments that primarily serve majors? Why is teaching majors more important than teaching in the core?

My point isn’t that these would all be better measures than the ones being deployed. My point is that how you measure value or success should depend on your underlying purpose and not the other way around, that is, I don’t think what we do as faculty, or as a university community, should be driven by metrics that are selected prior to understanding what kind of a place we want the university to be. There is no prima facie or obvious value to section enrollments, or to twelve students, as opposed to any other number of students, or to numbers of majors. The fact that we don’t know what value is being ascribed to these measures is a far bigger problem for me right now than is the insistence that they be applied in the first place. It makes me wonder why these numbers are not being presented for discussion, but are simply being imposed.