On (not) accounting for “intensities” in student learning

Like most liberal arts and sciences faculty at most American colleges and universities right now, I am asked, seemingly constantly, to justify and account for what I do in languages and metrics acceptable to administrators and, indirectly, to external constituencies like state legislators and members of governing boards. Typically, the preferred measurements are quantitative and extensive: number of students enrolled at the university, number of students enrolled in course sections, number of students enrolled in a major. Assessment of student learning is sometimes more nuanced than that, but it still revolves around collective measurements and drawing out generalizations regarding what is happening in the classroom (not to mention being keyed to pre-determined priorities that may or may not have been articulated by teaching faculty).

As someone who works in the qualitative and interpretive areas of the social sciences, I am troubled by the biases in how my “success” or “failure”, or my field’s worth to the university, is most commonly measured. In one sense I am frustrated because, by these measures, I often end up lacking, and this, of course, fosters anxiety about my position at the university. Most terms I have at least one upper-division course that falls short of enrollment targets, and I then have to spend time and energy justifying the offering. Similarly, while the geography faculty have historically taught a significant number of students as part of the general education curriculum and play an important service function for a number of other programs, we tend not to attract a large number of majors, and very few students would report that they decided to attend Western specifically for the geography department. So, yes, on one level, I have a material interest in how the university defines what matters and what doesn’t.

On the other hand, I am troubled by the nature of these choices because I think there are crucial aspects of the college experience, for students in particular, that are missed by the focus on numbers and generalization.

To provide one example, I have a student this term from my introductory cultural geography course who clearly found our discussion of the body, sex, and gender to be revelatory, maybe even life-changing. I base this judgment on what I’ve seen from this individual in their writing and in their responses to routine learning assessments in class.

Maybe this student will take this experience to mean that they should major in geography or, at least, that they should take more geography courses. As much as I would love for either of these eventualities to come true, my experience tells me that the former is highly unlikely and the latter, while more likely, will largely depend on what the student’s program of study ends up being rather than on what they find natively interesting. The salient point here is that this student’s experience is unlikely to be captured by two of the most commonly referenced measures of success or worth at my university, namely, number of majors and course enrollments.

Furthermore, this student’s experience in my class may, from an institutional perspective, actually benefit programs other than my own. Maybe they become a gender studies minor or choose to focus their studies in their major on the body. Maybe they connect what they learned in my class to a class in some other department and for whatever reason choose that field as their major. Maybe this student defines their future education, job, and career paths around these kinds of topics. Or maybe this experience simply enriches their understanding of who they are and what they do in the world. I’d be happy with any of these outcomes, but given the way success and worth are counted at my university, faculty actually have an incentive to compete with each other over students like this rather than to encourage the student to pursue their interests and passions in ways that make the most sense or appeal to them.

Some part of this student’s experience could conceivably be captured by assessment of departmental learning outcomes, but as chance would have it, the outcome we were examining this year prompted me to pick a different area of the course for my contribution. Even so, what I am writing about here is not whether or what students learn, but what’s meaningful about that learning. This student doesn’t stand out for how well they learned what I intended, but for the intensity of their response to the material.

Students are affected in different ways by what they do in their classes. Because so many enroll in my courses without really understanding what they are going to be learning, I frequently have students who report some kind of transformative experience as a result of having taken a course. Sometimes this stops at, “Wow, I had no idea that this is what geographers do,” but in other cases, rarer but still notable, the response is more profound. I also often have students leave my courses having discovered a love of comics or a new appreciation of film. Needless to say, our departmental learning outcomes aren’t designed to anticipate these kinds of individual responses to material.

More to the point, no one in university administration is asking me or my colleagues to try to gauge these kinds of intensities, or to “count” these qualitative aspects of student learning when demonstrating what we do and why we matter. Fill slots. Acquire majors. Demonstrate what students, in aggregate, are learning. These, and especially the first two, are what drive decisions about faculty lines and non-tenure-track hires. I’m not going to suggest that this will be true for everyone, but I would not be surprised if, for many students at schools like mine (smaller state schools with an undergraduate teaching focus and, nominally at least, a liberal arts mission), the most significant courses they take, judged over their lifetimes, are just as likely to come from the general education offerings as from the more specialized coursework in their majors. Service like that to the university is seemingly discounted by every measure that matters in terms of material resource allocation.

Fundamentally, students are no longer being treated as students, but as tuition checks, and higher education has been reduced to a product. Departmental faculty are valued according to how much product they produce in the form of degrees conferred. Any other reason or value for what a university does is treated as frippery by just about anyone with immediate power to shape the institution.

Fall is coming

Well, it’s actually already kind of here for faculty in the OUS. Classes don’t begin until next week, hence the title, but this week faculty were called back to campus for rituals (e.g., state of the university and college addresses), welcoming and orienting new students, administrative functions (e.g., committee meetings), and, finally, class prep.

I’ve been in a tenure-track/tenured position at Western Oregon for just over ten years now, and it has taken this long to have a fall where I’ve felt as if I’ve been able to construct my courses in an efficient and minimally stressful way (most of the stress this week has come from having to work course prep in between other responsibilities, not from my upcoming classes).

One reason I often start Fall term behind the curve, or at least feeling that way, is that summers are usually the only time I have for sustained progress on research and scholarship. I imagine that this is true for most who work at undergraduate-teaching-focused institutions like mine. This summer, though, I was at the dissemination stage of my current project, and while I had a book chapter to write, that was, more or less, a digest of the same project. Dealing with getting work out into the world has been its own thing, of course, but nothing like being in the field or at the editing station.

When I write above about ‘constructing my courses’, I mean mainly drafting the syllabi. Summers are also time to read and look for resources. Typically, I will defer this kind of work to attend to non-course-related scholarship, which leaves me having to do those tasks while teaching my courses. This year I had the luxury of not having to do that and was able to get my deeper prep done a week or so before reporting back to campus.

I am an impossible tinkerer when it comes to my courses. This year I was overhauling two of the three I am scheduled to teach in the Fall (one, I’ve already written about). What saved me this summer was not making any major changes to my introductory cultural geography course. That is an offering that I have been perpetually dissatisfied with, but for which I have finally found an approach that works for me, and seems to work for students, too, at least in the ways I would like it to.

One image that non-teachers have of teachers, at whatever level, but maybe particularly of professors, is that of someone who essentially coasts, working off of the same set of notes decade after decade, freeing them from actually having to work in their courses.

I’ve never met that person.

There’s no question that my current colleagues work harder at making their courses approachable, interesting, and appropriately challenging for undergraduates than did most, if not all, of the professors in my graduate department, but I think that’s understandable. Faculty who work primarily with graduate students and undergraduate majors can, and should, see teaching differently than faculty, like myself and most of my social science and humanities colleagues, who teach primarily undergraduates and a significant number of non-majors. However, that difference hardly makes them the equivalent of ‘dead wood’. On the contrary, while, pedagogically, a graduate seminar is a graduate seminar, readings are always in flux. I imagine that this is also true for advanced undergraduate courses tied to professors’ interests.

At present, I’ve observed that some of my more senior colleagues seem to have certain courses, ones that they’ve been teaching consistently for more than a decade or two, ‘down’, but there are also irregular courses, or newly developed courses, that require substantial preparation to work.

In short, I don’t think I personally know any college or university faculty whose courses are entirely static, which is why I find some of the other functions of the opening week of Fall, many of them administrative, to be frustrating or stressful.

One primary administrative concern right now is “assessment” and getting faculty to do it, but I think it is fair to say, given that no one’s courses stay exactly the same, that class prep necessarily entails assessment, and the kind of assessment that everyone is supposed to want: the kind that leads to improvements in the classroom.

The problem I’m not sure anyone has figured out entirely is how to articulate the kinds of assessment that lead to actual changes in the way courses are taught and that also satisfy administrative imperatives for data that can be used in accreditation, or in talking to boards and legislators.

The data that I use to change and improve my courses are rarely the kinds of things that are captured in institutional evaluations or templates for reporting on assessment. The data are scattered, sometimes impressionistic, often open-ended, and come from my experience of being in the classroom with students. Administratively, faculty are expected to make discrete a process that is almost necessarily ongoing and messy, rarely mapping neatly onto formulaic statements of outcomes or ‘themes’ (‘core themes’ are a recent turn in jargon, at least for our accreditation).

Our college’s administrators seem to have finally settled on a set of procedures and forms, so maybe there is an opening to establish a departmental routine that suffices for everyone, instead of having, essentially, two separate processes, one for real work in the classroom and one for bookkeeping. I’ve been told that this is possible, but I have yet to see it in action (nothing makes me more skeptical about the push for assessment than being given models from other departments that entail, essentially, standardized testing, not only because it gives the lie to all of the assurances faculty are given regarding autonomy and what kinds of instruments or data are acceptable, but also because I have a hard time believing that the results of such tests are at all effective in suggesting what might work better in the classroom. Identifying knowledge gaps is one thing; figuring out what to do about those gaps is another).

Tension in the classroom

This week in learning assessments, there seem to be some odd tensions roiling under the surface of one of my classes, with students expressing frustration with:

  • Students who don’t show up to class.
  • Students who don’t contribute to class.
  • Students who contribute too much in class.
  • Students who show up not having done the reading.

In the case of the third point, at least one student noted my efforts to elicit participation from others as something they found helpful.

In the past, I have opened discussion of these kinds of issues on the class blog, but I think that the second week is too early for that. For now, I’ll wait and see how the culture of the class develops. These frustrations do suggest that I have a number of students who feel invested in the course, and that is a good thing, but these kinds of expressions also seem to represent feelings that could turn toxic if not managed well.

Distanced

One thing I do in my classes on a regular basis is administer short “learning assessments”, which sometimes focus on content, along the lines of a “one-minute essay”, and at other times on process.

I routinely get fascinating data from this question from Stephen Brookfield’s “Critical Incident Questionnaire” (pdf):

“At what moment in class this week did you feel most distanced from what was happening?”

Students admit to all kinds of things in answering this question, from not having gotten enough sleep to not having done the reading or being distracted by their phones. One intent of a question like this is to encourage students to reflect on how they learn and to make adjustments that would, for example, lead to feeling less distanced in class.

This past week, in reading over some of my assessments, something clicked that had puzzled me before: the way students will often write that they felt most distanced when we talked about something they found alien or uncomfortable or that they don’t like talking about.

Previously, for some reason, maybe in response to how students articulate their answers to this question, or maybe from being unable to get out of my own frames of reference, I thought that students who wrote about feeling distanced by discomfiting or unfamiliar material were just being incurious or were unsure about the question.

As I thought about one of the assessments I got back this week, I realized that most of the students who gave that kind of answer were likely addressing the question directly and understood it quite well.

From my perspective, one of the points of getting an education, of going to college, is to encounter the unfamiliar and to be made uncomfortable by what I don’t know or understand or have not yet experienced. This would (and did) make me feel less distanced as a student.

Of course, and I have known this for a while, many of the students I have in my classes don’t look at their educations that way. If one wants an education in order to be credentialed, or to affirm ideas and choices already made or already learned, then it makes sense, now at least, that unfamiliar or uncanny material would lead one to feel distanced from what was happening in the classroom.

Here’s the thing I am left with: this looks like it should be an opportunity, not a barrier to learning. It seems unlikely that students who express feelings of being distanced in this way aren’t also learning at the same time (and, in fact, that could be part of why they feel distanced; if they were simply resistant, then whatever was bothering them about class would be just as likely to be deflected or written off as nonsense as to be getting under their skin).

However, I’m not sure how to turn this kind of data to my advantage. I should, maybe, start keeping track of what students say makes them feel distanced and see if I can notice any clusters around certain topics or activities. Or maybe I am seeing how some students natively process new or unfamiliar material or ways of learning, and I should re-interpret some of these answers as positive indications of what’s happening in class and not as problems to be solved.

In any case, it’s gratifying when these exercises seem to produce actual insight or meaningful results (as opposed to, say, our institutional evaluations. Ha! I kid because I care).

Teaching updates: intro course and small classes

Before my Spring responsibilities become too involving, I wanted to check in here on the two main teaching issues I have been writing about: my Introductory Cultural Geography course and managing my very small classes.

During and after Fall term, I was optimistic about the changes I had made to the syllabus for intro cultural geography. In Winter, I learned that my caution regarding the reasons for that optimism was well founded. My Winter class did not work as well as Fall’s.

My main index of this is how the students in the two classes made use of Question Time, which is a period I have every meeting wherein students can ask questions about the syllabus, about assignments, or about class material, both current and from prior sessions. I also have Question Time entries each week on the class blog. Participation in Question Time is a minor part of the grade in the class, one-tenth.

In Fall, I had a number of students earn full credit for Question Time, and an even larger number who were one or two points away from full credit. In Winter, no students earned full credit, and while that came as a surprise, it was also consistent with the fact that many days, Question Time would go unused, and while some of that time was made up on the blog, most of it was not.

More to the point is the quality of the Question Time periods in the two classes. In Fall, we often had lively, relevant discussions about class material, both from earlier classes and from what was due that same day. In Winter, students rarely, if ever, used Question Time to initiate discussions about material. Virtually all uses of Question Time were directed at the syllabus and related issues like assignment deadlines and requirements.

In short, my Winter class was much less engaged than my Fall class.

On the other hand, what I learned from students is that my new book selections worked in both terms, if not quite to the same effect. In both terms, students conveyed an appreciation for the Anderson text in terms of its accessibility and ability to hold their interest. And in both terms students also found the secondary text I chose to be useful in better understanding how to use concepts from the main textbook. The main difference is that in Fall I also saw these findings in action, while in Winter I did not, at least not to the same extent.

Students also continue to indicate that learning by doing is preferable to learning by exam. Again, I saw this, as well as heard or read it, from students more in Fall than in Winter.

I am often surprised by what students will connect with in a course. What seemed like disaffection to me last term was apparently a very different kind of engagement from the perspective of many students.

Early in the quarter, Spring is looking more like Fall than Winter.

Last term I also wrote about a couple of very small classes, fewer than ten students each, that I decided to run like tutorials or readings courses. On balance, those experiments ended up working well.

Students reported to me that being fully responsible for completing the reading compelled them to engage more closely with class texts, and also encouraged them to make their own meaning of the material. I saw this reflected in the weekly work students had to submit related to the readings as well as in end-of-the-term assessments. Most students also indicated that they liked the freedom and responsibility of the tutorial format, even where they were not initially comfortable with having to manage their own work to the extent that they had to in these classes.

On the other hand, virtually all students expressed that they would have liked more frequent meetings of the whole class to discuss the material and reconnect as a group. This seems consistent with my experience regarding how often individual students chose to consult with me about the reading: early in the term this happened more frequently and with more students than it did later in the term. Student desires for more meeting times suggest a high degree of self-awareness regarding their work habits. Whether that awareness was cultivated during these classes, or these students were already aware that they would be less likely to participate in discussions absent regular meetings, I don’t know.

Next time I teach a course in this way, I will likely either build in more class meetings or try scheduling consultations with individual students to address these kinds of concerns about feeling adrift or missing out on opportunities for discussion.

I am, however, not sure when I will next be confronted with having to manage a small class like this. I have one this term, but the material is too complicated, and the role the course plays in our major is such that I did not think that the tutorial or readings structure would be appropriate, so I am running it more like a traditional seminar.