On (not) accounting for “intensities” in student learning

Like most liberal arts and sciences faculty at most American colleges and universities right now, I am asked, seemingly constantly, to justify and account for what I do in languages and metrics acceptable to administrators and, indirectly, to external constituencies like state legislators and members of governing boards. Typically, the preferred measurements are quantitative and extensive: number of students enrolled at the university, number of students enrolled in course sections, number of students enrolled in a major. Assessment of student learning is sometimes more nuanced than that, but still revolves around collective measurements and on drawing out generalizations regarding what is happening in the classroom (not to mention keyed to pre-determined priorities which may or may not have been articulated by teaching faculty).

As someone who works in the qualitative and interpretive areas of the social sciences, I am troubled by the biases in how my “success” or “failure”, or my field’s worth to the university, is most commonly measured. In one sense I am frustrated because by these measures, I often end up lacking, and this, of course, fosters anxiety about my position at the university. Most terms I have at least one upper division course that falls short of enrollment targets and I then have to spend time and energy justifying the offering. Similarly, while, historically, the geography faculty teach a significant number of students as part of the general education curriculum and play an important service function for a number of other programs, we tend not to attract a large number of majors, and there are very few students who would report that they decided to attend Western specifically for the geography department. So, yes, on one level, I have a material interest in how the university defines what matters and what doesn’t.

On the other hand, I am troubled by the nature of these choices because I think that there are crucial aspects of the college experience for students, in particular, that are missed by the focus on numbers and generalization.

To provide one example, I have a student this term in my introductory cultural geography course who clearly found our discussion of the body, sex, and gender to be revelatory, maybe even life-changing. I base this judgment on what I’ve seen from this individual in their writing and in their responses to routine learning assessments in class.

Maybe this student will take this experience to mean that they should major in geography or, at least, that they should take more geography courses. As much as I would love for either of these eventualities to come true, my experience tells me that the former is highly unlikely and the latter, while more likely, will largely depend on what the student’s program of study ends up being rather than on what they find natively interesting. The salient point here is that this student’s experience is unlikely to be captured by two of the most commonly referenced measures of success or worth at my university, namely, number of majors and course enrollments.

Furthermore, this student’s experience in my class may, from an institutional perspective, actually benefit programs other than my own. Maybe they become a gender studies minor or choose to focus their studies in their major on the body. Maybe they connect what they learned in my class to a class in some other department and for whatever reason choose that field as their major. Maybe this student defines their future education, job and career paths around these kinds of topics. Or maybe this experience simply enriches their understanding of who they are and what they do in the world. I’d be happy with any of these outcomes, but given the way success and worth are counted at my university, faculty are actually given an incentive to compete with each other over students like this rather than to encourage the student to pursue their interests and passions in ways that make the most sense or appeal to them.

There is a chance that some part of this student’s experience could be captured by assessment of departmental learning outcomes, but as chance would have it, the outcome we were looking at this year prompted me to pick a different area of the course for my contribution. Even if that had not been the case, though, what I am writing about here is not whether or what students learn, but what’s meaningful about that learning. This student doesn’t stand out for how well they learned what I intended, but for the intensity of their response to the material.

Students are affected in different ways by what they do in their classes. Because so many enroll in my courses without really understanding what they are going to be learning, I frequently have students who report some kind of transformative experience as a result of having taken a course. Sometimes this stops at, “Wow, I had no idea that this is what geographers do,” but in other cases, more rare, but still notable, the response is more profound. I also often have students leave my courses having discovered a love of comics or a new appreciation of film. Needless to say, our departmental learning outcomes aren’t designed to anticipate these kinds of individual responses to material.

More to the point, no one in university administration is asking me or my colleagues to try to gauge these kinds of intensities, or to “count” these qualitative aspects of student learning when demonstrating what we do and why we matter. Fill slots. Acquire majors. Demonstrate what students, in aggregate, are learning. These, and especially the first two, are what drive decisions about faculty lines and non-tenure track hires. I’m not going to suggest that this will be true for everyone, but I would not be surprised if, for many students at schools like mine (smaller state schools with an undergraduate teaching focus and, nominally at least, a liberal arts mission), the most significant course or courses they take, over the course of their lifetimes, are just as likely to come from general education offerings as from the more specialized coursework in their majors. Service like that to the university is seemingly discounted by every measure that matters in terms of material resource allocation.

Fundamentally, students are no longer being treated as students, but as tuition checks, and higher education has been reduced to a product. Departmental faculty are valued according to how much product they produce in the form of degrees conferred. Any other reason or value for what a university does is treated as frippery by just about anyone with immediate power to shape the institution.

Fall is coming

Well, it’s actually already kind of here for faculty in the OUS. Classes don’t begin until next week, hence the title, but this week faculty were called back to campus for rituals (e.g., state of the university and college addresses), welcoming and orienting new students, administrative functions (e.g., committee meetings), and, finally, class prep.

I’ve been in a tenure-track/tenured position at Western Oregon for just over ten years now, and it has taken this long to have a fall where I’ve felt as if I’ve been able to construct my courses in an efficient and minimally stressful way (most of the stress this week has come from having to work course prep in between other responsibilities, not from my upcoming classes).

One reason why I often start Fall term behind the curve, or at least feeling that way, is that summers are usually the only time I have for sustained progress on research and scholarship. I imagine that this is true for most who work at undergraduate teaching focused institutions like mine. This summer, though, I was at the dissemination stage of my current project, and while I had a book chapter to write, that was, more or less, a digest of the same project. Dealing with getting work out into the world has been its own thing, of course, but nothing like being in the field or at the editing station.

When I write above about ‘constructing my courses’, I mean mainly drafting the syllabi. Summers are also time to read and look for resources. Typically, I will defer this kind of work to attend to non-course related scholarship, which leaves me having to do those tasks while teaching my courses. This year I had the luxury not to have to do that and was able to get my deeper prep done a week or so before reporting back to campus.

I am an incorrigible tinkerer when it comes to my courses. This year I was overhauling two of the three I am scheduled to teach in the Fall (one, I’ve already written about). What saved me this summer was not making any major changes to my introductory cultural geography course. That is an offering that I have been perpetually dissatisfied with, but for which I have finally found an approach that works for me, and seems to work for students, too, at least in the ways I would like it to.

One image that non-teachers have of teachers, at whatever level, but maybe particularly of professors, is that of someone who essentially coasts, working off of the same set of notes decade after decade, freeing them from actually having to work in their courses.

I’ve never met that person.

There’s no question that my current colleagues work harder at making their courses approachable, interesting, and appropriately challenging for undergraduates than did most, if not all, of the professors in my graduate department, but I think that’s understandable. Faculty who work primarily with graduate students and undergraduate majors can, and should, see teaching differently than should faculty, like myself and most of my social science and humanities colleagues, who teach primarily undergraduates and a significant number of non-majors. However, that difference hardly becomes the equivalent of ‘dead wood’. On the contrary, while, pedagogically, a graduate seminar is a graduate seminar, readings are always in flux. I imagine that this is also true for the advanced undergraduate courses tied to professor interests.

At present, I’ve observed that some of my more senior colleagues seem to have certain courses, ones that they’ve been teaching consistently for more than a decade or two, ‘down’, but there are also irregular courses, or newly developed courses, that require substantial preparation to work.

In short, I don’t think I personally know any college or university faculty whose courses are entirely static, which is why I find some of the other functions of the opening week of Fall, many of them administrative, to be frustrating or stressful.

One primary administrative concern right now is “assessment” and getting faculty to do it. But given that no one’s courses stay exactly the same, I think it is fair to say that class prep necessarily entails assessment, and the kind of assessment that everyone is supposed to want: the kind that leads to improvements in the classroom.

The problem that I’m not sure anyone has entirely figured out is how to articulate the kinds of assessment that lead to actual changes in the way courses are taught while also satisfying administrative imperatives for data that can be used in accreditation, or in talking to boards and legislators.

The data that I use to change and improve my courses are rarely the kinds of things that are captured in institutional evaluations or templates for reporting on assessment. The data are scattered, sometimes impressionistic, often open-ended, and come from my experience of being in the classroom with students. Administratively, faculty are expected to make discrete a process that is almost necessarily ongoing and messy, one that rarely maps neatly onto formulaic statements of outcomes or ‘themes’ (‘core themes’ are a recent turn in jargon, at least for our accreditation).

Our college’s administrators seem to have finally settled on a set of procedures and forms, so maybe there is an opening to establish a departmental routine that suffices for everyone, instead of having, essentially, two separate processes, one for real work in the classroom and one for bookkeeping. I’ve been told that this is possible, but I have yet to see it in action. (Nothing makes me more skeptical about the push for assessment than being given models from other departments that entail, essentially, standardized testing, not only because it gives the lie to all of the assurances faculty are given regarding autonomy and what kinds of instruments or data are acceptable, but also because I have a hard time thinking that the results of such tests are at all effective in suggesting what might work better in the classroom. Identifying knowledge gaps is one thing; figuring out what to do about those gaps is another.)

Tension in the classroom

This week in learning assessments, there seem to be some odd tensions roiling under the surface of one of my classes, with students expressing frustration with:

  • Students who don’t show up to class.
  • Students who don’t contribute to class.
  • Students who contribute too much in class.
  • Students who show up not having done the reading.

In the case of the third point, at least one student noted my efforts to elicit participation from others as something they found helpful.

In the past, I have opened discussion of these kinds of issues on the class blog, but I think that the second week is too early for that. For now, I’ll wait and see how the culture of the class develops. These frustrations do suggest that I have a number of students who feel invested in the course, and that is a good thing, but these kinds of expressions also seem to represent feelings that could turn toxic if not managed well.

Distanced

One thing I do in my classes on a regular basis is administer short “learning assessments”, which sometimes focus on content, along the lines of a “one-minute essay”, and at other times on process.

I routinely get fascinating data from this question, taken from Stephen Brookfield’s “Critical Incident Questionnaire” (pdf):

“At what moment in class this week did you feel most distanced from what was happening?”

Students admit to all kinds of things in answering this question, from not having gotten enough sleep to not having done the reading or being distracted by their phones. One intent of a question like this is to encourage students to reflect on how they learn and to make adjustments that would, for example, lead to feeling less distanced in class.

This past week, in reading over some of my assessments, something clicked that had puzzled me before: the way in which students will often write that they felt most distanced when we talked about something that they found alien or uncomfortable, or that they don’t like talking about.

Previously, for some reason, maybe in response to how students articulate their answers to this question, or maybe from being unable to get out of my own frames of reference, I thought that students who wrote about feeling distanced by discomfiting or unfamiliar material were just being incurious or were unsure about the question.

As I thought about one of the assessments I got back this week, I realized that most of the students who gave that kind of answer were likely addressing the question directly and understood it quite well.

From my perspective, one of the points of getting an education, of going to college, is to encounter the unfamiliar and to be made uncomfortable by what I don’t know or understand or have not yet experienced. This would (and did) make me feel less distanced as a student.

Of course, and I have known this for a while, many of the students I have in my classes don’t look at their educations that way. If one wants an education to be credentialed or to affirm ideas and choices already made or already learned, then it makes sense, now at least, that unfamiliar or uncanny material would lead one to feel distanced by what was happening in the classroom.

Here’s the thing I am left with: this looks like it should be an opportunity, not a barrier to learning. It seems unlikely that students who express feelings of being distanced in this way aren’t also learning at the same time (and, in fact, that could be part of why they feel distanced; if they were simply resistant, then whatever was bothering them about class would be just as likely to be deflected or written off as nonsense as to be getting under their skin).

However, I’m not sure how to turn this kind of data to my advantage. I should, maybe, start keeping track of what students say makes them feel distanced and see if I can notice any clusters around certain topics or activities. Or maybe I am seeing how some students natively process new or unfamiliar material or ways of learning, and I should re-interpret some of these answers as positive indications of what’s happening in class and not as problems to be solved.

In any case, it’s gratifying when these exercises seem to produce actual insight or meaningful results (as opposed to, say, our institutional evaluations. Ha! I kid because I care).

Teaching updates: intro course and small classes

Before my Spring responsibilities become too involving, I wanted to check in here on the two main teaching issues I have been writing about: my Introductory Cultural Geography course and managing my very small classes.

During and after Fall term, I was optimistic about the changes I had made to the syllabus for intro cultural geography. In Winter, I learned that my caution regarding the reasons for that optimism was well founded. My Winter class did not work as well as Fall.

My main index of this is how the students in the two classes made use of Question Time, which is a period I have every meeting wherein students can ask questions about the syllabus, about assignments, or about class material, both current and from prior sessions. I also have Question Time entries each week on the class blog. Participation in Question Time counts for a minor part, one-tenth, of the grade in the class.

In Fall, I had a number of students earn full credit for Question Time, and an even larger number who were one or two points away from full credit. In Winter, no students earned full credit, and while that came as a surprise, it was also consistent with the fact that, on many days, Question Time went unused, and while some of that time was made up on the blog, most of it was not.

More to the point is the quality of the Question Time periods in the two classes. In Fall, we often had lively, relevant discussions about class material, both from earlier classes and from what was due that same day. In Winter, students rarely, if ever, used Question Time to initiate discussions about material. Virtually all uses of Question Time were directed at the syllabus and related issues like assignment deadlines and requirements.

In short, my Winter class was much less engaged than my Fall class.

On the other hand, what I learned from students is that my new book selections worked in both terms, if not quite to the same effect. In both terms, students conveyed an appreciation for the Anderson text in terms of its accessibility and ability to hold their interest. And in both terms students also found the secondary text I chose to be useful in better understanding how to use concepts from the main textbook. The main difference is that in Fall I also saw these findings in action, while in Winter I did not, at least not to the same extent.

Students also continue to indicate that learning by doing is preferable to learning by exam. Again, I saw this, as well as heard or read it, from students more in Fall than in Winter.

I am often surprised by what students will connect with in a course. What seemed like disaffection to me last term was apparently a very different kind of engagement from the perspective of many students.

Early in the quarter, Spring is looking more like Fall than Winter.

Last term I also wrote about a couple of very small classes (fewer than ten students each) that I decided to run like tutorials or readings courses. On balance, those experiments ended up working well.

Students reported to me that being fully responsible for completing the reading compelled them to engage more closely with class texts, and also encouraged them to make their own meaning of the material. I saw this reflected in the weekly work students had to submit related to the readings as well as in end-of-the-term assessments. Most students also indicated that they liked the freedom and responsibility of the tutorial format, even where they were not initially comfortable with having to manage their own work to the extent that they had to in these classes.

On the other hand, virtually all students expressed that they would have liked more frequent meetings of the whole class to discuss the material and reconnect. This seems consistent with my experience regarding how often individual students chose to consult with me about the reading: early in the term this happened more frequently and with more students than it did later on. Student desires for more meeting time suggest a high degree of self-awareness regarding their work habits. Whether that awareness was cultivated during these classes, or these students were already aware that they would be less likely to participate in discussions absent regular meetings, I don’t know.

Next time I teach a course in this way, I will likely either build in more class meetings, or try scheduling consultations with individual students to address these kinds of concerns about feeling adrift or missing out on opportunities for discussion.

I am, however, not sure when I will next be confronted with having to manage a small class like this. I have one this term, but the material is too complicated, and the role the course plays in our major is such that I did not think that the tutorial or readings structure would be appropriate, so I am running it more like a traditional seminar.

Meaningful assessment & avoiding black holes

Working off of a couple of other blog posts, notably this one by Historiann, Dr. Crazy argues for a view of assessment in higher education that gets out from under the usual pitched battles between those who see a need for “accountability” and those who want faculty to retain/reclaim control over curriculum. Both sides agree that students are not learning as well as they should be, but disagree over the nature of the problem, the role that external constituencies should have in shaping the direction of higher education, and how to assess or determine whether learning is happening at all. For the former group, appeasing external constituencies (boards, legislators, accreditors, taxpayers) is paramount; for the latter, it is an unwanted interference, at least at the level of determining what students should be learning.

Historiann counters in the first comment to Crazy’s piece that assessment in many places is already happening, but that the data is generally ignored, especially when it is inconvenient, i.e., when it suggests that students would be better served by additional faculty hires, smaller class sizes, etc., or when the only function it serves is to show external constituencies that an institution is “serious” about holding faculty accountable. In that case, all you really need is a file cabinet, in an office or on a server, that you can point to when someone starts asking questions. With so many institutions understaffed, and looking at increasing enrollments, there are more serious things to worry about than “accountability” and “assessment”.

While I admire Dr. Crazy’s will to find ways to make assessment work for faculty, and most importantly, students, I am more in agreement with Historiann, at least in the sense that I think that, in too many cases, the whole process has been corrupted by having been associated with satisfying external constituencies, and from a faculty perspective, administrative imperatives tied to those constituencies, and not with actually improving student learning.

That point, that assessment needs to be about students, both in the sense of seeing them as active agents in their own learning and in making their learning the point of the process, is Dr. Crazy’s strongest argument for faculty taking a constructive role in crafting assessment regimes, and not just chafing under the pressure to take part so that administrators can look shiny for boards, accreditors, legislators, and the public.

As far as I can tell, in situations where faculty have initiated programs to assess student learning within their courses, majors, and minors, there is value for students in terms of curriculum reforms that help to address deficiencies and student needs in relationship to what faculty think students should be learning in their fields. I’ve seen this on my own campus where certain departments, for whatever reason, were out in front of the current mania for assessment.

The problem is that for other departments the assessment discussion has been started not as a faculty-student issue, but as an administration-to-faculty matter. What you end up with in terms of instruments, ends, and what counts as data looks very different when the conversation begins that way. Even more problematic are those circumstances where faculty have tools for assessment in place, but those tools are dismissed because they don’t fit into the right kinds of boxes for administrative review. In our last round of accreditation, every department in Social Science, even those with theses and senior capstones, and where faculty could document how those experiences had been used to improve curriculum for students, was told that they lacked adequate mechanisms for assessing student learning.

This is where Historiann is right: many faculty already do assessments of student learning. Where assessment becomes “makework” is where we have to turn that into some kind of report or document that serves administrative goals related to “accountability”. I have my own experiences with the Black Hole of Information, sometimes never even getting to the point of reporting the results of an assessment activity, but simply submitting the instrument used without ever being asked for the data.

(And one side issue here is how the terms of assessment are constantly changing. I can look at my syllabi and track the shift from “objectives” to “outcomes”, and now accreditors in our region want every institution to have “core themes”, and have changed both the accrediting standards and the schedule for visits. As a consequence, as faculty, we get an ever evolving, and not entirely rational, set of requests from our Dean for this or that kind of assessment using this or that kind of form, most frequently, with great desperation, near the end of Spring term.

Which just goes to Crazy’s point that this should be about students, and it isn’t).

I was thinking the other day how my class blogs, especially the one I have for my intro classes, are treasure troves for assessment of student learning. I mine a lot of information out of those discussions regarding what my students are and are not getting out of the course, what is working for them and what is not, and combined with in-class assessment I do on a weekly basis, I am continually generating notes on how to change or improve the course.

To the best of my knowledge, though, I cannot satisfy administrative requests for data by pointing them to the blogs. And I don’t have the time or energy to digest the useful stuff for others, let alone in some kind of format or medium that is bureaucratically useful. Does my university have a person or department who could or would do that kind of work? Of course not.

What we do have is a software service that will pull together all manner of standardized and quantitative data for interested constituencies to peer at. The extent to which assessment has become an industry also plays a role in the corruption that has infected this issue, perhaps irreparably, at least when it comes to getting us to where Crazy would like everyone to be.

One of the other commenters at Reassigned Time 2.0, Susan, remarks that one area where formalized assessment activities of the kind currently required of faculty have been useful is in thinking about curriculum in a programmatic way: what do we want our students to be learning, and what role do different courses play in achieving those outcomes? That’s a useful conversation to have.

But I also agree with Earnest English that, while statements of learning outcomes, the usual first step toward assessment in the current parlance, are useful on a macro level for programs, and for individual courses, on a day-to-day, student-to-student basis, I don’t know how much this process matters, or the extent to which it obscures more than it reveals. As this commenter notes, what any individual “gets” out of a class is going to be highly variable; maybe it corresponds neatly with the predetermined outcome and maybe it does not. What I do know is that I rarely think about these outcomes after I’ve finished writing a syllabus and am in the thick of actually teaching actual students.

Of course, I can always take what happens in the course of actual teaching and make it conform to statements of outcomes if I want or need to. I can make this easier by seeding my assignments and class materials with keywords taken from the outcomes, but, at a basic level, what does that provide evidence of except my ability to manipulate language? I think what Earnest English is getting at is that it is very difficult to account for what kinds of knowledge students will take and make from their classes, and, however useful it might be to set a few marks to meet as far as what we think our students ought to be learning, it would be a mistake to treat learning as a simple matter of hitting those marks. More to the point, it is a mistake for faculty to give in to a system of assessment that rests on holding faculty “accountable” to the kinds of simple statements of outcomes with which we litter our catalogs and syllabi.

In the spirit of Dr. Crazy’s appeal to get out of the morass of faculty vs. admin vs. everyone else, to me the only helpful thing would be a time out on all official demands for assessment so that faculty, who need to catch up to the bureaucratic curve, can do so in a way where we get to think about students first and admin and everyone else second, or even as afterthoughts. And then admin and everyone else can come up with the means by which that work is translated into something that meets their needs.

My university is in the same situation as Historiann’s right now. We have skyrocketing enrollment, but a tenure track faculty that hasn’t grown significantly in a decade or more. In my division, at least, adjunct hires have contracted over time, even to replace faculty on sabbatical or with course releases. We have administrators scrutinizing registration so that low-enrolling upper division courses can be cancelled and faculty reassigned to teach higher-enrolling lower division offerings. This is how we are trying to deal with our increased enrollments instead of hiring new faculty, a solution that undercuts one of the university’s other goals, which is to have students complete their degrees in four years.

That software service I mentioned costs money. Doing all of the “makework” to fulfill system and accreditor demands for data costs money or faculty time that, as above, we don’t have. Like Historiann, I think that most faculty have more to worry about than doing assessment in the properly bureaucratic manner, or to generate data and instruments that simply disappear into a black hole in the Dean’s office.

Recommended daily reading – 21 September

Nothing from yesterday, but here are some items from today:

On Reassigned Time 2.0, Dr. Crazy makes the case for college and university faculty to use their power over curriculum and assessment in productive ways, and to not cede that control to others simply because the work can be sticky and boring. I left a comment, but I’ll add here that what I wrote on Dr. Crazy’s blog is from recent experience as Faculty Senate President and as the chair of a committee charged with assessing general education at WOU. Short version: I basically agree with Crazy, but am not convinced that faculty can or should control assessment to the same degree as curriculum (though the two are not easy to tease apart, which is why, I think, she approaches them together).

On BLDGBLOG, Geoff Manaugh speculates on the architectural possibilities of the recent, and widely publicized, nine-day traffic jam in China.

Last, on Top Shelf 2.0 you can try to puzzle out Tymothi Godek’s “!”.