
Monthly Archives: October 2013

Each of our readings on access, accessibility, and diversity has demonstrated the trouble that tacit assumptions make for fair writing assessments. For instance, Ball demonstrates through her study that a teacher’s experience and race affect how she perceives and responds to student work, just as the student’s race affects how teachers take up her work. Similarly, Haswell and Haswell demonstrate the complex effects of gender on the writing assessment process: male raters respond differently than female raters to male and female students, and female students respond differently to their peers’ work–depending on gender–than do their male counterparts. Most interestingly, all raters seem troubled by writing that is not easily identifiable as male or female–what Haswell and Haswell call “gender-switched” (414). Anson takes a slightly different approach by identifying the lack of research on race and diversity in WAC research and scholarship, which he attributes, in part, to the fact that WAC scholarship focuses on students as a “generalized construct” rather than on students as “individuals who bring specific histories, experiences, and ‘vernacular literacies’ to their learning” (23).

This tension between student-as-generalized-construct and student-as-culturally-and-historically-situated-individual raises a couple of questions for me. First and foremost, I think that many (if not most) of us would agree that teachers must work toward understanding individual students and their various literacy and cultural practices. Considering these differences during formative classroom assessment is also crucial. However, it seems that once we move toward large-scale assessment, whether at the programmatic level or state/national level, individual students begin to be viewed as a generalized construct–otherwise large-scale assessment, as it is now, wouldn’t work. Not only are large-scale assessments typically operating under an invalid construct of writing; they’re also operating under an invalid construct of student.

Secondly, I’m reminded of Mary Sheridan-Rabideau’s bibliographic piece on writing, gender, and culture in Bazerman’s Handbook of Research on Writing. Sheridan-Rabideau raises a key question regarding the teaching of writing in relation to women: “Is it more effective/better to prepare women to engage these agonistic academic structures as they are, or is it possible/better to create alternatives that might be more suited to women’s language practices?” (258). While I recognize the importance of this question in relation to women’s experiences of joining a community of practice dominated by the “masculinist communication style of academic writing” (257), I think this question is equally applicable to students from diverse racial and cultural backgrounds, to students who do not identify as male (or female), and so on. Ball seems to touch on this a bit when she draws on the work of Delpit to show that not being explicit and not “correcting” grammar can be just as harmful as, or possibly more harmful than, over-correcting students. Perhaps Delpit and/or Ball would say that we do need to prepare all students to write and engage in a society which values SWE and, if we’re talking about the academy, typically masculine modes of communication. If we accept that answer, then it seems to me that the implications for assessment are clear: we can continue to assess students based on a generalized construct of student because that construct is actually what we’re teaching towards. Large-scale assessments can carry on.

On the other hand, if we do value alternative modes of communication, and if we do view our students as culturally and historically situated individuals, then the implications are both more and less clear. Under these conditions, large-scale assessment seems out of the question; how can you create individualized assessments on a national, state, or even programmatic level? Alternatively, can assessments be too individualized? Is some measure of standardization, or at least comparability, necessary? I continue to be drawn to Wardle and Roozen’s ecological model, which I think provides one viable alternative to the more traditional assessments we’ve discussed this semester. Though the model likely has flaws that I’m not currently able to see, I do think that it makes possible types of formative and summative assessment more focused on student-as-individual than student-as-generalized-construct. Writing assessment scholars have clearly understood the necessity of local assessments for at least a decade; perhaps re-imagining the local as more student-based rather than place-based is the next logical step.