
Composition Forum 27, Spring 2013
http://compositionforum.com/issue/27/

Local History, Local Complexities: The First-Year Writing Curriculum at the University of Louisiana at Lafayette


Clancy Ratliff

Abstract: This profile describes a new WPA’s choice to work incrementally to assess an inherited, fledgling First-Year Writing curriculum at the University of Louisiana at Lafayette and change it over a three-year period with continual stakeholder involvement. The methods used for assessment were two rounds of instructor surveys and three rounds of direct assessment of student writing samples, motivated in part by the university’s reaccreditation review. The resulting curriculum’s main areas of focus are academic writing techniques, argument structure, and research-based writing.

How then should new WPAs proceed, especially if they wish to or have been invited to “revitalize,” “update,” or otherwise change a current writing program? Carefully. Very carefully.
—David Smit, Curriculum Design for First-Year Writing Programs

An untenured assistant professor, new to writing program administration, begins a job directing a first-year writing program, and the waters are somewhat turbulent. One year prior, the curriculum had been radically changed, with some faculty resistance, after twenty-five years of stasis. The writing teachers, naturally, experience confusion and frustration adapting to the new curriculum, which they bring to the new WPA. The new WPA is asked to assess the new curriculum and, if appropriate, change it, but constraints are in place. Per a pledge to a student task force supported by the university president, the writing program is honor-bound to keep the same textbooks for at least three years, to keep costs down for students. The writing program must also engage cultural diversity and international issues as mandated by the university system and Board of Regents. Soon after starting the position, the WPA is asked to conduct course-embedded assessment in preparation for review by the university's accrediting agency. What does the WPA do in the midst of these constraints and challenges?

This is the scenario I faced when I came to the University of Louisiana at Lafayette in the fall of 2007. I took the position as writing program director having only received my PhD in Rhetoric and Scientific and Technical Communication one year prior, with no past experience directing or assisting in directing a writing program, and without a strong background in the subfield of writing program administration. Especially given that I was untenured, I needed to establish collegial working relationships with departmental faculty at all levels, so I wanted to mine their practical wisdom regarding the student population and their writing needs, which they knew much more deeply than I did, before making any changes to the curriculum. I couldn't make any major changes anyway, because we had to continue using the default textbooks for three more years. While getting rid of textbooks altogether might have been an option, I preferred having the apparatus that a book provides. I also had to spearhead program assessment for first-year writing, always a complex endeavor, but especially so under the circumstances. In the end, I decided to work incrementally, making small changes over time, and always asking for teachers' feedback before making decisions about the program. This approach proved effective as I enacted curricular change while assuming a leadership role and navigating local context and institutional constraints.

First-year writing, at most universities, functions within the context of the larger general education curriculum. K.J. Peters has termed it, I believe accurately, “the university homeroom,” vulnerable to “administrative intrusions” (n.p.). Peters' examples of intrusion are mostly recruitment and retention efforts, such as requests to let prospective students visiting campus sit in on a class and requests to allow a portion of class time to be used for giving a survey about drug and alcohol use. Program assessment, though it gives us rich data for improving curriculum design and has become a thriving area of research in composition studies, can be viewed as an intrusion as well. I would point out that past practice and departmental and university politics can also act as, if not intrusions, then certainly as other formations to be navigated. Examples include the three-year minimum period for textbook adoption I mentioned, as well as the departmental expectation that the WPA will not assert authority over the honors first-year writing course, which is often taught by tenure-stream faculty.

Issues such as these, as well as the more quotidian matters of grade appeals and other forms of conflict resolution, can be difficult even for senior faculty administrators to negotiate, but more so for assistant professor administrators. The abbreviation “jWPA,” for “junior [faculty] Writing Program Administrator,” has been in the field's parlance for at least six years, and the Council of Writing Program Administrators' Assistant Professor Administrator Special Interest Group has been active for over six years as well. I situate my profile in the conversation that started with Debra Dew and Alice Horning's collection Untenured Faculty as Writing Program Administrators: Institutional Practices and Politics. Certainly the most consequential issue in the jWPA conversation is job security: occupying a politically vulnerable position that can take time away from research needed for tenure and promotion (Dew and Horning). Fortunately, I have a supportive department that values administrative work, so job security is not one of my concerns. What I hope will become clear is the way I, as a jWPA, grew into the position and my authority as an administrator over time. This profile will show the ways I worked, incrementally and with teachers' involvement, to change the curriculum carefully over a three-year period, using assessment as a guide—in the form of surveys distributed to teachers and direct assessment of student writing.

Contextualizing the ULL Writing Program

The University of Louisiana at Lafayette is a public university with approximately 15,300 undergraduate students and 1,500 graduate students, 94% of whom are from Louisiana. It is the second largest university in the state and serves mainly the population of southwestern Louisiana. Until 1999, it was an open-admissions university. Most of the FYW courses are taught by PhD- and MA-level graduate students in the English department, though some are taught by adjunct instructors, non-tenure-track instructors, and tenure-stream faculty. Graduate students receive training in the form of two seminars: one is a theory course, mostly made up of students who have not yet taught, and these students do several observations of experienced writing teachers' classes. The other seminar is a practicum, which is taken by students who are teaching, and I observe their classes and give feedback on assignment handouts and graded student work. At the beginning of each semester, I hold a training session for all first-year writing instructors, and we offer regular professional development workshops, some with invited speakers from other universities. In a typical academic year, we have on average 70 first-year writing teachers in fall semesters and 56 in spring semesters. The class size for first-year writing during my first five years directing the program was 27 students per section, and it is now 25.

We currently offer three courses: a first-semester course (English 101), a second-semester course (English 102), and an honors course (English 115). Prior to Fall 2012, we offered a Basic Writing course as well, until the state Board of Regents raised admissions standards, effectively making Basic Writing a property of community colleges, as many other states are doing.{1} The writing program is housed in the English department, with one director and two assistant directors, one an instructor and one a graduate assistant, and a First-Year Writing Committee composed mostly of non-tenure-track instructors who have been teaching in the department for many years. The program director has a reasonable degree of autonomy but, as is the case with most writing programs, is accountable to multiple levels of the hierarchy of higher education—the state Board of Regents, the university system, the offices of the president and provost, and the college—and is also expected to take the advice of the First-Year Writing Committee. My assessment efforts have focused on the regular first-semester and second-semester sequence. These courses are English 101: Introduction to Academic Writing; and English 102: Writing and Research about Culture. Both courses are required for graduation.

Prior to academic year 2006-2007, there was no programmatic approach to first-year writing instruction at UL Lafayette. The first-semester course was a fairly free-form expository writing course, and the second-semester course focused on writing about literature. The WPAs over the years provided teacher training, instructional support, and guidance, but common outcomes were not explicit, and there was not much consistency across sections of first-year writing. In academic year 1987-1988, the First-Year Writing Committee wrote common grading standards, but I'm not sure how widely that document was disseminated or for what length of time. For the courses, teachers had a choice of several textbooks, and for the second-semester course, instructors chose novels or other novel-length works; these readings were a diverse group which included, for example, Mardi Gras, Gumbo, and Zydeco; Winesburg, Ohio; The Basketball Diaries; Darwin's Radio; and The Odyssey.

In academic year 2005-2006, the then-director of the writing program redesigned the curriculum. That curriculum, which I inherited, was designed with the following general outcomes in mind, which applied to both English 101 and English 102:

In the course of writing thesis-driven essays, students will:

  1. Engage in writing as a recursive process
  2. Recognize the structures of argument
  3. Use writing and reading for learning, thinking, and communicating
  4. Respond to the needs of various audiences
  5. Discuss appropriate voice, tone, and level of formality
  6. Integrate their ideas with those of others

Many readers will recognize the strong influence of the WPA Outcomes Statement for First-Year Composition on the above list, particularly the last four outcomes, which are almost identical to how they appear in the WPA Outcomes Statement as “Use writing and reading for inquiry, learning, thinking, and communicating”; “Respond to the needs of different audiences”; “Adopt appropriate voice, tone, and level of formality”; and “Integrate their own ideas with those of others” (Council of Writing Program Administrators n.p.). Both courses in the sequence were focused on argument with a writing-as-inquiry approach guided by Bruce Ballenger's textbook The Curious Writer, which was used in both courses, partly as a cost-saving measure. Ballenger's definition of inquiry, or writing-as-inquiry, is based on the ideas of pursuing questions rather than predetermined position statements, decentering teacher authority (the teacher doesn't have the right answer; she is also inquiring), suspending judgment throughout the research and writing process, learning research methods as strategies of inquiry, and inquiring for a purpose and to/with an audience.

There was (and still is) a specified amount of writing expected as part of the curriculum: 15-20 pages of polished writing in English 101 and 20-25 pages in English 102. The assignments were generally what the textbook supported: Personal Essay, Profile, Review, Ethnographic Essay, Critical Essay, and Research Essay. There was no clear partitioning of which assignments were the province of 101 and which of 102, except that instructors were required to assign an annotated bibliography in 102. No policy was in place preventing teachers from assigning an annotated bibliography in 101; some teachers did, which added to students' and some teachers' perception that 101 and 102 were not sufficiently distinct. The second course in the sequence built on the inquiry foundation by adding in an emphasis on research-based writing and a focus on issues of diversity, per a system-level mandate, and international issues, per a Board of Regents mandate.

On April 28, 2003, the University of Louisiana system instituted a diversity requirement, which was to take effect in the fall 2005 semester. The policy indicated that students should have some exposure to and engagement with diversity (which, to be sure, we in the First-Year Writing Program already believe is important, given the theoretical contributions of those associated with the social turn in composition studies), which the UL system defined as “recognizing and appreciating the myriad qualities that make each of us unique, in an atmosphere that promotes and celebrates both individual and collective achievement” (Departmental Memo, unpublished). ULL administrators saw a connection between the diversity requirement and a state Board of Regents international requirement, which stated that “Colleges/universities shall insure that each degree student has been exposed to international education (awareness, learning, scholarship, and/or engagement) before graduation,” and they designated English 102 as a course that would satisfy both of these requirements (Board of Regents n.p.). These mandates, while part of the English 102 curriculum, are not on the list of writing-focused outcomes.

To adapt English 102 to its new designation, my predecessor and the First-Year Writing Committee decided to adopt topic readers and to retitle the course “Writing and Research About Culture.” The readers are from the Longman Topics Series: The Counterculture Reader, Issues of Gender, Language and Prejudice, Writing Places, The Changing World of Work, and Music and Culture. Instructors who did not wish to teach a theme-based course were given the option of using Writing and Reading Across the Curriculum as their course reader. These readers, it was claimed, would provide opportunities for teachers and students to engage with cultural diversity and international issues in their reading, class discussion, and writing.

The first year of the radically redesigned FYW curriculum was academic year 2006-2007. The director who had designed the curriculum had left the university, so during AY 2006-2007, the program had a non-tenure-track instructor serving as interim director. I came to ULL and began work as the WPA in the fall of 2007, entering a climate of considerable uncertainty and confusion among the writing program faculty. Many of them, being creative writers and literature scholars, objected to the changes that had been made and wanted to return to writing about literature in both courses, particularly English 102, in which it had previously been the focus. The department administration told me that I was free to redesign the curriculum as I saw fit, but that they preferred that I keep the fledgling curriculum in place and assess it. This was my strong preference as well; I wanted to learn as much as possible about the university, teachers, and students before making any changes to the curriculum, and I wanted to honor the writing program's stated adherence to the recommendation from our Student Government Association's task force on textbooks, which was that general education courses would use the same textbooks for at least three years before making any changes. Keeping costs down for students, I would come to learn, is a high priority throughout our university, especially in the writing program. It is for me as well.

My immediate goal, then, was to assess the curriculum in place when I became director in order to see if it was necessary to make any changes at all. As I was gathering information about the curriculum, I was also tasked with conducting course-embedded assessment in preparation for our university's reaccreditation review by the Southern Association of Colleges and Schools (SACS). Those were the two distinct exigencies for my administrative work those first few years: the recommendation by departmental administration to evaluate the curriculum as it was, and the university's requirement that all departments and programs engage in assessment. Both exigencies called for an inquisitive and observant attitude, which, as a jWPA, I was happy to maintain. However, these two motivators could also be seen as in conflict with each other: assessment for reaccreditation asks a WPA to find out how effective her program is, but I did not yet have a strong sense of ownership of the program. I saw this lack of ownership as an advantage, though, as I didn't see the success or failure of my own curriculum design to be at stake in the process. University administrators were supportive, assuring me that the point was to engage in the process of assessment, not necessarily to get a certain result, and to close the loop if results fell below expectations. Thanks to their encouragement, I share Wendy Sharer, William Banks, Tracy Ann Morse, and Michelle Eble's position that reaccreditation can be a valuable opportunity for WPAs.
In the call for proposals for an in-progress edited collection titled Reclaiming Accountability: Using the Work of Re/Accreditation to Improve Writing Programs, they write that “[f]or many, implementing, tracking, and sustaining large-scale assessment of student learning for accreditation or reaccreditation purposes are daunting and joyless, but necessary, tasks,” but that “within the accreditation cycle,” WPAs “across the country have found the impetus for substantial, long-term change” and “a culture of assessment” in their writing programs. Indeed, ULL passed the reaccreditation review several semesters ago, but I have continued regularly assessing the program.

The process I explain in this profile takes place over my first three years as WPA. During my first two years, academic years 2007-2008 and 2008-2009, I gathered information about the curriculum as it was at the time, surveying teachers of the writing courses. In fall 2008, spring 2009, and spring 2010, I conducted course-embedded assessment in conjunction with the SACS review, gathering samples of student writing and having readers evaluate them using a common rubric. As I hope to make clear, this assessment taught me that the curriculum was sound, especially its outcomes, but that it needed more material to help students learn academic argumentation techniques and academic genres, such as the scholarly journal article (that is, its more approachable version for first-year writing, the “research paper”) and the annotated bibliography. What the assessment revealed led me to seek new textbooks and new assignment sequences that would support instruction in academic writing techniques. Then in fall 2009 and spring 2010, teachers class-tested new textbooks that were selected based on teacher feedback from the surveys. Fall 2010 was the first semester we implemented my changes to the curriculum. In the next section, I briefly describe the curriculum as it is now, and in the remainder of this profile, I explain how we arrived at our curriculum through a process of assessment.

The Current State of the Writing Curriculum, or Where We Are

Fall 2010 was the first semester using the curriculum that I redesigned, which took the program from an inquiry focus to an emphasis on academic argumentation without losing the inquiry element entirely. We began using new textbooks: for English 101, instructors choose between John Ramage, John Bean, and June Johnson's Writing Arguments and James McDonald's The Reader. For English 102, teachers use Gerald Graff and Cathy Birkenstein's They Say/I Say and one topic reader. I provided recommended assignment sequences as well. For English 101, the assignment sequences supported by Writing Arguments include 1) Argument from Personal Experience, Definition Argument, Evaluation Argument, and Causal Argument; 2) Rhetorical Analysis, Analysis of a Visual Argument, Proposal Argument, and Resemblance Argument. Teachers may create an assignment sequence using any combination of these, and they may design other assignments if they emphasize academic argumentation. The assignment sequences supported by The Reader are 1) Literacy Narrative, Interview Essay, Argument Essay, and Comparison and Contrast Essay; 2) Personal Essay, Rhetorical Analysis, Literary Analysis, and Analysis of an Advertisement. For English 102, I recommended a sequence of two research projects consisting of an annotated bibliography and a research paper each. The sequence, then, is Annotated Bibliography 1, Research Paper 1, Annotated Bibliography 2, and Research Paper 2. Students write the first pair of assignments on the same topic and then choose (or are assigned) a new topic for the second pair of assignments. This way, students can write their way into the two research papers using the two annotated bibliographies. For teachers who want to start with a shorter assignment with fewer sources, I recommended an Analysis of a Disagreement Essay, in which students use two sources expressing different opinions about the same topic. As with English 101, teachers may design their own research-based assignments. 
At the beginning of each semester, I conduct a syllabus review to ensure that teachers' course designs are aligned with program outcomes.

Methods of Curriculum Assessment, or How We Got There

Instructor Surveys

I began the process of assessment in fall 2007, the first semester I arrived at ULL. First, I distributed open-ended and multiple-choice surveys to the writing program faculty to ascertain their perceptions of students' needs and how those needs were being met by the curriculum in place. Any writing curriculum, in my opinion, is for the teachers as well as the students, and both groups should be considered in curriculum assessment. Surveying teachers seemed a natural starting point; the teachers are authorities on our student population's writing strengths and challenges, and they had experienced the contrast of teaching both the then-current curriculum and the courses as they were before, without a coordinated curriculum. Another reason, though, had to do with politics. As I mentioned earlier, teachers were having a hard time adjusting to a major change in the curriculum, a change that, although I was told (and believe) that they were consulted about it, many felt took place over their heads. The surveys were, in part, a way to concretize instructor participation in the curriculum design process. In fall 2007, I distributed an open-ended survey (see Appendix 1). Because it asked for detailed written feedback and was time-consuming to complete, this survey had a low response rate (only seven out of 68 instructors responded—it might be more accurate to think of it as a written focus group). Still, the feedback we received, which I will soon explain, was helpful. I followed this survey with one multiple-choice survey in spring 2008 (see Appendix 2) and two short multiple-choice polls in fall 2009.

Nineteen instructors out of 59 responded to my longer multiple-choice survey in the spring 2008 semester. One question asked instructors to rank the goals of our first-semester course, with 1 as the highest priority and 7 as the lowest. Admittedly, these goals—which are listed below—are not our program's listed outcomes; instead, they represent a variety of purposes and student needs, which helped me see how teachers envisioned the first-year writing sequence and its mission at ULL. These purposes and student needs are expressed in rhetoric and composition scholarship, and I wanted to see how teachers thought they applied to our student population. They also represent several types of pedagogies in composition studies: current-traditional, expressivist, WAC, and rhetorical pedagogy. Rankings are calculated here as the mean of the rankings across all responses for that goal.{2} The instructors ranked their goals for the first-semester course as follows (in order of importance):

  1. To teach basic conventions of academic writing (citation practices/issues of plagiarism, common organizational patterns of research papers, library research, etc.) (Mean: 2.11)
  2. To help students see themselves as writers—a focus on the student as a developing writer rather than on a specific kind of writing (Mean: 2.63)
  3. To prepare students for writing in upper-level courses of their chosen majors (Mean: 3.32)
  4. To teach students about rhetorical principles (e.g., rhetorical appeals, heuristics, logical fallacies, enthymemes, syllogisms, inductive and deductive logic, etc.) (Mean: 3.68)
  5. To ensure that students master grammatical and mechanical skills in standard English (Mean: 4.26)
  6. To enable students to write about personal experiences (narrative, autobiographical writing) (Mean: 5.63)
  7. To encourage experimental writing that challenges the conventions of academic discourse (Mean: 5.89)

The instructors ranked their goals of our second-semester course as follows:

  1. To teach basic conventions of academic writing (citation practices/issues of plagiarism, common organizational patterns of research papers, library research, etc.) (Mean: 2.44)
  2. To prepare students for writing in upper-level courses of their chosen majors (Mean: 2.72)
  3. To teach students about rhetorical principles (e.g., rhetorical appeals, heuristics, logical fallacies, enthymemes, syllogisms, inductive and deductive logic, etc.) (Mean: 3.00)
  4. To help students see themselves as writers—a focus on the student as a developing writer rather than on a specific kind of writing (Mean: 3.28)
  5. To ensure that students master grammatical and mechanical skills in standard English (Mean: 4.83)
  6. To encourage experimental writing that challenges the conventions of academic discourse (Mean: 5.22)
  7. To enable students to write about personal experiences (narrative, autobiographical writing) (Mean: 6.00)

This survey helped me get a general reading of what teachers saw as the highest priorities of the first-year writing courses. Whatever changes I made to the curriculum, I wanted to make them with as much faculty buy-in as possible, so it was important to me to find out teachers' perspectives on the relative value of each of these learning goals vis-à-vis the needs of our student population. Unsurprisingly to me, the highest priority for both courses was “to teach basic conventions of academic writing”; I think teachers ranked that goal highly for both courses on the principle that academic writing cannot realistically be mastered in one semester. Also unsurprising was the low prioritizing of personal and experimental writing; the results corroborated workshop discussion and informal conversations I'd had with teachers, who said that what students needed most was exposure to and practice in academic argumentative techniques.

The open-ended survey responses generally revealed that teachers did not believe the curriculum was addressing students' writing issues. Teachers felt that the textbook and assignments it supported skewed toward personal rather than academic writing, which they believed our student population needed more experience doing. They said that the assignments—in particular the Review, which asked students to write a review of a movie, restaurant, or the like—were magnets for plagiarism. While its emphasis on inquiry was valued by some teachers, inasmuch as it was understood as a central factor in the curriculum, the writing program faculty as a group argued that students needed more help with technique as it pertains to academic writing, especially ways of framing and organizing arguments, as well as integration of sources in research-based writing. “More how-to,” the teachers requested.

Teachers also noted that although the sequence had been intended to be smooth and coherent in its overarching emphasis on inquiry, unlike the disjointed nature of the first- and second-semester courses from years past, the way it worked in practice was an overcorrection resulting in redundancy. Because the recently designed curriculum had called for teachers to use The Curious Writer in both the first- and second-semester courses, with the topic reader and The Curious Researcher added in the second semester, teachers of English 102 found that assignments they were giving students had sometimes already been given by the English 101 teacher. For example, a Profile Essay and a Critical Essay were two of the assignments supported by The Curious Writer. Some English 101 teachers gave those assignments, but because either can reasonably entail research, some teachers assigned them in 102. While I could have designated some assignments to be 101-specific, I did not want to restrict teachers' choices too much. Teachers, though, felt that there needed to be more differentiation between the two courses in order to challenge students, especially in the second-semester course. As the survey results show, teachers have similar goals for both courses; I believe they think of the goals as extending across the writing sequence. The differentiation they sought was on the level of classroom practice and types of assignments.

Course-Embedded Assessment

For the course-embedded assessment, we used samples of student writing done in the second-semester course. Upper administration had asked that we focus on student work from the second-semester course, in particular, in order to gauge students' culminating abilities at the end of the first-year writing sequence. In Fall 2008 and Spring 2009, we conducted assessments of samples of student writing in which readers applied a rubric to roughly 100 samples of student writing from our second-semester course. I selected samples using a method recommended by my university's Vice President for Institutional Planning and Effectiveness, who, along with the Provost and the Assistant Vice President of Academic Affairs, coordinates university assessment efforts. The sampling method they recommended was to look at the class schedule and highlight every seventh section, then ask those teachers to contribute student writing samples. While the papers were not all exactly the same assignment, they were all research-based argument papers (I omitted papers that did not meet those criteria, such as advertisement analysis papers and personal narratives that did not make explicit arguments).

For the assessment, we needed a rubric, which I created after reviewing samples of writing rubrics at many other universities. I drew upon some of the curriculum outcomes, particularly “recognize the structures of argument” and “integrate their ideas with those of others.” The rubric assessed four areas: “Content,” “Research,” “Organization,” and “Language Issues” (see Appendix 3). I gathered a group of volunteers consisting of first-year writing teachers, mostly graduate students and adjunct instructors, and together we read and scored four sample papers. I went around the room and asked each teacher to call out the scores he or she gave each paper, and we discussed our reasons for the scores we assigned. After discussing the first sample paper, I guided the readers to explain their scores with reasons they could point to in the rubric. Each paper was scored analytically by two readers and given a score of 1, 2, or 3 in each category. The average of the four scores was the paper's “Overall” score. The university's desired benchmark was that at least 70% of the papers would earn an overall score of satisfactory or higher.

We would not meet the benchmark that semester.{3} The findings in Fall 2008 were that 51% of the essays were rated as “poor” and only 49% as “satisfactory” or “outstanding.” The First-Year Writing Committee and I reflected on the assessment process, effectively treating it like the pilot project it was. We suspected that timing was a problem: generally speaking, students taking first-year writing courses off-cycle (101 in the spring, 102 in the fall) are more at risk than other students, often having failed the course in a prior semester; these classes have a higher withdrawal/failure rate than on-cycle first-year writing classes. Also, in fall 2008 in particular, Hurricane Gustav and Hurricane Ike struck our region, and the ULL campus closed for several days. Teachers reported high absentee and attrition rates as students and their families dealt with damage to their homes. We should have been able to collect our set of 100 papers, which the upper administration said we would need in order to assess the program sufficiently, with four sections of English 102. In fact, we had to collect papers from seven sections of 102. However, more significant than the semester's circumstances, the First-Year Writing Committee and I decided, were the somewhat unrealistic standards set in the rubric. I had asked for feedback about the rubric prior to assessment, and the comments were all positive; no one recommended changes. However, after I revealed the results of the assessment, teachers critiqued the rubric as setting too high a standard for first-year students to meet. To offer an example of the standards, here are the “Satisfactory” and “Outstanding” levels for the “Organization” category of the original rubric:

Satisfactory

  • Paper contains a clear introduction, development, conclusion
  • Divided into discrete sections, each supporting the thesis
  • Logical, smooth transitions between sections
  • Plan of development stated (forecasting statement, self-announcing structure to argument)

Outstanding

  • Paper is not only well-organized but shows an unusually keen consideration of the audience: questions are anticipated and answered, arguments seem arranged artfully for the most persuasive effect

After hearing from teachers, as well as from a colleague at another university whose advice I sought, that these standards were unreasonable for first-year students, I agreed. I realized that most students probably would not reach the “Outstanding” level in organization after a two-semester sequence, and I decided that the criteria in “Outstanding” weren't aligned with the outcome of “recognize the structures of argument” (emphasis added). Even the criteria in “Satisfactory,” in the original rubric, actually exceeded that outcome.

Teachers also pointed out that the standards in the “Research” category were unrealistic. Though some teachers assign research-based writing in English 101, many do not, as English 102 is a dedicated research writing course. I agreed that the following configuration from “Research” in the original rubric was not feasible for what was possibly only one semester of research-based writing, even with radical interventions in the curriculum:

Satisfactory

  • Sources are integrated into student's argument (student is in control of the source material; no “data dump”)
  • Sources are credible according to academic standards
  • Works cited list is present even if documentation format is incorrect

Outstanding

  • Source material is integrated seamlessly into the student's argument
  • Bibliography is even-handed (sources both in support of and contra student's argument)
  • Paper contains works cited page in correct documentation format

As with the “Organization” category, I overshot with respect to the curriculum outcome of “integrate their ideas with those of others.” I thought also that we'd benefit from a more versatile rubric that could be used, with minor tinkering, for almost any assignment in the writing sequence, including those that weren't research-based, such as rhetorical analyses of a single text or of a visual argument.

The findings revealed particular weaknesses in the “Research” and “Organization” categories of the rubric (the mean scores in those areas were 1.84 and 1.93 respectively), which corroborated teachers' concerns stated in the open-ended surveys that those areas, both requiring knowledge of academic organizational and research techniques, were particular weaknesses for our students. The findings provided an additional impetus to revise the rubric, as they clearly did not align with teachers' expectations. Admittedly, I don't have a data set showing grade distributions in first-year writing courses during this period, but I know that teachers were not assigning grades of F to 51% of the students.

Generally, the rubric changes I made involved a shift of the “Satisfactory” standards from the original rubric to “Outstanding,” and a new definition of “Satisfactory” (see Appendix 4 for the revised rubric). In the Fall 2008 version of the rubric, I had “contains some acknowledgement of opposing/divergent views” in the “Satisfactory” category under “Content.” This criterion was moved to the “Outstanding” category in the Spring 2009 version. Also, I had listed “sets forth a clear thesis/position on the issue” under “Satisfactory” in the earlier version, but I later changed that to “position/argument is comprehensible even if not clearly stated.” The Spring 2009 rubric re-labeled the “Satisfactory” criteria from the old version as “Outstanding” and featured new criteria to describe “Satisfactory,” which were: “introduction is recognizable even if it is not always reader-based,” “paragraphs generally treat one idea at a time,” “attempts at transitions between paragraphs are made, even if they are awkward,” and “conclusion provides some closure to the argument, even if only a summary of the main points.”

We conducted the next round of assessment in Spring 2009. The differences that semester were, first, that we were using a slightly different rubric, and second, that we were collecting papers from students representing the majority of the first-year student cohort: those who were originally placed in and successfully completed our first-semester course on the first try. The findings that semester were that 41% of the essays were rated as “poor” and 59% as “satisfactory” or “outstanding”—better, but still short of the desired benchmark. The mean score for “Content,” which now included criteria evaluating the quality of the research, was 2.00: lower than the 2.05 mean from the previous semester, but slightly higher than a simple average between the “Content” and “Research” categories from the previous semester (1.95). The mean score for “Organization” was only slightly increased as well, from 1.93 in Fall 2008 to 1.96 in Spring 2009. I decided at that point that while we were still bound to our current textbooks for the upcoming year, I would publicize the assessment rubric to students and teachers as much as possible, starting with revising our writing program's custom-published booklet, The Freshman Guide to Composition at the University of Louisiana at Lafayette, to include the rubric. It made sense to me to create as much awareness as possible, in as many different groups of people as possible, of the standards articulated in the rubric.

I also decided to take the suggestion of a professor in my department who, at a faculty retreat, asked me why I had not included any papers from the honors FYW course, English 115, in the data set. I did not want the English 115 papers to be overrepresented, thus skewing the results, but including some honors students made sense as a way to represent the full range of achievement among first-year students. Although the honors course is only coincidentally part of the FYW program and does not use our curriculum, it is a general education writing course that students are placed into and take at our university, and successful completion of the course is considered an endpoint, making it appropriate for assessment. I decided, then, to include a proportional number of papers from the honors course. The set of writing samples we ended up with was fairly uniform: English 102 papers were source-based arguments on a variety of cultural topics, and the English 115 papers were analyses of plays, stories, novels, or poems, sometimes using scholarly sources, but not always. When selecting papers for the sample, I omitted those genres that the rubric is less well suited to evaluate: personal essays and annotated bibliographies, for example.

In the Spring 2010 round of assessment, of the 100 papers we scored, 87 came from our second-semester course, and 13 came from one section of our honors course. Also, by the beginning of Fall 2009, the writing program rubric was published in our booklet for students. I told teachers that this rubric represented the writing program's general standards and that it would be used as a point of reference as needed, such as in the grade appeal process. I asked teachers to refer to the rubric often in class and to use it as a template in designing their own assignment-specific rubrics (I made an electronic copy of the rubric available for this purpose). I also asked the Writing Center director to keep the rubric in mind as a tool for offering feedback on student writing, and I put the rubric on our First-Year Writing Program's public web site, where it would be available to prospective students and parents.

Another change I made was to the scoring range: from 1-3 to 1-6. This change follows a method used by Asao Inoue, which directs assessment readers to put student work into “conceptual buckets”: in each of the three categories, a score of 1-2 put student work into the “poor” bucket, 3-4 into the “satisfactory” bucket, and 5-6 into the “outstanding” bucket (n.p.). Readers now had the discretion to give a paper a “low satisfactory” or “high satisfactory” ranking if there was some overlap between the categories, as there usually is. During this round, we met our benchmark: 72% of student papers earned an overall score of satisfactory or higher. One or some combination of the three changes I made—publishing the rubric, including honors students, and using “conceptual buckets” scoring—evidently made a difference. In hindsight, it would have been interesting to make these changes one at a time, thereby testing the impact of each, especially the impact of publishing the rubric. I will soon do another round of course-embedded assessment now that I have adjusted the required textbooks and assignment sequences, and I hope to collect short portfolios to gain a more nuanced representation of students' writing capabilities, which are not all consistently represented in every writing assignment.
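The "conceptual buckets" mapping can be sketched as a simple function (a minimal illustration of the 1-6 scale described above, not Inoue's own implementation):

```python
# Map a 1-6 analytic score to Inoue-style "conceptual buckets":
# 1-2 -> poor, 3-4 -> satisfactory, 5-6 -> outstanding.
# The 6-point scale lets readers distinguish "low satisfactory" (3)
# from "high satisfactory" (4) within the same bucket.

def bucket(score):
    if not 1 <= score <= 6:
        raise ValueError("score must be between 1 and 6")
    if score <= 2:
        return "poor"
    if score <= 4:
        return "satisfactory"
    return "outstanding"

print([bucket(s) for s in range(1, 7)])
# ['poor', 'poor', 'satisfactory', 'satisfactory', 'outstanding', 'outstanding']
```

Because two adjacent scores fall in each bucket, readers can register borderline judgments without leaving the three-level reporting scheme.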

The process of our curriculum and program assessment has been messy at times, perhaps befitting a jWPA new to administration. The rubric, especially, has been refined through trial and error as I work toward a higher degree of construct validity. During this period of indirect (instructor surveys) and direct (student writing) assessment, I learned much about institutional realities such as the lower performance of off-cycle fall English 102 classes, and I learned more about the limitations of rubrics. I already understood that a rubric is an unwieldy, one-size-fits-all instrument that often doesn't work in practice, but I learned more about why this is the case, particularly from Bob Broad's scholarship. In his book Organic Writing Assessment: Dynamic Criteria Mapping in Action, Broad writes that rubrics “are too simple and too generic to effectively portray the educational values of any specific classroom, department, or program” (4). In correspondence with me, Broad explains that a rubric is usually only a “deductively and speculatively derived account” of a writing program's values, as was the first rubric I created. While I agree that the “Outstanding” column of my initial rubric describes outstanding academic writing, the rubric was not, as Broad recommends, “an inductively and empirically derived account of values” in our writing program, as it needs to be. The first rubric was more like a wish list; even students in my own first-year writing classes certainly wouldn't have scored well by those standards. While some may argue that I gamed the system to get the assessment results I wanted, I would disagree. The fall 2008 round of assessment was a recipe for failure: a writing curriculum in the midst of a difficult transition, a data set of only off-cycle English 102 papers, and an unrealistic rubric that teachers and students didn't even have access to as they were giving and completing assignments. It demanded a change to more accurately reflect the local teaching context.

Acting on Assessment Results: Taking the Next Steps

The assessment of student writing revealed that the students are generally proficient in the “Language Issues” category; in every round of assessment, over 70% of papers scored as satisfactory or higher on that part of the rubric. The problem, as I saw in the assessment and heard repeatedly from writing teachers, was that students needed more guidance in learning and writing academic discourse and more instruction in rhetoric. Their needs were those articulated by David Bartholomae over twenty-five years earlier:

What our beginning students need to learn is to extend themselves, by successive approximations, into the commonplaces, set phrases, rituals and gestures, habits of mind, tricks of persuasion, obligatory conclusions and necessary connections that determine the 'what might be said' and constitute knowledge within the various branches of our academic community. (614)

With this overall goal in mind, and because teachers had called for more attention to technique in argument, organization, and conventions of academic writing, I thought of John Ramage, John Bean, and June Johnson's Writing Arguments, which I had taught first-year writing with for several years at two different universities, and Gerald Graff and Cathy Birkenstein's They Say/I Say, which impressed me though I hadn't taught with it. The Toulmin schema in Writing Arguments as well as the use of claim types provide teachers and students with a sturdy apparatus for constructing as well as analyzing arguments, which fits well with the “recognize the structures of argument” outcome. And the templates in They Say/I Say are also valuable tools for organizing claims and evidence and for integrating material from research, which our second-semester course emphasizes. Also, Writing Arguments has a chapter on inquiry that helps support that focus from the earlier curriculum, which I didn't want to lose. Inquiry is a sound approach to academic writing: beginning a writing project with a question, suspending judgment and engaging fully in the research and writing process, and using writing as a means of making knowledge. The material on inquiry in Writing Arguments helps us retain what was working well with inquiry while providing the follow-through of teaching students to present their arguments using claim types and structures that are recognizable to an academic audience.

While I don't want to give the impression that I think of “curriculum” and “textbook” as synonyms, I do think that a textbook is the most concrete embodiment of a curriculum and that a good book can be a strong pedagogical guide, especially for new teachers. Based on assessment—both from the instructor surveys and findings from student writing samples—I knew that the students needed a curriculum with emphasis on building and organizing arguments for an academic audience. Still, in our institutional context, the issue of textbook cost is important. I didn't want to make any changes to the required textbooks—any change would be a three-year minimum commitment—without faculty buy-in, so I issued open calls to class-test new books, copies of which were provided by the publishers. David Smit writes that

at many institutions money may be tight and those with a stake in the nature of the first-year writing program may have very determined but contrary visions of what first-year writing courses should be like. At such institutions all that a WPA may be able to do is to influence slightly the choice of textbooks that current instructors already use or other stakeholders approve of. (203)

If I had chosen default textbooks unilaterally, I suspect that teachers would not have used them. I thought carefully about which books would be class-tested and sought response to my suggestions from the First-Year Writing Committee and other faculty in the writing program. I also asked teachers to suggest possible books for class-testing.

Academic year 2009-2010 was our year of class-testing. In addition to Writing Arguments and They Say/I Say, we decided to class-test James McDonald's The Reader. While it consists mostly of a collection of readings, it also has a sound rhetoric apparatus befitting our outcomes, especially “use writing and reading for learning, thinking, and communicating.” Teachers were curious about the book because McDonald is the head of our department, but he was not involved at all in the selection process. The committee decided on the book because they believed its readings were engaging for the teachers and well-suited for our student population, and because faculty buy-in was important to me, I supported adopting the book.{4} Two instructors tested McDonald's The Reader, one tested Ede's The Academic Writer, one tested Ramage, Bean, and Johnson's Writing Arguments, and I tested Graff and Birkenstein's They Say/I Say. These teachers reported to the First-Year Writing Committee about their experiences with and students' responses to the new books and evaluated, based on their own impressions and knowledge of the writing program, how the new books might better support our outcomes (especially “Recognize the structures of argument” and “Integrate their ideas with those of others,” which teachers have indicated as weaknesses in student writing) and meet students' writing needs. The evaluative process was informal: we invited teachers who'd done the class testing to attend a First-Year Writing Committee meeting and asked them to prepare some remarks about which parts of the books they thought were especially effective or ineffective, how they used the readings to facilitate discussion or inform the design of writing assignments, and how students reacted to the readings. Teachers attended our meetings and spoke briefly on these matters, and then the committee asked follow-up questions for clarification. Each of these interviews lasted approximately twenty minutes. 
In the end, after polling the faculty (43 of the approximately 60 instructors that semester responded), we decided to discontinue The Curious Writer, to adopt Writing Arguments and The Reader for the first-semester course (teachers may choose which of the two they will use), and to adopt They Say/I Say for the second-semester course{5}, along with one of the topic readers, which, based on instructor feedback from surveys, we decided to continue using.

Future Program Development

Like any university, ULL presents challenges that complicate curriculum design in particular, as well as institutional and state-level factors that I believe work to students' detriment in general. All these problems should be understood against the backdrop of widespread state budget cuts to higher education. The first of these is class size in First-Year Writing. Although the enrollment cap for the honors class is low (15 students), English 101 and English 102 classes are capped at 25 students per section, which is higher than I would like. However, it is lower now than it was in recent years. When Hurricane Katrina struck and ULL experienced an enrollment surge as New Orleans universities closed for the Fall 2005 semester, the cap, which had been 26 students per section, was raised to 27, where it remained from Fall 2005 through Spring 2012. During the Fall 2011 semester, I did research about class size in FYW courses nationally and statewide, and I wrote a report proposing to lower the cap. The administration responded by lowering the cap to 25 students per section, which in our local context was a major victory. I plan to track student performance in the aggregate—particularly the attrition and failure rates, which I researched for the proposal to lower class size—and hopefully to lower class size further.

Another factor that I believe affects the quality of our writing program is that it's currently easy for students to get automatic credit for one or both FYW classes. Students who receive an ACT English subscore of 28 or higher (SAT Verbal score of 630 or higher) get automatic credit for our first-semester writing course and are placed in either the honors writing course or the second-semester course according to the student's preference. Students who receive a score of 3 on the College Entrance Examination Board's Advanced Placement Exam get automatic credit for our first-semester course, and a 4 carries credit for both courses in the sequence. With the help of my department head, I have made some progress in changing the exemption policies; prior to Fall 2011, a 3 on the CEEB AP exam granted credit for both FYW courses.

An additional concern I have, one that directly affects curriculum design and development, is the increasing population of first-year students who take one or both of their writing courses in high school dual enrollment programs. Dual enrollment is a fast-growing area in ULL's writing program due to a recent piece of legislation, the GRAD Act (Granting Resources and Autonomies for Diplomas). Higher education's budget has been drastically cut by the state, leaving public colleges and universities desperately needing the “granting of resources” from the law's title. Under the terms of the GRAD Act, state institutions receive “autonomies” to raise tuition by up to ten percent per year{6} provided that they increase retention and graduation rates. Governor Bobby Jindal claims that “[t]he new law incentivizes universities and colleges to increase graduation and retention rates and align academic programs with workforce needs which are critical as we work to deliver more value for students across Louisiana” (“Regents Approve” n.p.). The same press release explains that the new law “includes four performance objectives –student success, articulation and transfer, workforce and economic development, and institutional efficiency and accountability.” Dual enrollment falls under the objective of student success. Because of this law, retention is more important than ever before, as is recruitment of high-achieving students—an effort that stands in dramatic contrast to ULL's long history of being an open-admissions university. Recent changes in admissions standards call for students to be “English 101 ready,” among other increased requirements, in order to be admitted to ULL, which is predicted to result in a steep enrollment drop for the university. 
Expanding dual enrollment partnerships and offerings, the argument goes, will aid recruitment if the students in the dual enrollment courses matriculate to ULL, and close partnerships with local high schools will enable a smoother transition from high school to college and a clearer understanding of college-level expectations and college readiness.

Increased retention and student success are surely noble goals. Curriculum design and development are affected, however, in that colleges and universities are put in a position to compete for dual enrollment partnerships with high schools. Those administering writing programs are asked to make accommodations for the state's high school English curriculum, which must also be taught as students are earning credit for their first-year writing courses. Usually, what this means is that dual enrollment first-year writing courses have a writing-about-literature focus that does not always work coherently with the writing program's curriculum, or that students are not doing the amount of writing required in our courses in addition to the writing that is required in the high school curriculum.

Still another matter affecting future curriculum development is the department's honors FYW course. This class is taught only by faculty with PhDs; of those, some are adjunct instructors and some are tenure-stream faculty at the assistant, associate, and full levels. The course is firmly ensconced in writing about literature. Faculty members who teach it have carte blanche to design the course how they want: there are no common outcomes or required number or types of assignments. The default text is the Bedford Introduction to Literature, which is usually not used by tenure-stream faculty who teach the course; instead, they teach the course using novels, collections of short stories, and collections of poetry of their choosing. One of the department's intentions for the honors course is that it functions as a site of recruitment into the English major. The course has no administrative oversight except in cases of plagiarism, grade appeal, and student complaint about instructors. The honors course, unlike our second-semester writing course, is not designated as a class that meets the requirement of engaging with diversity and international issues. To be sure, the WPA could design and put into effect a set of common outcomes, recommended assignment sequences, common textbooks, and a syllabus review process much like the one currently in place for our writing sequence, but compliance would be minimal. A future project for the ULL writing program is to design a curriculum collaboratively with faculty who regularly teach the honors writing course in order to secure more buy-in, but given the faculty hierarchy, the project would necessarily entail designing only a “suggested curriculum.”

Despite the challenges and problems, I continue to assess and develop the writing curriculum. During the spring 2011 semester, I distributed a survey to 546 students in first-year writing courses: 426 in English 102 classes and 120 in English 101 classes. The survey is based largely on questions designed in 2007 by Charles Paine, Robert Gonyea, Paul Anderson, and Chris Anson for the Consortium for the Study of Writing in College, a partnership between the National Survey of Student Engagement and the Council of Writing Program Administrators. The questions are intended to serve as a supplement to the NSSE to measure the degree to which students engage in writing throughout all their college coursework—in both general education and their majors—but the authors of the survey permit professors and administrators to use the instrument for smaller studies at their own universities. It is outside the scope of this profile to give a complete account of these findings, but I believe that the results of this survey offer information about the curriculum that we hadn't been able to get through other means. To give a brief example, one question I asked was whether students had received feedback from the instructor on a draft before turning in the assignment for a grade, a question that was directly in line with one of our program outcomes: to understand writing as a recursive process. The result of this question is below:

                      English 101    English 102
  All assignments         35%            29%
  Most assignments        20%            26%
  Few assignments         15%            23%
  One assignment          13%             9%
  No assignments          12%            12%

I'm not surprised by this result. Our full-time, non-tenure-track instructors teach a 5-4 load, and our adjunct instructors are similarly overworked. Our graduate students teach a 2-2 load while usually taking three courses each semester. Still, I would like to see every student in our writing program say that for at least a few assignments in the writing sequence, they got comments on drafts prior to receiving a grade. I continue to work toward this goal with teachers by showing that this process doesn't have to take a lot of time: they may do most of their commenting at the draft stage, when commenting matters most, and make only minimal comments on the graded version.

Another program development project I have planned for the fall 2012 or spring 2013 semester is to assemble a group of writing program faculty to create a dynamic criteria map for our program using the procedure Broad outlines in his book What We Really Value: Beyond Rubrics in Teaching and Assessing Writing. My goal is to help define what “good writing” is at UL Lafayette, and in pursuit of that goal, I have collected a corpus of student essays nominated for our department's annual Ann Dobie Outstanding Freshman Essay Award contest. Instructors of English 101, 102, and 115 nominate the papers they consider to be the best of the semester, usually no more than three essays per section. What I want to do in this project is to examine these samples of writing, judged to represent the best of our students' efforts, and arrive at a clearer, more useful definition of what “good writing” is in our writing program, across the variety of genres assigned. While some may object to selecting a set of student writing samples that are all “outstanding” writing, I know from my use of some of these papers in faculty development workshops that teachers will, in fact, disagree on the merits of these papers. Also, my main interest is in creating an assessment instrument that is not based on an “ideal paper” with artificially inflated standards that are not reasonably attainable by first-year students in a two-semester period.

For those new to Broad's work, dynamic criteria mapping is an assessment method influenced by grounded theory, in which researchers examine a data set to see what coding categories emerge. As Broad writes, “The group process of DCM is designed to get multiple readers' comments bouncing off each other in such a way that readers reveal values and criteria of which they may very well have previously been unaware,” which yields a representation of values that is “more rich, more context-sensitive, and more accurate” than our current rubric (personal communication). This map, which resembles a concept map with criteria grouped into related clusters, would be distributed to teachers and published in the university's Freshman Guide to Writing. I also plan to use the full data set to generate some basic descriptive statistics. Questions I have in mind include these:

  • How long are these essays?
  • Are they mostly personal narratives or analytical/research papers?
  • How many sources do they cite?
  • What kinds of sources do they use?

I believe the dynamic criteria map and the descriptive statistics will help us articulate our expectations of students to ourselves as well as to students, and will improve pedagogy in the writing program overall. In sum, I believe that the only way to approach curriculum design is to evaluate the program continually, which is what I have done and continue to do throughout the six years I have directed the program at UL Lafayette.

Reflecting on Curriculum Building and Assessment

What would I have done differently if I could go back to Fall 2007? I have several answers for this question. First, I would have familiarized myself with the last several decades of the history of the program sooner. Filing cabinets in the First-Year Writing office contain minutes of the First-Year Writing Committee's meetings going back to the 1970s, as well as other policy documents and memorandums. I did find and read all this material, but not until the end of my second year directing the program. These documents gave me a great deal of reliable institutional memory that has enabled me to clarify my own and others' misunderstandings of policies, procedures, and practices. Admittedly, this work is not directly related to curriculum design, but during my first year in the position, I often found myself overwhelmed listening to experienced faculty members' impressions of how certain situations (involving placement, exemption, conflict resolution, plagiarism, and so on) had been handled in the past, requests to settle matters that clearly had a complex history, and suggestions of how to change the program—which I was not comfortable doing without having full comprehension of the program's history. As an administrator, I was naturally thrust into encounters tinged with institutional politics, and while senior faculty members certainly made themselves available to answer my questions, I wish I had read all the documentation at my fingertips so that I could have handled those encounters with more confidence. Most of us who have served as WPAs know that the urgent but not important daily work that we must do can be a serious distraction from more consequential, ongoing projects such as curriculum design, development, and assessment.

If I could go back to 2007 knowing what I do now, I would have done assessment exercises with samples of student writing alongside the instructor surveys I distributed; in other words, I would have started course-embedded assessment in Fall 2007 or Spring 2008 rather than Fall 2008. I would have compared instructors' assertions of what students needed with the needs I saw directly in the students' writing, and I would have consulted more scholarship about how to design rubrics, in addition to reading existing rubrics, before creating our program's rubric. I would have done more research on holistic versus analytical scoring (White; Cherry and Meyer), even though the scores in the individual categories were useful in diagnosing the most common problems in the students' writing. I might also have insisted on portfolio assessment. Our assessment practice at ULL is still in Kathleen Blake Yancey's “second wave.” The research on portfolio assessment demonstrates its higher validity: more samples of writing from each student are assessed, giving a clearer picture of student writing, both individually and in the aggregate. Moreover, when students include statements in their portfolios reflecting on their writing processes, teachers and administrators are in a better position to assess program outcomes related to writing process, something I would like to do.

I also would have subjected the program's outcomes to more scrutiny at the outset. Any curriculum should begin from its outcomes, but my approach has been to treat the outcomes as a constant (as I think the outcomes are sound and sensible) and to evaluate only how well the assignment sequences and textbooks supported those outcomes. I have directed the writing program for almost five years and am only now turning my attention to fine-tuning the program's outcomes to make them more specific and action-based, and perhaps to draw on the habits of mind listed in the Framework for Success in Postsecondary Writing. I have mapped the habits of mind to our current outcomes and integrated them to give an idea of how the habits of mind could be demonstrated in the context of a writing course; this table is in Appendix 5.

This profile of our program offers an account of how forcefully local conditions can drive curriculum design, especially if the (untenured) WPA wants to blend in harmoniously with the institutional culture. In my case, we have a system mandate that our second-semester course engage diversity and international issues. We are contractually obligated to expand our dual-enrollment offerings, which requires adjusting the FYW curriculum to accommodate the state-mandated curriculum for senior-level high school English classes (this must be done to secure the partnership with the high school). My local conditions also include a strong call to keep costs low for students. The university prides itself on quality education at a modest price, subsidized widely by the Taylor Opportunity Program for Students (TOPS), a state program that awards scholarships to students who attend universities in Louisiana. At the department level, other factors affected my latitude as a WPA. For over twenty-five years, there had been no change to the first-year writing course sequence. The first year of what was, for the teachers, a strikingly different curriculum passed under the leadership of an interim director who, while very diligent and ethical in her approach to the work, was not an experienced WPA and was, out of necessity, preoccupied with smoothly handling everyday situations (paperwork, mediating student-teacher conflict, etc.). I could have arrived on campus and wiped the slate clean; I could have designed a cutting-edge writing curriculum using emerging open-source technologies and experimental genres. I could have changed, or at least tried to change, any policy, procedure, or departmental custom I wanted; no one was technically stopping me. However, one WPA's theoretical vision is limited and tempered by institutional forces and realities, as well as by the need to maintain goodwill among teachers. I'm satisfied with my choice to work incrementally, or “very carefully,” as Smit recommends.

Appendices

  1. Appendix 1: November 2007 Surveys
  2. Appendix 2: February 2008: SurveyMonkey Survey Not Requiring Open-ended Questions
  3. Appendix 3: First Iteration of FYW Program Rubric (separate page)
  4. Appendix 4: Second Iteration of FYW Program Rubric (separate page)
  5. Appendix 5: Alignment of Outcomes and Habits of Mind (separate page)

Appendix 1: November 2007 Surveys

[Out of 68 instructors, 7 responded]

  1. As compared to your expectations or to the old curriculum, what have been the successes and the challenges of teaching the new curriculum of 101 and 102? Please list each course’s successes and challenges individually.
  2. What chapters in The Curious Writer and The Curious Researcher do you use and plan to use repeatedly? Why? What chapters do not sequence well with the skill sets of 101 and 102?
  3. Which assignments in our curriculum have gone well in your class? Which have been less successful? What do you believe are the reasons for the success or lack of success for each assignment? Do you provide assignment sheets for students explaining the assignment’s requirements and your expectations?
  4. What type of material beneficial to your teaching practices and/or the 101/102 curriculum, from pedagogy to prose, is absent from The Curious Writer?
  5. Have you found that the skill sets and writing stages of 101 and 102 sequence well in your syllabi? Please describe how the “seams” between these stages in our curriculum have played out in your classroom and students’ writing.
  6. 102 became a themed course because an interdepartmental approach to Writing Across the Curriculum was not sustainable at UL. Knowing this, are the essays in the topic readers (The Changing World of Work, The Counterculture Reader, Writing and Reading Across the Curriculum, etc.) useful for both thematic material and for models of how the various disciplines write? Do you discuss and practice a variety of genres in your 102? Do you discuss how viewpoints depend upon one's discipline?
  7. How can the Guide to First-Year Composition be more useful to your course? What additions would you like to see in the guide? Please base these additions upon two criteria: first, your use in the classroom; second, its role as the only research and writing handbook that students will have after they sell their other texts upon completion of 101 and 102.
  8. What types of pedagogical material would you like to see developed in instructor workshops and roundtable discussions? Grading tips (Ex: time-saving and response strategies), reading skills, peer review strategies, etc.
  9. If you attended the book fairs during the fall semester, did you find other textbooks you believe are better suited to our students and our goals for English 101 and 102? If so, please list the authors and titles of the books along with brief explanations about why you believe they are better than our current books.
  10. Do you have anything to add that was not covered by the above questions?

Appendix 2: February 2008: SurveyMonkey Survey Not Requiring Open-ended Questions

[Out of 59 instructors, 19 responded]

  1. What do you believe are the most important goals of English 101? Rank these choices from 1 (most important) to 7 (least important).

    • To help students see themselves as writers—a focus on the student as a developing writer rather than on a specific kind of writing
    • To teach basic conventions of academic writing (citation practices/issues of plagiarism, common organizational patterns of research papers, library research, etc.)
    • To prepare students for writing in upper-level courses of their chosen majors
    • To teach students about rhetorical principles (e.g., rhetorical appeals, heuristics, logical fallacies, enthymemes, syllogisms, inductive and deductive logic, etc.)
    • To enable students to write about personal experiences (narrative, autobiographical writing)
    • To ensure that students master grammatical and mechanical skills in standard English
    • To encourage experimental writing that challenges the conventions of academic discourse
  2. What do you believe are the most important goals of English 102? Rank these choices from 1 (most important) to 7 (least important).

    • To help students see themselves as writers—a focus on the student as a developing writer rather than on a specific kind of writing
    • To teach basic conventions of academic writing (citation practices/issues of plagiarism, common organizational patterns of research papers, library research, etc.)
    • To prepare students for writing in upper-level courses of their chosen majors
    • To teach students about rhetorical principles (e.g., rhetorical appeals, heuristics, logical fallacies, enthymemes, syllogisms, inductive and deductive logic, etc.)
    • To enable students to write about personal experiences (narrative, autobiographical writing)
    • To ensure that students master grammatical and mechanical skills in standard English
    • To encourage experimental writing that challenges the conventions of academic discourse
  3. Do the reservations you may have about the textbooks in the 101-102 curriculum outweigh the benefits of being able to re-use teaching materials? (i.e., benefits of not having to start over with a new textbook, create a new syllabus, assignment handouts, etc.)
  4. The first-year writing program frequently offers teaching workshops (ten this semester), but attendance is voluntary. Should attendance be mandatory?
  5. If the following items were added to the Freshman Guide to Composition, which would you be most likely to use in class?

    • Sample student essays (e.g., Outstanding Freshman Essay award winners)
    • Grammar exercises
    • Material about databases in UL's library
    • Heuristics
    • Other (please specify)
  6. Would you participate in a peer mentoring program if the first-year writing program offered one?

    • Yes, as a mentor
    • Yes, as a mentee
    • No
    • Other (please specify)

Notes

  1. For one review of this trend that cites several others, see Webb-Sunderhaus and Amidon. (Return to text.)
  2. To give an example, one of the outcomes teachers were asked to rank was “To teach basic conventions of academic writing (citation practices/issues of plagiarism, common organizational patterns of research papers, library research, etc.).” Eight teachers ranked this outcome as number one, three teachers ranked it as number two, one teacher ranked it as number three, four teachers ranked it as number four, two teachers ranked it as number five, and one teacher ranked it as number seven. (Return to text.)
  3. In fact, only in the “Language Issues” category did we meet the benchmark in Fall 2008 and Spring 2009, with 79% of papers rated as satisfactory or higher in that area in Fall 2008 and 82% rated satisfactory or higher in Spring 2009. (Return to text.)
  4. We also class-tested other books that teachers said worked well but that we ultimately didn't choose to adopt: most notably, Lisa Ede's The Academic Writer and Rebecca Moore Howard's Writing Matters. The matter is decided by vote of the First-Year Writing Committee, and when we again consider new textbooks to adopt, both of these will be serious contenders. (Return to text.)
  5. While some may argue that They Say/I Say is better suited for a first-semester course, I believe it works best in the second-semester course in our curriculum. Our first-semester course introduces students to basic argument and analysis. Assignments in that class often focus on only one source or on personal experience. In English 102, students must write research-based arguments that draw from multiple sources expressing multiple perspectives. To put it another way, in English 102, there's definitely a “they” for “they say,” whereas there may not be in English 101, or not to the same extent. (Return to text.)
  6. Other such autonomies include “the ability to retain unspent funds at the end of a fiscal year; permission to execute individual contracts up to $49,999 in a 12-month period without further review by the state’s Office of Contractual Review; and the ability to dispose of obsolete equipment without first getting the approval of another state agency” (Gomila n.p.). (Return to text.)

Works Cited

Ballenger, Bruce. The Curious Writer. 2nd ed. New York: Pearson Longman, 2008. Print.

Bartholomae, David. “Inventing the University.” When a Writer Can't Write: Studies in Writer's Block. Ed. Mike Rose. New York: Guilford Press, 1985. 134-66. Print.

Board of Regents, State of Louisiana. 24 Dec. 2011. Web.

Broad, Bob. What We Really Value: Beyond Rubrics in Teaching and Assessing Writing. Logan, UT: Utah State University Press, 2003. Print.

Broad, Bob, Linda Adler-Kassner, Barry Alford, Jane Detweiler, Heidi Estrem, Susanmarie Harrington, Maureen McBride, Eric Stalions, and Scott Weeden. Organic Writing Assessment: Dynamic Criteria Mapping in Action. Logan, UT: Utah State University Press, 2009. Print.

Cherry, Roger D., and Paul R. Meyer. “Reliability Issues in Holistic Assessment.” Validating Holistic Scoring for Writing Assessment. Eds. Michael M. Williamson and Brian A. Huot. Cresskill, NJ: Hampton Press, 1993. 109-41. Print.

Council of Writing Program Administrators. “WPA Outcomes Statement for First-Year Composition.” 3 July 2012. Web.

Council of Writing Program Administrators, National Council of Teachers of English, and National Writing Project. Framework for Success in Postsecondary Writing. CWPA, NCTE, and NWP, 2011. Print.

Dew, Debra Frank, and Alice Horning, eds. Untenured Faculty as Writing Program Administrators: Institutional Practices and Politics. West Lafayette, IN: Parlor Press, 2007. Print.

Dew, Debra. “Labor Relations: Collaring WPA Desire.” Dew and Horning 110-33. Print.

Gomila, Billy. “LSU Board of Supervisors Approves GRAD Act Autonomies.” 21 Oct. 2011. 24 Dec. 2011. Web.

Graff, Gerald, and Cathy Birkenstein. They Say / I Say. New York: W. W. Norton, 2006. Print.

Horning, Alice. “Ethics and the jWPA.” Dew and Horning 40-57. Print.

Inoue, Asao. “Self-Assessment as Programmatic Center: The First Year Writing Program and Its Assessment at California State University, Fresno.” Composition Forum 20 (2009). Web.

Peters, K.J. “A New Rhetorical Topography: How the Composition Classroom Became the University Homeroom and Where to Draw the Line.” Enculturation 5.2 (2004). Web.

Ramage, John, John Bean, and June Johnson. Writing Arguments. 3rd Concise Edition. New York: Pearson Longman, 2009. Print.

“Regents Approve GRAD Act Agreements.” 27 Oct. 2010. 24 Dec. 2011. Web.

Smit, David. “Curriculum Design for First-Year Writing Programs.” The Longman Sourcebook for Writing Program Administrators. Eds. Irene Ward and William J. Carpenter. New York: Pearson Longman, 2008. Print.

Webb-Sunderhaus, Sara, and Stevens Amidon. “The Kairotic Moment: Pragmatic Revision of Basic Writing Instruction at Indiana University-Purdue University Fort Wayne.” Composition Forum 23 (2011). Web.

White, Edward. “Holisticism.” College Composition and Communication 35 (1985): 400-409. Print.

Yancey, Kathleen Blake. “Looking Back as We Look Forward: Historicizing Writing Assessment.” College Composition and Communication 50.3 (1999): 483-503. Print.

