Reading Response 2: Analyzing CCNY's Mid-90s Pilot Writing Program for Incoming Freshmen

The main objective of Barbara Gleason and Mary Soliday's pilot writing project for incoming freshmen at CCNY in the mid-90s appeared to be crafting a program that would support the needs of a diverse student body. That diversity would be cultural, but because they could not directly survey students on cultural background and relied mainly on transcripts and students' class output, they could only analyze the diversity of students' abilities and preparation levels. After criticism that CCNY was declining in quality due to open admissions and an insufficient distinction between remedial and mainstream writing courses, the project was meant to show that students in both groups could produce quality work. Overall, the results demonstrated a similarity between the average passing scores of remedial and mainstream students (Gleason 579); both groups averaged in the 70s and 80s (Cs and Bs). Proponents of open admissions saw this as evidence that the standardized writing tests dividing these groups were ineffectual at predicting work quality. Opponents of open admissions saw it as further proof of decline: if remedial students could pass so easily, there was a problem; if mainstream students were merely scraping by with passing grades, there was an even bigger problem.

Though the project would not change the minds of its opponents, it did offer considerable insight into what worked well and what did not.

Gleason is critical of the standardized CUNY Writing Assessment Test, which placed students into remedial or mainstream freshman English. It had to be used, because that is how the CUNY application process is structured, but Gleason repeatedly draws attention to the fact that it does not reliably predict students' future success. She references a 1985-1990 study by Alexander Astin that found high school grades and SAT verbal scores to be more accurate predictors of students' college success. Those measures have issues of their own, such as the variables of students' sociopolitical situations, but Gleason does not enter into that complication.

Despite her stance against standardization, the writing program required standardization on some level. This was partly for ease of grading and later data review, but it was also for the professors and students, who, when surveyed after the first semester, wanted more standardized expectations. The student portfolio, which originally included five components, was narrowed down to three by the second semester, limiting students' chances to demonstrate their skill across a wider variety of assignments. There was not much uniformity in the classes themselves, as instructors had been given considerable freedom. In later interviews, teachers asked for more clarity on the expectations and overall purpose of the courses.

In my own teaching experience in NYC public schools as a first-time student teacher in 2012, I wanted to avoid standardization, as I felt it excluded students by not taking other qualities into account. I didn't even want a rubric, especially since I felt there was more to writing quality than checking items off a list. Reality hit me, however, when I had so many essays to grade and had to assess each one individually. They were all on similar topics, so they all began to sound somewhat the same. I did end up creating a rubric, learning by practice something I had previously worked with only in pedagogical theory courses. I still don't believe in strict rubrics, as no rubric can really capture the uniqueness of a student's voice or style, but I know that some bar must be met in certain categories (understanding of the prompt, basic grammar and spelling, structural clarity, etc.). Even with the rubric, I had problems. I was teaching U.S. government to high school seniors, and they hated writing, even one-page essays. I ended up lowering my standards to reward those who even attempted to write something coherent or longer than five sentences. The ESL student whose writing was a grammatical mess, but factually correct, deeply earnest in its thoughts, and two pages long, received high marks. I couldn't always separate the rubric from my own emotions, which showed that both it and I were fallible and inevitably inconsistent.

There is no perfect way to judge student output and the overall success of a course. There are too many variables, as you are dealing with people. Moreover, you are not asking them to find a number by running it through an equation; you are expecting them to pull original words out of their minds and place them on paper. Any study, no matter the funding or support it receives, will likely raise more questions than it answers, and will show us more problems in our educational system than it solves.


Gleason, Barbara. "Evaluating Writing Programs in Real Time: The Politics of Remediation." College Composition and Communication 51.4 (June 2000): 560-588.