Exceptional Children, Vol. 59, No. 1, pp. 41-53. © 1992 The Council for Exceptional Children

Implementing a Successful Writing Program in Public Schools for Students Who Are Deaf

THOMAS N. KLUWIN
ARLENE BLUMENTHAL KELLY
Gallaudet University

ABSTRACT: A 2-year project to improve the writing skills of children who are deaf included instruction for teachers in the process approach to teaching writing. The project encompassed 10 public school programs for students who are deaf and included 325 students in Grades 4-10 and 52 teachers. The project included specific training goals for teachers, a self-report procedure for the teachers, and a data-collection and analysis phase to assess short-term effects on students' writing. Teacher self-reports indicated widespread involvement in the project, and pretest and posttest results showed dramatic improvement in students' writing, particularly in grammatical skills. Scoring systems for students' papers are included.


The difficulties that young students who are deaf have with writing English are well documented in a history that goes back several decades (Heider & Heider, 1940; Kluwin, 1979; Stuckless & Birch, 1966; Taylor, 1969; Thompson, 1936; Walter, 1955). More recently, in a study of the changes made by students who are deaf to narratives shared on a computer system, Livingston (1989) showed that the students engaged in surface word changes or rephrasings of entries to respond to teachers' inquiries for clarification rather than any major restructuring of the text. The writers who were deaf tended to make surface changes by adding or substituting words rather than through deletions, as was characteristic of some hearing writers. In addition, other research on the composing process of young children who are deaf suggests that some apparent errors in the writing of these children may result from the process of thinking in one language and writing in another when there is no clear concept of how to compose in either situation (Mather, 1989).

Although researchers have repeatedly observed that young writers who are deaf have a poor command of written English syntax, more recent work suggests that their problems with writing may be related to not knowing how to compose effectively. Parallel to these investigations of the poor writing skills of children who are deaf is an extensive research history on the process of writing, as both a theoretical construct and a curricular innovation (Applebee, 1982; Hillocks, 1987; Humes, 1983). The thrust of this work is that writing, as a process, is not a linear sequence of steps but rather a recursive process with identifiable subprocesses. This approach to writing both as theory (Humes) and as pedagogy (Applebee) has a considerable and successful tradition. The goal of teaching writing as a process is to get students to work through the same general steps in composing that skilled writers go through, rather than teaching writing through correcting finished compositions. In other words, process approaches to writing instruction are effective in that they promote the thinking process of the individual student.

In a review of 20 years of writing research (about 2,000 studies), Hillocks (1987) came to some specific conclusions about the effects of such interventions. Hillocks reported that the use of editing skills, such as grammar and mechanics corrections, as the primary focus of writing instruction had a negative effect on outcomes. Writing programs that focused on the study of writing as "products" were more effective, but not as effective as forms of writing instruction that focused on the production of discourse or on activities that fostered the production of discourse, such as planning or organizing.

In a large-scale study, Baxter and Kwalick (1984) reported that the writing of 1,029 high school students improved after only 15 weeks of instruction using a process approach to composing. They reported "contradictory" results in that holistic scores for papers increased, but at the same time the number of grammatical errors made by the students also increased. Working with 48 high school students in a one-semester project, Moriarity (1978) reported that instruction in any component of the writing process led to an improvement in compositions rated impressionistically. Though Moriarity's study was flawed by a possible "Hawthorne" effect, it fits the regular pattern of findings for this kind of evaluation. Working with college-age students, Clifford (1981) reported that a modified process approach was successful in an experimental/control group study. Covarying for initial between-group differences, Clifford reported significantly greater gains in the students' holistic scores, but no differences were found in their knowledge of the mechanics of writing or their use of the mechanical conventions of writing.

Humes (1983) offered a possible explanation for the differential effects of process approaches to writing instruction. In her review of the research in this area, she commented that the biggest impact of this type of composing was on planning, with considerably less emphasis on "translating," or putting words to paper. In addition, for those who were successful in this type of composition instruction, most revisions involved conceptual restructuring and responding to audience interests; writers placed less explicit emphasis on revision of the formal aspects of the composition. Knudson (1988) supported this contention: She reported that reduced amounts of direct teacher involvement led to better compositions when the approach was to teach writing as a process.

This history of process approaches to teaching writing suggests three general findings:

• Studies regularly cite positive effects for this approach when holistic or impressionistic scoring is employed, even for relatively short periods of instruction.
• These approaches report very mixed results in the improvement of specific grammatical or mechanical skills; that is, improvements in grammatical skills are occasionally reported but are difficult to link to the instructional procedure used.
• Some information exists concerning the use of process approaches with writers with learning disabilities but not with writers who are deaf. That is not to say that the approach has not been used with these writers, but rather that there are no formal evaluations of these attempts using student writing as an outcome measure.

Consequently, to improve the English composing skills of young writers who are deaf, we conducted a 2-year intervention program in 10 public school programs for students who are deaf by training teachers to use the process approach to teaching writing. We assumed that the method would be generally effective in improving the overall quality of students' compositions as measured by impressionistic scoring of overall writing quality, and we hoped that improvements in grammatical complexity would be seen as well.

METHOD

Teacher Training

Fifty-two teachers from 10 school districts around the United States, with an average of 10 years of teaching experience, participated in the project. Participants came from every region of the United States. Forty percent of the teachers had master's degrees, 31% had done postgraduate work, and the remaining teachers had bachelor's degrees. Eighty-two percent had permanent certificates as teachers of the deaf; 3 were certified to teach English, and 7 were certified to teach secondary-level classes. One participating teacher identified herself as hard of hearing, and 2 identified themselves as deaf. During the 2 years of the project, two separate workshop sequences were conducted for the teachers of the deaf involved in the project, both on and off the Gallaudet University campus.

TABLE 1
Comparison of Four Types of Student Participants

                               Group 1,           Group 2,            Group 3,             Group 4,
                               First Year Only    Second Year Only    Both Years/          Both Years/
                                                                      Different Teachers   Same Teacher
Variable                       (n = 144)          (n = 101)           (n = 79)             (n = 102)

Age: Mean (SD)                 14.09 (3.01)       12.05 (2.70)        13.49 (2.86)         13.28 (2.60)
Sex: % Female                  49.7               56.6                48.1                 55.9
Ethnicity: % Minority          50.4               58.2                54.7                 37.1
Hearing loss: BEA Mean (SD)    86.46 (19.29)      83.63 (21.07)       91.45 (19.58)        82.90 (19.72)
Reading level: G.E.S. (SD)     3.70 (2.32)        4.62 (3.82)         4.06 (2.46)          4.00 (2.27)

Note: BEA = Better Ear Average in decibels. G.E.S. = Grade Equivalent Score.

The first-year workshops focused on developing a rationale for writing instruction, teaching writing as a process rather than as a product, and promoting writing through dialogue journal writing. The second-year workshops had three purposes: to review the first year's training goals with an emphasis on a clearer definition of the goals of a writing program, to learn to use specific rationales in the selection of writing topics, and to learn how to provide clear and useful feedback to students about their compositions. During the second-year workshops, the participants were taught how to express nonjudgmental acceptance of the content of students' writing while discussing revisions to the form, and how to create classroom dialogue about form as a means to convey content in a specific fashion. Two forms of feedback were stressed during the second year. First, the teachers were given additional training in the preparation of scoring guides. Second, the face-to-face writing conference was introduced as a technique for providing feedback. Each training session consisted of two 8-hour days. Training procedures included a mix of lectures, discussion, and feedback from the teachers. Posttraining feedback questionnaires indicated a positive response to the content of the training and the presentations.

Sample

This was a quasi-experimental study of the implementation of a teaching method under local schooling conditions; consequently, the movement of students into and out of the project was not controlled. As a result, four types of student groups emerged naturally as the project went along:

1. Students who started the project but left after 1 year.
2. Students who entered the project at the start of the second year.
3. Students who were in both years of the project but who changed teachers from Year 1 to Year 2.
4. Students who kept the same teacher for both years of the project.

Because of these differences in degree of participation, the variable exposure to instruction was defined as having these four values. Table 1 compares the four groups of students by age, gender, ethnicity, severity of hearing loss, and reading level. Some systematic differences could be seen among the four groups. Group 2 was considerably younger than any of the other groups, but the difference in age between Group 3 and Group 4 was not significant.

Group 1 was significantly older than the other groups. The gender differences did not appear to be substantial, but the proportion of minority group students was considerably lower in Group 4. Group 3 had more students with severe hearing losses than the other groups. Group 1 had a reading level that was significantly lower than that of the other groups. This group was also the most difficult to collect posttest data from because we did not know its members had left the project until the fall of the following year. As a result, these students were not used in later comparative analyses. On the whole, there were a number of random, but significant, differences among the four groups. These differences were compensated for in later analyses by statistically adjusting pretest and posttest scores for the differences in age, ethnicity, hearing loss, and reading ability.

Instrumentation

Demographic Information. With the permission of the students' parents and the cooperation of the schools and the Center for Assessment and Demographic Studies, we obtained background information on the students. This included the date of birth, sex, ethnic group, degree of hearing loss, etiology, and onset of deafness for each child. In addition, we obtained students' current reading achievement scores from school records. The measure of reading skill was the Stanford Achievement Test, Hearing Impaired version. We asked the schools for the scores on the reading comprehension subtest. Because not all schools provided us with scaled scores, we used grade equivalent scores (see Table 1).

Writing Assessment. During the fall of 1987, each student in the project was scheduled to be given the descriptive, persuasive, and business letter test stimuli appropriate to his or her age level that had been developed by the Educational Testing Service (ETS) for use by the National Assessment of Educational Progress (NAEP) (Mullis, 1980). The tests were administered locally by the teachers in the project and returned by mail to the research team during the fall of 1987. Students were allowed as much time as needed but generally completed each test within half an hour. With the exception of one group of essays (submitted by a teacher who encouraged students to rewrite their essays), all essays were the product of a single draft.

The test administrators were told to encourage the children to write and to explain to the students what was expected, but not to tell them what or how to write. The stimulus was provided to the students in print and was read to them using total communication. The process was repeated in the spring of 1989.

Teacher Logs. To assist teachers in monitoring their writing instruction, we developed a self-report system that requested information from the teachers about the quantity of writing the students were doing. This information included a description of the books used, pages covered, and amount of classwork and homework. On a monthly chart, the teachers estimated, in 15-minute increments, what portion of their class time was devoted to each activity. The purpose of the coding system was to gather estimates of the amount of writing instruction taking place and the degree to which the teachers followed the procedures for teaching writing as a process. Six categories of writing instruction were to be coded by the teachers: dialogue journal writing, prewriting or organizing activities, writing in class, revision activities, publishing (any class time devoted to the production of material in a finished form), and other writing activities.
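As an illustration only, the sketch below aggregates hypothetical monthly log entries, recorded in 15-minute blocks, into minutes per day and the share of time in each category. The record layout, field names, and figures are assumptions for demonstration, not the project's actual coding forms.

```python
from collections import defaultdict

# Hypothetical log entries: (teacher, category, blocks_of_15_min) for one month.
# Category names follow the six categories described above.
LOG = [
    ("T01", "dialogue journal", 20),
    ("T01", "prewriting/organizing", 12),
    ("T01", "writing in class", 30),
    ("T01", "revision", 10),
    ("T01", "publishing", 4),
    ("T01", "other writing", 6),
]

SCHOOL_DAYS_PER_MONTH = 20  # assumed for the per-day estimate

def summarize(log, days=SCHOOL_DAYS_PER_MONTH):
    """Return minutes per category, minutes per day, and category shares."""
    minutes = defaultdict(int)
    for _teacher, category, blocks in log:
        minutes[category] += blocks * 15  # each block is a 15-minute increment
    total = sum(minutes.values())
    shares = {cat: m / total for cat, m in minutes.items()} if total else {}
    return dict(minutes), total / days, shares

per_category, minutes_per_day, shares = summarize(LOG)
print(f"Writing instruction: {minutes_per_day:.1f} min/day")
for cat, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"  {cat:25s} {per_category[cat]:4d} min  ({share:.0%})")
```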

ANALYSIS

Essay Scoring

Several scoring systems were used. For the persuasive and descriptive essays, counts were made of the number of words, sentences, and clauses, both grammatical and ungrammatical, and the number of "t-units." Words and sentences were defined orthographically; t-units were defined as a main verb clause and any subordinate clauses. Grammatical clauses were defined as complete verbs with their subjects and subordinating or coordinating conjunctions, if appropriate. If a group of words functioned as a clause in a sentence but lacked a major element, such as a complete verb or an appropriate subordinating conjunction, it was counted as an ungrammatical clause. The persuasive and descriptive essays were also coded using holistic scoring systems. The holistic scoring system for the descriptive and persuasive essays used a 6-point scale, ranging from outstanding papers to barely comprehensible papers.

TABLE 2
Holistic Scoring Guide for Persuasive Essays, "Letter to the Principal": NAEP Age 13 Criteria Modified for Students Who Are Deaf

Score   Description of Papers

6

These papers have all the elements of a "5" paper. In addition, they cast the materials in a systematic structure that reflects the logical steps in the process of bringing about the change. At least two, and possibly all, of the elements are expanded so that the various issues are related to each other and to the proposition being defended. There may be a few minor mechanical problems.

5

Papers state a change or identify a problem, explain how to bring about the change or solve the problem, and tell how the change will benefit the school. Mechanical problems should be relatively minor (unless the paper is very strong). Errors would include occasional run-on sentences or sentence fragments, very infrequent morphological errors, and some word choice errors such as preposition usage or idiomatic expressions.

4

These papers state what the problem is or what change is needed, provide a reason for the change or problem, and explain with some detail how to solve the problem or bring about the change. Mechanical or grammatical errors include some morphological errors or word deletions, particularly articles; some verb tense errors; some run-on sentences or sentence fragments; some word choice errors; and missing possessive or plural markings.

3

These papers state a clear problem in the school or request a specific change. They must offer some indication of a resolution of the problem. Errors include the deletion of prepositions and articles, frequent run-on sentences, failure to mark plurality, frequent word choice errors, and inappropriate use of punctuation.

2

There is a central core of difficulty, but it is not defined. These papers are often detailed complaints. They include errors in syntax, diction, and mechanics. Errors include many run-on sentences or sentence fragments, some deletions of major sentence elements, frequent deletion of morphological markers, frequent word choice errors, and frequent capitalization errors.

1

These papers do not have an identifiable paragraph structure and may list sentences or words related to a topic. Simple sentence structures or incomprehensible sentence fragments, deletions of major sentence elements, or other serious syntactic faults are present. Word choice is marked by very simple vocabulary and repeated words or phrases.

0

No-response papers should be given to the Table Leader for further analysis/scoring.

Note: NAEP = National Assessment of Educational Progress. Source: Mullis, I. (1980). Using the primary trait system for evaluating writing. Princeton, NJ: Educational Testing Service. Adapted by permission.

A seventh category was used when the paper was comprehensible but off the topic, which happened more with the persuasive essays than with the descriptive essays. The operational definitions of these scales are shown in Tables 2 and 3.

The business letters represented a distinct type of writing from the persuasive and descriptive papers; thus, two different types of scoring systems were used. Because variations in missing information in the business letters made holistic scoring difficult, we developed an individual-feature-analysis system, which counted the presence or absence of specific pieces of information such as the greeting, the internal address, or the closing. Two primary categories were used: form and content. The form categories included the internal address, date, greeting, writer's name, return address, and closing. The content was coded for a reference to the calendar, a request for the item, a statement of a specific choice, and the addition of extraneous information. In addition, there were specific content requirements for effective communication about the topic: the writer had to mention a particular time, to request that the item be sent, and to provide information as to where to send it.

TABLE 3
Holistic Scoring Guide for Descriptive Essays, "Describe Something": NAEP Age 13 Criteria Modified for Students Who Are Deaf

Score   Description of Papers

6

These papers focus on a single object and describe it with concrete and clear language. They contain considerable detail and substance, originality of language, and some sense of structure. There may be a few minor mechanical problems; however, the focus remains strong.

5

These papers focus on a single object and describe it clearly, but with less detail, originality, or focus than the "6" papers. There may be a little sense of organization, but the object should be individualized, and mechanical problems should be relatively minor (unless the paper is very strong). Errors would include occasional run-on sentences or sentence fragments, very infrequent morphological errors, and some word choice errors such as preposition usage or idiomatic expressions.

4

These papers describe something, but are generally thin and are often very short or confused. Mechanical or grammatical errors include some morphological errors or word deletions, particularly articles; some verb tense errors; some run-on sentences or sentence fragments; some word choice errors; and missing possessive or plural markings.

3

These papers describe something, but the understanding is marred by serious grammatical or mechanical problems. Errors include the deletion of prepositions and articles, frequent run-on sentences, failure to mark plurality, frequent word choice errors, and inappropriate use of punctuation.

2

Papers scored as "2" are very brief, nondescriptive, and confused. They contain serious errors in syntax, diction, and mechanics. Errors include many run-on sentences or sentence fragments, some deletions of major sentence elements, frequent deletion of morphological markers, frequent word choice errors, and frequent capitalization errors.

1

These papers do not have an identifiable paragraph structure and may list sentences or words related to a topic. Simple sentence structures or incomprehensible sentence fragments, deletions of major sentence elements, or other serious syntactic faults are present. Word choice is marked by very simple vocabulary and repeated words or phrases.

0

No-response papers should be given to the Table Leader for further analysis/scoring.

Note: NAEP = National Assessment of Educational Progress. Source: Mullis, I. (1980). Using the primary trait system for evaluating writing. Princeton, NJ: Educational Testing Service. Adapted by permission.

The coding system involved only checking for the presence of key words or phrases. An explanation of this system is provided in Table 4. Because the business letters were so brief, it was impractical to do grammatical counts on them. Instead, they were evaluated using a 6-point primary trait scoring system for grammatical correctness, which rated the business letters on a scale ranging from being virtually free of grammatical or mechanical errors to having substantial deletions of major syntactic elements and a failure to observe orthographic conventions (see Table 5). A seventh category indicated that the letter was too brief to be evaluated.

The same general scoring procedure was followed for all three impressionistic scoring systems: the holistic scoring system for the descriptive essay, the holistic scoring system for the persuasive essay, and the primary trait rating system for the grammaticality of the business letter. Before papers were scored, we explained the scoring system to two readers. We discussed the criteria and the anchor papers.

TABLE 4
Trait Identification Scheme: "Rock Poster" Business Letter Scoring of Responses

Form: Structure of Letter

Internal Address
  Complete (2): Mary Jones, Manager; National Book Store
  Partial (1): Mary Jones or Manager or National Book Store

Date
  Complete (2): Month, day, year
  Partial (1): Any indication of a date

Greetings
  Complete (2): Ms. Mary Jones; Manager; Miss: or Ms.:
  Partial (1): Mary or Jones

Writer's Name
  Complete (2): Chris Brown (student's full name)
  Partial (1): Chris or Brown or student's first or last name

Address
  Complete (2): 37 Elm St., Gulf, Ohio 76453, or student's own home address
  Partial (1): Missing any information

Closing
  Complete (2): Closing with complete name
  Partial (1): Partial name or inconsistent signature or closing only

Content of Letter (score 1 for presence)

Reference: Refers to the calendar. "I want a poster about C. L. Cool." "Send me 'Mountains and Streams.'"
Choice: States a choice. "I want a poster about C. L. Cool." "Poster's name is Night Ranger."
Extraneous: Includes information unrelated to a business letter. "I like rock and roll." "How are you? I am fine."
Request: Asks for a poster or calendar. "Send me the free poster calendar."

Note: The following are instructions for this scoring system:
• Rationale: The stimulus requires respondents to clearly communicate the information necessary to explain the situation and obtain an appropriate response. The purpose of the scoring is to identify the elements essential to these goals.
• Form Structure: For each item, write a "1" or "2" as appropriate. A "1" indicates a partial response according to the criteria; a "2" indicates a complete response according to the criteria. A "3" is reserved for special coding.
• Content Scoring: Score these with a "1" to indicate presence of the content.
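To make the presence coding in Table 4 concrete, here is a minimal sketch of how a checklist of this kind could be applied automatically. The keyword patterns and the sample letter are illustrative assumptions, not the project's actual coding rules; in the study the coding was done by human readers.

```python
import re

# Illustrative keyword patterns for the content traits of the "rock poster" letter.
# These patterns are assumptions for demonstration purposes only.
CONTENT_TRAITS = {
    "reference":  r"\b(calendar|poster)\b",            # refers to the item offered
    "choice":     r"\b(I want|my choice|name is)\b",   # states a specific choice
    "request":    r"\b(send|please send|mail)\b",      # asks that the item be sent
    "extraneous": r"\b(how are you|I like|I am fine)\b",
}

def score_content(letter: str) -> dict:
    """Score each content trait 1 if present, 0 if absent (presence coding)."""
    return {trait: int(bool(re.search(pattern, letter, re.IGNORECASE)))
            for trait, pattern in CONTENT_TRAITS.items()}

sample_letter = (
    "Dear Ms. Jones, I want a poster about C. L. Cool. "
    "Please send it to 37 Elm St., Gulf, Ohio. How are you? I am fine."
)
print(score_content(sample_letter))
# e.g. {'reference': 1, 'choice': 1, 'request': 1, 'extraneous': 1}
```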

Then the readers practice-scored 20 papers in groups of 5 to develop reliability. This process was repeated until the desired level of reliability was achieved. During the scoring session, each reader assigned a score from 1 to 6 to a paper. If the scores of the two readers were within 1 point of each other, the scores were accepted as being in agreement. The score for the paper was the sum of the two readers' scores. In the event of a disagreement, the referee "pulled" the paper to discuss the discrepancy and to attempt to reestablish reliability between the two readers. However, after training, the readers usually did not differ by more than 1 point on any paper. Consequently, discrepant scores occurred less than 3% of the time.
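The agreement rule just described is simple enough to state in a few lines of code. The sketch below, using made-up reader scores, shows the decision logic: scores within 1 point are summed into the paper's score, and larger discrepancies are flagged for the referee.

```python
from typing import Optional

def combine_scores(reader_a: int, reader_b: int) -> Optional[int]:
    """Return the paper's score (sum of the two readers' 1-6 holistic scores)
    if they agree within 1 point; otherwise return None so the referee can
    pull the paper and resolve the discrepancy."""
    if abs(reader_a - reader_b) <= 1:
        return reader_a + reader_b   # combined score ranges from 2 to 12
    return None                      # discrepant: send to referee

# Made-up scores for three papers from two readers.
papers = [("P1", 4, 5), ("P2", 2, 2), ("P3", 3, 6)]
for paper_id, a, b in papers:
    score = combine_scores(a, b)
    status = f"score {score}" if score is not None else "pulled for referee"
    print(f"{paper_id}: readers {a}/{b} -> {status}")
```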

TABLE 5
Primary Trait Scoring Guide: "Grammaticality"

Score   Description of Writing Sample

6

Few if any errors. Random errors, typos, etc.

5

Occasional run-on sentences or sentence fragments. Very infrequent morphological errors. Some word choice errors; i.e., prepositions, idiomatic expressions. Nonexistent to minor mechanical faults; i.e., comma faults or inappropriate end punctuation.

4

Some morphological errors. Some word deletions, especially articles. Some verb-tense errors. Some run-on sentences or sentence fragments. Incorrect subordination. Some word choice errors. Possessive or plural punctuation missing.

3

Deleted prepositions and articles. Frequent run-on sentences. Inappropriate question formation. Confusion of subordination and coordination. Failure to mark plurality. Some word choice errors. Absence of end punctuation. Other types of punctuation errors.

2

Many run-on sentences or sentence fragments. Deletion of major sentence elements. Frequent deletion of all morphological markers. Frequent word choice errors, especially interchanging nouns and verbs. Inversion of compound words or incomplete compound words. Frequent capitalization errors.

1

Double or redundant use of words; i.e., "our my." Simple sentence types, or invariant or stereotypical sentence types. Many incomprehensible sentence fragments. Frequent deletion of major sentence elements. Scrambled syntactic elements. Very simple vocabulary choice. Repeated words or phrases. Many capitalization errors.

0

Blank

When the readers were consistent with each other and with the criteria, they then scored blocks of 20 papers each to check consistency with the referee. This system was developed by ETS (Mullis, 1980) as a way of ensuring greater reader agreement in testing situations where individual subject or program evaluations were concerned. In theory, reader agreement is 100% because no disagreements beyond the 1-point discrepancy limit are allowed; however, in practice, uncorrected reliability across all three separate scoring systems was 97%.

Outcome Measures

Grammatical Complexity. From the counts of the grammatical categories for the descriptive and persuasive essays, three measures of syntactic complexity were computed: words per clause, words per t-unit, and clauses per t-unit. Consequently, there were nine measures of grammatical complexity for the pretest papers, including a measure of syntactic errors for both the descriptive and persuasive pretest essays and the primary trait grammatical rating for the pretest business letter. A factor analysis, computed to reduce the number of variables, generated a grammatical complexity factor that served as a measure of syntactic complexity and general grammatical accuracy: it consisted of the clauses-per-t-unit variables, the overall quality rating for the grammaticality of the business letter, and the syntactic complexity measures for the descriptive essay. This factor score was used as a global description of grammatical complexity because it included a range of writing settings and measures. To create posttest scores, the factor loadings for the pretest grammatical complexity measure were used as weights in computing the posttest factor scores for grammatical complexity.

Overall Writing Quality. To develop a composite quality measure, as opposed to a measure of grammatical complexity, the holistic scores for the descriptive pretest and the persuasive pretest were factor analyzed along with three pretest factor scores for the business letter, described next. In the process of generating a single score for the general quality of the business letter, a factor analysis of the trait counts for the business letter form and content categories produced three separate factor scores. First, the content mastery factor included all the content measures and the essential information of the return address; scoring well on this trait would mean that the writer would receive the product described in the stimulus, whereas a low score would mean that he or she would not. Second, the formal mastery factor included the formalism of the internal address and the date, as well as the return address and the name. Third, the social mastery factor included the elements that were descriptive of a social letter as well as of a business letter, whereas the categories in the other factors seemed more specific to business letters. The social mastery factor also contained a large loading for extraneous information.

The composite quality score consisted of the persuasive holistic score and the descriptive holistic score, as well as the business content mastery score and the business form mastery score. The composite quality score was then used in further analyses as the measure of overall writing improvement. In summary, the measure of writing quality used in this study addressed three questions: Did the students' ability to write a description improve? Did the students' ability to construct a persuasive argument improve? Did a student's chances of receiving a product as the result of writing a business letter improve?
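The sketch below illustrates the two computational steps described above under stated assumptions: the three syntactic complexity ratios are computed from raw counts, and pretest factor loadings are reused as fixed weights to form posttest factor scores. The variable names, loadings, and numbers are hypothetical; the actual analysis used all nine pretest measures and standard factor-analysis software.

```python
import numpy as np

def complexity_ratios(words: int, clauses: int, t_units: int) -> dict:
    """Words per clause, words per t-unit, and clauses per t-unit for one essay."""
    return {
        "words_per_clause": words / clauses,
        "words_per_t_unit": words / t_units,
        "clauses_per_t_unit": clauses / t_units,
    }

# Hypothetical pretest measures for 4 students (rows) on 3 standardized variables.
pretest = np.array([
    [ 0.5,  0.2,  1.1],
    [-0.3,  0.1, -0.4],
    [ 1.2,  0.9,  0.8],
    [-0.8, -1.0, -0.6],
])

# Hypothetical loadings from the pretest factor analysis (one retained factor).
loadings = np.array([0.7, 0.6, 0.8])

# Posttest measures are weighted by the *pretest* loadings so that pretest and
# posttest factor scores are on a comparable scale.
posttest = pretest + 0.3  # pretend every student improved a little
pretest_factor = pretest @ loadings
posttest_factor = posttest @ loadings

print(complexity_ratios(words=120, clauses=15, t_units=10))
print("pretest factor scores: ", np.round(pretest_factor, 2))
print("posttest factor scores:", np.round(posttest_factor, 2))
```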

Evaluation Questions

What Evidence Was There That the Teachers Actually Taught in the Way They Were Trained? The teachers in the project kept logs of their teaching activities during the 2 years of the project as an inexpensive check on the impact of the training. Teachers reported the number of minutes they spent in writing instruction during the day. On average, about 40% of all available class time for literacy-related subjects was devoted to teaching writing. During the first year of the project, an average of 22.21 minutes per day was devoted to the teaching of writing; the average for the second year was 16.45 minutes per day. To assess the implementation of teaching writing as a process, the logs were analyzed for the completion of the various categories of teaching writing as a process; that is, did the teachers engage in the cycle of prewriting, writing, revision, and publishing, irrespective of other writing activities? Of the 52 teachers involved in the project across the 2 years, 60% regularly taught all four phases of the writing process by their own report. Thirty percent of the teachers regularly taught the first three phases of the process but did not regularly engage in publishing as an activity. Eight percent of the teachers engaged only in prewriting and writing in the classroom. One teacher either did not follow the writing process approach or failed to properly note her activities on the self-reporting log system. This teacher entered the project in the second year. Her students are not included in the subsequent inferential analyses.

Was Instruction Effective? The primary hypothesis of the study was that if instruction were effective, a posttest score adjusted for the effects of maturation and differences in the implementation of the training would be greater than a pretest score.

Since there was variability in the amount of instruction provided, it was possible that the students who were only in the second year of the project would show the smallest amount of change in their writing skills and that the students who had 2 years of instruction with the same project teacher would show the greatest amount of change. To assess the impact of additional training on writing change, this difference was kept in the analysis.

To test the primary hypothesis, the pretest factor scores for the overall quality of the composition and for the grammatical complexity of the essays were adjusted for the between-subjects differences described earlier in this article. Adjusted pretest scores were computed in a multiple-regression analysis using the students' beginning reading ability as measured by their grade equivalent score, their degree of hearing loss as measured by better ear average, the gender of the student, the age of the student when the data collection was done, and the ethnicity of the student as predictor variables. Maturation was controlled for by using the child's age at the two testing points in the prediction equations to create the adjusted scores. In other words, the posttest score would be reduced because the child was older. The equation to adjust the pretest composite quality score accounted for nearly 60% of the variance. Because these were the variables that appeared to differentiate the groups who participated in the study, we are confident that we controlled for sources of variance not related to instruction. The F value for the multiple-regression equation to adjust the grammar pretest factor score was statistically significant, although only 18% of the variance was accounted for in this equation. This suggests that, although there may be between-groups differences on demographic variables, these same demographic variables are not substantially related to the grammatical complexity of the students' writing.

With one addition, the same multiple-regression procedure was used to adjust for between-subjects differences on posttest factor scores. Because some of the students had been in the study for only 1 year, whereas others had been in the study for 2 years, it was necessary to adjust the posttest results for the amount of time between testing sessions. For the students who were in the study for 2 years, this was 18 months; for the other students, it was 9 months. This equation predicted about 60% of the variance in the posttest composite quality score, suggesting that the holistic scores were more sensitive to between-student differences than were the grammar measures, which predicted only 14% of the posttest grammatical complexity score variance.

In summary, both pretest and posttest scores were adjusted for the between-groups differences noted earlier, specifically, age, sex, ethnicity, degree of hearing loss, and reading ability. In addition, the posttest score was adjusted for the amount of instruction, as measured by the months of instruction that the students received. Because both the adjusted pretest scores and posttest scores included corrections for the age of the students, maturational effects were eliminated from the subsequent analysis. Figure 1 shows the adjusted pretest and posttest group means for both the composite quality score and the grammatical complexity score.

To test the hypothesis stated previously, a repeated-measures analysis of variance was computed using the within-subjects factors of the time of testing (adjusted pretest versus adjusted posttest) and the measure used to judge progress (composite quality versus grammatical complexity). The two indexes were the posttest score and the expected posttest score. The three-level factor was the degree of exposure to instruction. The first level of the factor consisted of students who were only in the second year of the project; the second level, students who were in both years of the project but had different teachers; the third level, students who had the same teacher for both years of the project. The between-subjects factor of instructional group membership (1 year of instruction, 2 years of instruction with two different teachers, and 2 years of instruction with the same teacher) was included as a secondary control for the effects of amount of instruction. In Table 6, Exposure refers to the three groups of students who had varying degrees of exposure to instruction, that is, 9 months or 18 months. This difference should have been controlled for in the process of creating the adjusted scores described earlier. It is clear from the F value for Exposure that the process of adjusting for between-groups differences on this factor was successful in controlling for these differences.
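A minimal sketch of this kind of regression adjustment is given below, assuming standardized writing scores and a small set of numeric covariates. The variable names and data are invented for illustration, and residualizing on the covariates is an assumed form of the adjustment; the article describes a multiple-regression adjustment but not its exact algebra, and the posttest version would also enter months between testings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical covariates: age, reading grade equivalent, better ear average (dB),
# sex (0/1), and minority status (0/1). The outcome is a pretest factor score.
age      = rng.normal(13, 2.5, n)
reading  = rng.normal(4.0, 2.5, n)
bea      = rng.normal(86, 20, n)
sex      = rng.integers(0, 2, n)
minority = rng.integers(0, 2, n)
pretest  = 0.2 * reading - 0.05 * (age - 13) + rng.normal(0, 1, n)  # fake scores

# Ordinary least squares: predict the score from the covariates, then treat the
# residual as the "adjusted" score (an assumption for this sketch).
design = np.column_stack([np.ones(n), age, reading, bea, sex, minority])
coef, *_ = np.linalg.lstsq(design, pretest, rcond=None)
adjusted_pretest = pretest - design @ coef

r_squared = 1 - np.var(adjusted_pretest) / np.var(pretest)
print(f"Variance in the pretest score explained by the covariates: {r_squared:.2f}")
```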

FIGURE 1
Adjusted Scores for Composite Quality and Grammatical Complexity

[Bar graph of cell means for adjusted pretest and adjusted posttest scores (scale 0 to 3) on the composite quality and grammaticality measures, shown separately for the 1 Year Only, 2 Years/Different Teachers, and 2 Years/Same Teacher groups.]

TABLE 6
Repeated-Measures Analysis of Variance

Source of Variation                      df        MS        F        Sig. of F
Tests of between-subjects effects
  Exposure                               2,183      3.91      1.58    .208
Tests involving within-subjects effects
  Time                                   1,183    320.46    646.74    .000
  Exposure x Time                        2,183      2.95      5.95    .003
  Test                                   1,183    190.31    222.21    .000
  Exposure x Test                        2,183      5.19      6.05    .003
  Exposure x Time x Test                 2,183      3.27     10.86    .000

Time refers to the time of testing, that is, before or after training. There is a statistically significant F value for time of testing in Table 6. In other words, adjusted posttest scores, which included an adjustment for maturation effects, were higher than the adjusted pretest scores. Instruction was effective because the factor that measured the difference between the adjusted pretest scores and the adjusted posttest scores was statistically significant.

There was also a statistically significant effect for the type of test, identified as Test in Table 6. As can be seen in Figure 1, adjusted posttest scores were higher for the grammatical complexity measure than for the composite quality measure; however, the magnitude of the differences between the pretest and posttest scores was greater than the differences among the three groups. There is a two-way interaction of Exposure to instruction and Test. The bulk of this effect is traceable to pretest/posttest differences. The group that had only 1 year of instruction showed the same amount of change between their pretest composite quality scores and their posttest composite quality scores; however, they showed dramatic improvements in the complexity of their grammar.

For the other two groups, those with 2 years of instruction, the improvement in grammatical complexity was nearly as great as for the group with only 1 year of instruction, but their composite quality scores also jumped nearly as much. The three-way interaction of exposure, time of testing, and type of test is reflected in Figure 1. The three-way effect comes primarily from the pretest/posttest differences and from the substantial differences between the change in the composite quality score and the change in the grammatical complexity score.

The apparent changes in the adjusted grammar scores are partially explicable as artifactual, because similar findings were noted during a reanalysis of the 13-year-olds' essays from the NAEP (Soltis & Walberg, 1989). Specifically, the discrepancy in results may lie in the nature of the scales that were used. The holistic scores that generated the composite quality score formed a discrete, ordinal scale with a range of 12 values. The counts for the grammatical measures had a maximum range of 30 points on a continuous, interval scale. There is more variance between scores for the grammatical measures, thus creating the possibility for greater score differences. In addition, the multiple regression to adjust for various extraneous effects accounted for less of the variance for the grammatical complexity score than it did for the qualitative score. What is probably true of these data is that the differences and the directions of the differences are real, but that the degree of discrepancy between measures may be artifactual. In other words, there was an increase in the grammatical complexity of all of the students' writing, but the magnitude of that change in relation to the change in the composite quality score may not be as great as it appears.
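For readers who want to see the shape of such an analysis, the sketch below sets up a repeated-measures ANOVA like the one summarized in Table 6, using statsmodels. It is a simplified stand-in, not the project's actual computation: AnovaRM handles only the within-subjects factors (Time and Test), so the between-subjects Exposure factor is omitted here, and the data are randomly generated adjusted-score substitutes.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for student in range(60):
    base = rng.normal(0, 0.5)                       # student-level baseline
    for time in ("pretest", "posttest"):
        for test in ("quality", "grammar"):
            gain = 0.8 if time == "posttest" else 0.0              # simulated Time effect
            extra = 0.4 if (time, test) == ("posttest", "grammar") else 0.0  # Time x Test
            rows.append({"student": student, "time": time, "test": test,
                         "score": base + gain + extra + rng.normal(0, 0.3)})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="score", subject="student",
                 within=["time", "test"]).fit()
print(result.anova_table)   # F tests for time, test, and time x test
```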

DISCUSSION

It is apparent from our study that teaching writing as a process results in improvements in the writing of students who are deaf. When the teacher's focus was on providing specific steps that the students could take to improve their compositions, the students' writing improved beyond what would be expected from normal maturation, as measured both by overall impressionistic measures and by measures of increasing grammatical complexity.

Teaching writing as a process improved the overall quality of the writing of students who are deaf. Previous research on teaching writing through a process approach had strongly suggested that we would achieve such results using impressionistic scoring techniques, which focus on the overall quality of the writing. It was not as apparent from the previous research that changes in grammatical complexity could be consistently expected, but such a finding has occasionally been reported. When writers become preoccupied with grammatical correctness, they tend to use simpler constructions, employ more familiar or common words, and experiment less with language. The apparent change in grammatical complexity may therefore be due to greater experimentation on the part of the students or to a greater sense of freedom of expression, which could be seen in a major increase in the length of students' sentence elements, especially the number of words per clause and per t-unit.

Because writing is a complex task with many components that might be taught, teachers tend either to avoid teaching writing or to substitute other, more manageable tasks for actual composing or instruction in composing. By giving the teachers an approach that produced rapid, apparent improvements in the writing of individual students, we encouraged composing as an activity. Consequently, not only did the students become better writers, but many of the teachers were able to have a positive and rewarding teaching experience. Because this activity was sustained over a 2-year period, it was possible to document a substantive change in writing quality.

REFERENCES

Applebee, A. N. (1982). Writing and learning in school settings. In P. M. Nystrand (Ed.), What writers know: The language, process, and structure of written discourse (pp. 365-382). New York: Academic Press.

Baxter, M., & Kwalick, B. (1984). Holism and the teaching of writing (Research Report). New York: Scribner Educational Publishers.

Clifford, J. (1981). Composing in stages: The effects of a collaborative pedagogy. Research in the Teaching of English, 15(1), 37-53.

Heider, F., & Heider, G. (1940). A comparison of sentence structure of deaf and hearing children. Psychological Monographs, 52(1), 42-103.

Hillocks, G. (1987). Synthesis of research on teaching writing. Educational Leadership, 44, 71-76.

Humes, A. (1983). Research on the composing process. Review of Educational Research, 53(2), 201-216.


Kluwin, T. (1979). The effects of selected errors on the written discourse of deaf adolescents. Directions, 1(2), 46-53.


Knudson, R. E. (1988). The effects of highly structured versus less structured lessons on student writing. Journal of Educational Research, 81(6), 365-368.


Livingston, S. (1989). Revision strategies of deaf student writers. American Annals of the Deaf, 134, 21-26.


Mather, S. (1989). Visually oriented teaching strategies with deaf preschool children. In C. Lucas (Ed.), The sociolinguistics of the deaf community (pp. 165-190). New York: Academic Press.

Moriarity, D. J. (1978). An investigation of the effects of instruction in five components of the writing process on the quality and syntactic complexity of students' writing (Research Report). Framingham, MA: Framingham State University.

Mullis, I. (1980). Using the primary trait system for evaluating writing. Princeton, NJ: Educational Testing Service.

Soltis, J., & Walberg, H. (1989). Thirteen-year-olds' writing achievements: A secondary analysis of the fourth national assessment of writing. Journal of Educational Research, 83(1), 22-29.

Stuckless, E., & Birch, J. (1966). The influence of early manual communication on the linguistic development of deaf children. American Annals of the Deaf, 111(4), 425-460, 499-504.

Taylor, L. (1969). A language analysis of the writing of deaf children (Final Report). Tallahassee: Department of English, Florida State University.

Thompson, W. (1936). Analysis of errors in written composition by deaf children. American Annals of the Deaf, 81(2), 95-99.

Walter, J. (1955). A study of the written sentence construction of a group of profoundly deaf children. American Annals of the Deaf, 100(3), 235-252.

ABOUT THE AUTHORS

THOMAS N. KLUWIN (CEC #192) is a Professor in the Department of Educational Foundations and Research and ARLENE BLUMENTHAL KELLY is a Research Technician in the Research Institute at Gallaudet University, Washington, DC.

Manuscript received June 1990; revision accepted April 1991.
