Journal of School Psychology 51 (2013) 683–700


Instructional and behavior management practices implemented by elementary general education teachers☆

Linda A. Reddy a,⁎, Gregory A. Fabiano b, Christopher M. Dudek a, Louis Hsu a

a Graduate School of Applied and Professional Psychology, Rutgers University, Piscataway, NJ, USA
b University of Buffalo, USA

Article history: Received 13 August 2012; Received in revised form 12 October 2013; Accepted 14 October 2013
Keywords: Teacher assessment; Teacher behavior

Abstract

This investigation examined 317 general education kindergarten through fifth-grade teachers' use of instructional and behavioral management strategies as measured by the Classroom Strategies Scale (CSS)-Observer Form, a multidimensional tool for assessing classroom practices. The CSS generates frequency of strategy use scores and discrepancy scores reflecting the difference between recommended and actual frequencies of strategy use. Hierarchical linear models (HLMs) suggested that teachers' grade-level assignment was related to their frequency of using instructional and behavioral management strategies: Lower grade teachers utilized more clear 1 to 2 step commands, praise statements, and behavioral corrective feedback strategies than upper grade teachers, whereas upper grade teachers utilized more academic monitoring and feedback strategies, content/concept summaries, student focused learning and engagement, and student thinking strategies than lower grade teachers. Except for the use of praise statements, teachers' usage of instructional and behavioral management strategies was not found to be related to years of teaching experience or to the interaction of years of teaching experience and grade-level assignment. HLMs suggested that teachers' grade level was related to their discrepancy scores of some instructional and behavioral management strategies: Upper grade teachers had higher discrepancy scores in academic performance feedback, behavioral feedback, and praise than lower grade teachers. Teachers' discrepancy scores of instructional and behavioral management strategies were not found to be related to years of teaching experience or to the interaction of years of teaching experience and grade-level assignment. Implications of results for school psychology practice are outlined. © 2013 Published by Elsevier Ltd. on behalf of Society for the Study of School Psychology.

1. Introduction

Teacher accountability is a prominent topic of conversation in educational arenas (Bales, 2006; Reddy, Kettler, & Kurz, submitted for publication). Recent changes in the American education system, including the passage of the No Child Left Behind legislation, have focused attention on general education teachers and their practices and performance in classrooms. At the same time, Response to Intervention (RtI; Fletcher, Lyon, Fuchs, & Barnes, 2007) and Positive Behavioral Intervention and Support (PBIS; http://www.pbis.org; Sugai & Horner, 2002, 2007) frameworks are being integrated into school systems. Both programs heavily emphasize the role of the general education teacher as a key individual who implements best practice interventions for academic instruction, behavior

☆ The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A080337 to Rutgers University. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.
⁎ Corresponding author at: Rutgers University, 152 Frelinghuysen Road, Piscataway, NJ 08854-8085, USA. Tel.: +1 732 289 1365; fax: +1 732 445 4888. E-mail address: [email protected] (L.A. Reddy).
Action Editor: Renee Hawkins
© 2013 Published by Elsevier Ltd. on behalf of Society for the Study of School Psychology. http://dx.doi.org/10.1016/j.jsp.2013.10.001


management, or both. The current United States Secretary of Education recently underscored this emphasis by stating, “The quality of the teacher in the classroom is the single biggest in-school influence on student learning” (Duncan, Gurria, & van Leeuwen, 2011). Thus, what best practices teachers utilize, how they utilize them, and at what level of quality they do so are critical considerations for elementary classrooms.

Perhaps one reason for the continued emphasis on the practices of general education teachers is that general educators may choose from a number of potential approaches to help students learn and ultimately achieve. These choices and the degree to which a teacher uses (or does not use) a chosen strategy can have implications for learning in the classroom. For example, one of the most robust predictors of academic achievement is the provision of academic response opportunities. Academic response opportunities represent chances for the student or students to provide answers, apply concepts, or contribute to group discussions on class content. Research has shown the number of academic response opportunities present in the classroom to be related to student participation and engagement in learning (e.g., Partin, Robertson, Maggin, Oliver, & Wehby, 2010; Stitcher et al., 2009; Sutherland, Adler, & Gunter, 2003; Sutherland, Wehby, & Yoder, 2002; Taylor, Pearson, Peterson, & Rodriguez, 2003). Current research suggests these opportunities should occur frequently, as many as 3 to 4 times per minute (Englert, 1983; Stitcher et al., 2009). In addition to providing these opportunities to respond, teachers must also offer time for students to think about and process academic material (Stitcher et al., 2009). An additional strategy teachers may use to help present and integrate academic content is to frequently review material by summarizing key concepts and lesson content. Concept summaries may activate thinking about prior learning through review, serve as an advance organizer for the present lesson, reinforce learning through summary and repetition, and subsequently improve students' organization and recall of material taught and overall understanding of lesson content (Brophy, 1998; Brophy & Alleman, 1991; Hines, Cruickshank, & Kennedy, 1985; Reddy, Fabiano, Barbarasch, & Dudek, 2012; Rosenshine & Stevens, 1986). Additionally, the quality of academic feedback and the promotion of metacognitive, higher-order thinking (i.e., students' thinking about thinking) can serve as ways of promoting engagement in learning (e.g., Adey & Shayer, 1993; Bangert-Drowns, Hurley, & Wilkinson, 2004; Bender, 2008; Haywood, 2004; Mevarech & Kramarski, 1997; Taylor et al., 2003; What Works Clearinghouse, 2012).

In addition to instruction-related strategies that are proximal to learning, there are classroom management strategies that can also promote effective learning environments (Gable, Hester, Rock, & Hughes, 2009). Multiple studies in the 1960s and 1970s illustrated that teacher attention (following positive behaviors), reprimands (following negative behaviors), and instructions impacted student behavior and rule following (e.g., O'Leary, Kaufman, Kass, & Drabman, 1970).
These behaviors include positive attending strategies such as labeled praise or “catching students being good.” Multiple studies indicate that such contingent attention results in improved classroom behavior and rule-following (e.g., Hall, Panyan, Rabon, & Broden, 1968; Madsen, Becker, & Thomas, 1968; Thomas, Becker, & Armstrong, 1968; Walker & Buckley, 1968; Ward & Baker, 1968; White, 1975). Likewise, corrective feedback in the form of reprimands, informing the child privately and neutrally of misbehavior, or other methods of redirecting (e.g., prompting and preventing misbehavior through routines) can also improve classroom behaviors (e.g., Abramowitz, O'Leary, & Rosen, 1987; Acker & O'Leary, 1987; O'Leary et al., 1970; Rosen, O'Leary, Joyce, Conway, & Pfiffner, 1984). In addition, clear, behaviorally specific instructions and commands result in higher rates of student compliance and follow-through compared to instructions and commands that are vague or unclear (e.g., Forehand & Long, 1996; Walker & Eaton-Walker, 1991).

Based on this long-standing and considerable research literature, these teacher strategies have clear evidence as effective interventions to promote student behavior and learning. However, this literature is limited in some respects. First, these strategies are typically employed in a reciprocal, recursive, and ongoing fashion in classrooms, with multiple combinations of strategies being necessary and dependent on the content and type of lesson (e.g., White, 1975). Studying any single strategy in isolation ignores the fact that teachers typically employ many of these strategies and that some are dependent on one another (e.g., a teacher who issues many vague directives may have to issue more corrective feedback if students are not following the directives). This point is underscored when one considers the ratio of positive, supportive statements to demands or reprimands that occur in the classroom. Ratios of at least three praise statements for every demand or reprimand are often recommended for improving student behavior and academic performance (e.g., Fabiano et al., 2007; Good & Grouws, 1977; Pfiffner, Rosen, & O'Leary, 1985; Stitcher et al., 2009).

Second, there are important developmental considerations that may make some strategies more appropriate for younger ages relative to older ages in school. For example, White (1975) documented a decrease in teachers' use of positive attending strategies starting in the second grade of school. One explanation for this finding could be that as children progress through school and learn routines and expectations, there may be a reduced need for frequent behavior management in some situations (Brophy & Good, 1986). However, it remains unclear how educators' grade-level assignment impacts general instructional and behavioral management practices. In addition, there is a question regarding whether teaching experience may play a role in the use of best practice strategies. Although intuitively it may make sense that more experienced teachers utilize greater amounts of best practices, research findings regarding the effects of teacher experience on strategy use are mixed (Ghaith & Yaghi, 1997; Guskey, 1988), and this area of research is in need of additional study.
This investigation examined general education kindergarten through fifth-grade teachers' use of classroom instructional and behavioral management practices through direct observations with a new teacher assessment tool, the Classroom Strategies Scale (CSS)-Observer Form. One output produced from the CSS-Observer Form is an actual frequency rating of a teacher's use of specific instructional and behavioral management strategies (e.g., providing opportunities to respond; providing corrective feedback to students), as well as a complementary recommended frequency rating of the degree to which each strategy should have been used given the classroom context. To facilitate the development of practice goals, a discrepancy score is calculated between the frequency and recommended frequency ratings. Small discrepancy scores indicate practice appropriate for the observed context, whereas large discrepancy scores suggest areas of instructional practice that may need improvement.


To this end, two major research questions were addressed. The first question concerns the frequency with which teachers use commonly employed general education instructional and behavioral management strategies. The second concerns possible effects of two factors on the frequency of strategy use and on the discrepancy of strategy usage. These factors were (a) grade-level assignment and (b) years of teaching experience. No specific hypotheses were formulated for the first research question due to its descriptive nature. For the second, it was hypothesized that classroom management strategies would be more widely employed at the lower grade levels relative to the upper grade levels. Given the mixed results of previous investigations concerning effects of years of teaching experience (e.g., Ghaith & Yaghi, 1997; Guskey, 1988), we examined the relation of years of teaching experience and of the interaction of grade level and years of teaching experience to educators' use of behavioral and instructional strategies. Because of the nesting of teachers (N = 317) within observers (N = 67), the effects of grade level, of teachers' years of teaching experience, and of the grade level × years of experience interactions on (a) the CSS frequency scores and (b) the discrepancy scores (i.e., |recommended frequency − frequency ratings|) were estimated using hierarchical linear models.

2. Method

2.1. Sample

A sample of 317 general education teachers was observed for the purposes of piloting and validating the CSS version 2.0 as an elementary classroom observation measure. The sample comes from 73 public and private elementary schools located within 39 districts in New Jersey and New York that participated during the 2009 to 2010 school year. School characteristics were collected from the National Center for Educational Statistics Common Core of Data online database for the 2009 to 2010 school year (see Table 1). Teachers were stratified by grade-level assignment and included 60 kindergarten teachers, 48 first-grade teachers, 64 second-grade teachers, 60 third-grade teachers, 41 fourth-grade teachers, and 44 fifth-grade teachers. The teacher sample was composed predominantly of Caucasian (95%) women (92%) with an average age of 39 years (SD = 11.68 years). Within the sample, the average number of students per classroom was 21 (SD = 3.94). Educational degrees of the participating teachers included 40% with a bachelor's degree and 60% with a master's degree. The average number of years of teaching experience was 11.91 (SD = 8.91). Years of teaching experience was grouped into four categories: (a) less than 3 years, (b) 4 to 9 years, (c) 10 to 19 years, and (d) 20 or more years. Similar categories were used by the U.S. Department of Education, National Center for Educational Statistics (2010) and the National Education Association (2010) in annual publications relating to public school teacher characteristics.

Observations were conducted by 67 unique individuals who were either school principals or research staff (i.e., graduate students or project staff) from both the New Jersey and New York sites. A total of 44 school principals (66%) filled out the CSS on 168 teachers (53%). The principals were either Caucasian (97%) or Black (3%), and the sample was predominantly composed of women (75%) with an average age of 46 years (SD = 11.40 years). Principals reported the following educational degrees: 3% with a bachelor's degree, 93% with a master's degree, and 4% with a doctoral degree.

Table 1
Characteristics of participating schools across New Jersey and New York.

Characteristic | Percentage
Type of community
  Suburb: Large | 68.18%
  City: Large | 7.58%
  City: Small | 7.58%
  Rural: Distant | 3.03%
  Rural: Fringe | 7.58%
  Town: Fringe | 4.55%
  Town: Distant | 1.52%
Type of school
  Public | 83.56%
  Private | 16.44%
Students receiving free and reduced lunch
  0% | 17%
  1–24% | 29%
  25–49% | 45%
  Greater than 50% | 9%
Student demographic distribution
  Native American/Alaskan Native | 1.07%
  Asian | 12.64%
  African American/Black | 15.51%
  Hispanic/Latino | 10.40%
  White/Caucasian | 59.54%
  Hawaiian or Asian Pacific Islander | <1%
  Two or more ethnicities | <1%

Note. Percentage values were calculated across all students at each school and then averaged across all schools participating in the study. In accordance with IRB procedures, classroom level student data was not collected.


The 23 research staff observers (34%) filled out the CSS on 149 teachers (47%). The research staff were composed of 13 undergraduate students, 8 graduate students, and 2 of the study authors. Research staff were predominantly women (74%) who were Caucasian (78%), Black (4.55%), Asian (4.5%), Pacific Islander (4.5%), and Middle Eastern (8.5%). The average age of research staff was 24 years (SD = 5.53 years), and their educational degrees included 43% with an associate degree, 43% with a bachelor's degree, 8.5% with a master's degree, and 4.5% with a doctoral degree.

Table 2
Descriptions of the CSS Part 1 strategy counts and Part 2 strategy rating scales.

Part 1: Strategy counts

Concept summaries: A teacher summarizes or highlights key concepts or facts taught during the lesson. Summarization statements are typically brief and clear. This teaching strategy helps students organize and recall material taught.
Academic response opportunities: A teacher creates opportunities for students to provide verbal academic responses (i.e., answers or responds to lesson content questions, summarizes or repeats key points, generates questions, brainstorms ideas, explains answers).
Clear one or two step commands: A teacher-directed verbal instruction that specifically requests a behavior. These commands are clear and direct, and they provide specific instructions to students. They are declarative statements (not questions), describe the desired behavior, and include no more than two steps.
Vague commands: A teacher-directed verbal instruction that is unclear when requesting a behavior. These commands are vague, may be issued as questions, and often include excess verbalizations or more than two steps.
Praise statements: A teacher issues a verbal or nonverbal statement or gesture to provide feedback for a positive or appropriate behavior.
Corrective feedback: A teacher issues a verbal or nonverbal statement or gesture to redirect inappropriate behavior.
Total: The sum of the frequency of the six teacher behaviors.

Part 2: Instructional strategies scales

Total scale: The Total Instructional Strategies scale reflects the overall use of Instructional Methods and Academic Monitoring/Feedback.
Instructional methods composite scale: How classroom instruction occurs. Measures teachers' use of teacher-directed or student-directed methods. This includes how a teacher incorporates active learning techniques such as hands-on learning and collaborative learning in the presentation of lessons as well as how a teacher delivers academic content to students.
Student focus learning & engagement subscale: Strategies for engaging students in the lesson, creating active learners, and encouraging self-initiative in the learning process. These practices encompass direct experience, hands-on instructional techniques, linking lesson content to personal experiences, and cooperative learning strategies.
Instructional delivery subscale: Methods for conveying information to students and strategies employed while teaching lesson content/concepts. These practices include modeling, advance organizers, summarizing, and other instructional methodology.
Academic monitor/feedback composite scale: How teachers monitor students' understanding of the material and provide feedback on their understanding. These strategies assess students' thinking and encourage students to examine their own thought processes. Teachers guide students' understanding by encouraging students, affirming appropriate application of the material, and correcting misperceptions.
Promotes student thinking subscale: Practices for stimulating students' metacognitive and higher-order thinking abilities. They encourage students to critically think about the lesson material (why/how analysis), generate new ideas, and examine their own thought processes.
Academic performance feedback subscale: How teachers provide feedback to students on their understanding of the material. These practices assess teacher efforts to explain what is correct or incorrect with student academic performance.

Part 2: Behavioral management strategies scales

Total scale: The Total Behavioral Management Strategies scale reflects the overall use of Proactive Methods and Behavior Feedback.
Behavior feedback composite scale: How teachers respond to students' appropriate and inappropriate behaviors. This includes the usage of praise to encourage positive behaviors and corrective feedback to redirect negative behaviors.
Praise subscale: Verbal and nonverbal strategies teachers use to praise students for specific appropriate behaviors in the classroom.
Corrective feedback subscale: Verbal and nonverbal strategies teachers use to redirect or correct students' inappropriate behavior in the classroom.
Proactive methods composite scale: Strategies teachers use to promote positive behaviors in the classroom and reduce the likelihood of negative behaviors. These strategies include prompts, routines, reviewing rules, and presenting instructions or requests in a clear manner.
Prevention management subscale: Proactive verbal and nonverbal strategies teachers use to promote positive classroom functioning and establish effective learning environments. These practices include taking actions to prevent problem behaviors from occurring, establishing clear and consistent expectations, and creating a positive atmosphere in the classroom.
Directives/Transitions subscale: Strategies teachers use to communicate their behavioral requests to students and manage the movement and behavior of students during class transitions.


2.2. Measure

Teachers' classroom practices were measured using the Classroom Strategies Scale (CSS)-Observer Form. Historically, behavioral assessment and intervention approaches have utilized classroom observations to enhance student behavior and teacher performance (e.g., Pelham, Fabiano, & Massetti, 2005; Pelham, Greiner, & Gnagy, 1998; Rosen et al., 1984; White, 1975; Ysseldyke & Burns, 2009; Ysseldyke & Elliott, 1999). The CSS-Observer Form builds on these studies by including empirically supported instructional and behavioral management strategies in a single measure (see Table 2). Grounded in research on instructional and behavioral management practices, the CSS is composed of three parts that include items addressing empirically supported strategies (e.g., Bender, 2008; Gable et al., 2009; Kalis, Vannest, & Parker, 2007; Kern & Clemens, 2007; Marzano, 1998; Tomlinson & Edisonson, 2003; Walker, Colvin, & Ramsey, 1995; What Works Clearinghouse, 2012). The CSS-Observer Form is designed to be administered in an ongoing formative assessment context with the intent of helping educators identify empirically supported teaching strategies in their classroom, facilitating educators' development of practice goals related to these strategies, and monitoring progress towards achieving these goals. To facilitate these aims, the CSS-Observer Form includes a three-part assessment that yields complementary and distinct information.

2.2.1. CSS Part 1

During the classroom observation period, the observer completes Part 1 (Strategy Counts) by tallying the frequency of six teacher strategies (see Table 2 for a description of these strategies). Observers note each time an instructional or behavior management strategy is used and whether the strategy was used with an individual student or a group of students (i.e., two or more students).

2.2.2. CSS Part 2

Part 2 (Strategy Rating Scales) consists of an Instructional Strategies (IS) scale and a Behavioral Management Strategies (BMS) scale that are completed after a classroom observation period (see Table 2). The IS scale includes 26 items that compose a total scale, two composite scales, and four subscales. Maximum scale scores reflect the highest possible frequency rating (7) summed across the items in each scale. The Instructional Methods Composite scale (14 items producing a maximum score of 98) consists of the Instructional Delivery subscale (7 items producing a maximum score of 49) and the Student Focus Learning and Engagement subscale (7 items producing a maximum score of 49). The Academic Monitoring/Feedback Composite scale (12 items producing a maximum score of 84) consists of the Promotes Student Thinking subscale (6 items producing a maximum score of 42) and the Academic Performance Feedback subscale (6 items producing a maximum score of 42). The BMS scale includes 23 items that compose a total scale, two composite scales, and four subscales. The Behavioral Feedback Composite scale (11 items producing a maximum score of 77) consists of the Praise subscale (5 items producing a maximum score of 35) and the Corrective Feedback subscale (6 items producing a maximum score of 42). The Proactive Methods Composite scale (12 items producing a maximum score of 84) consists of the Prevention Management subscale (5 items producing a maximum score of 35) and the Directives/Transitions subscale (7 items producing a maximum score of 49). Table 2 lists the IS and BMS scales and their definitions.
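To make the scale composition easier to see at a glance, the minimal sketch below encodes the item counts just listed and derives each maximum score as 7 times the number of items (the top of the 7-point rating scale); the dictionary layout and names are illustrative conveniences, not part of the CSS materials.

```python
# Item counts for the Part 2 subscales listed above; each maximum score is the
# number of items times 7 (the highest rating on the 7-point scale).
PART2_ITEMS = {
    "IS: Instructional Delivery": 7,
    "IS: Student Focus Learning and Engagement": 7,
    "IS: Promotes Student Thinking": 6,
    "IS: Academic Performance Feedback": 6,
    "BMS: Praise": 5,
    "BMS: Corrective Feedback": 6,
    "BMS: Prevention Management": 5,
    "BMS: Directives/Transitions": 7,
}

max_scores = {scale: items * 7 for scale, items in PART2_ITEMS.items()}

# Composite and total maxima follow by summing their subscales.
is_total_max = sum(v for k, v in max_scores.items() if k.startswith("IS"))    # 26 items -> 182
bms_total_max = sum(v for k, v in max_scores.items() if k.startswith("BMS"))  # 23 items -> 161
print(max_scores, is_total_max, bms_total_max)
```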
Both the Part 2 IS and BMS rating scales require observers to fill out a Frequency rating and a Recommended Frequency rating. For the Frequency rating, observers rate how often teachers used specific positive instructional and behavioral management strategies on a 7-point Likert scale (1 = never used, 3 = sometimes used, and 7 = always used). After completing the Frequency rating, observers then rate the Recommended Frequency of the strategies based on the context of the lesson. For the Recommended Frequency, observers rate how often the teachers should have used each strategy on the same 7-point Likert scale (1 = never used, 3 = sometimes used, and 7 = always used). The Part 2 rating scales yield Frequency scores and Discrepancy scores for each scale. Part 2 item/strategy discrepancy scores are computed as follows: |recommended frequency − frequency ratings|. In this study, Part 2 discrepancy (absolute) scores were used to assess whether the observer determined that any change (regardless of direction) was needed in the teacher's classroom practices. The larger the average absolute value discrepancy score, the greater the amount of change needed. For this investigation, frequency and discrepancy scores were separately analyzed.

2.2.3. CSS Part 3

Part 3 (Classroom Checklist) is completed prior to leaving the classroom. The Classroom Checklist assesses the presence of 10 specific classroom structures and procedures, including the posting and specificity of rules, student accomplishments, and charts for monitoring behavioral or academic progress (see Table 6 for a description).

2.2.4. Administration and scoring

At minimum, a single observation can be used to complete the CSS-Observer Form assessment. In the current study, two observations for each teacher were conducted using the CSS-Observer Form. Subsequently, scores were calculated in accordance with CSS procedures for multiple observations. For Part 1, the six teacher strategy counts were averaged across observations 1 and 2. For Part 2, both the frequency and absolute value discrepancy scores were first calculated at the item level for the IS and BMS scales, for classroom observations 1 and 2 separately. IS and BMS scale scores were then calculated for observations 1 and 2 separately by summing the item-level scores of the associated items. We then added the respective scale scores from observation 1 to the corresponding scale scores from observation 2 and divided by 2 to obtain the average scale scores (including the average absolute value discrepancy score) across both observations.
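As a concrete illustration of this scoring procedure, the minimal sketch below computes frequency and absolute discrepancy scale scores for two observations of a hypothetical 5-item subscale and then averages them; the item ratings and variable names are invented for illustration and are not CSS data.

```python
# Minimal sketch of the Part 2 scoring described above; item ratings are invented.

def scale_scores(frequency, recommended):
    """Return (frequency scale score, absolute discrepancy scale score)
    for one observation, given per-item 1-7 ratings."""
    freq_score = sum(frequency)
    disc_score = sum(abs(r - f) for r, f in zip(recommended, frequency))
    return freq_score, disc_score

# Two observations of a hypothetical 5-item subscale (e.g., Praise).
obs1_freq, obs1_rec = [5, 4, 6, 3, 5], [6, 5, 6, 5, 5]
obs2_freq, obs2_rec = [4, 4, 5, 4, 5], [6, 5, 6, 5, 6]

f1, d1 = scale_scores(obs1_freq, obs1_rec)
f2, d2 = scale_scores(obs2_freq, obs2_rec)

# Final scores average the two observations, per the multiple-observation procedure.
avg_frequency = (f1 + f2) / 2      # (23 + 22) / 2 = 22.5
avg_discrepancy = (d1 + d2) / 2    # (4 + 6) / 2 = 5.0
print(avg_frequency, avg_discrepancy)
```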


Aggregates (e.g., sums or means) of absolute values of item discrepancy scores would approximate zero for teachers who consistently use all strategies in a set appropriately, and would gradually increase as teachers deviate from the recommended use of strategies in lessons. This type of information is helpful for identifying teachers who may need professional development and supports. In this study we chose not to aggregate signed discrepancy item scores because of the ambiguity of these aggregates. An item discrepancy score of zero indicates that a teacher's observed use of a strategy matches or equals the recommended use of that strategy; a teacher who obtains an item discrepancy score of approximately zero is using the strategy appropriately. A large positive item discrepancy score indicates that the recommended use of a strategy is much higher than the teacher's observed use of that strategy (i.e., the teacher under-uses that strategy), whereas a large negative item discrepancy score indicates that the recommended use is much below the teacher's observed use (i.e., the teacher over-uses that strategy). An aggregate of signed discrepancy scores, such as a sum or mean of these scores, could therefore be zero or close to zero even when a teacher substantially under-uses some strategies and over-uses others, because positive and negative item discrepancies cancel (e.g., item discrepancies of +3 and −3 sum to zero).

2.2.5. Evidence supporting CSS

The CSS-Observer Form has construct validity evidence supporting it based on extensive school personnel input on directions, items, and scales; feedback from a National Advisory Board composed of experts in the area of instruction and behavior management; decades of evidence-based instructional and behavioral management research; and analysis of its scores (see Reddy, Fabiano, Dudek, & Hsu, 2013a). The Part 2 IS and BMS total scales, composite scales, and subscales are theoretically and factor-analytically derived (using confirmatory factor analysis). The CSS Part 2 IS and BMS total scales have strong internal consistency (Cronbach alpha values of .92–.93). Inter-rater reliability data were randomly collected on 82 cases (approximately 26% of the current sample) for all three parts of the CSS. Two methods were used to estimate inter-rater reliability: (1) Pearson product moment correlations and (2) percent agreement. Pearson product moment correlations between Observer 1 and Observer 2 were calculated for each of the six teacher strategies on the CSS Part 1 and for the Part 1 total score. For the Part 2 rating scales, Pearson product moment correlations between Observer 1 and Observer 2 were calculated for the Part 2 IS and BMS total, composite, and subscale scores. For the Part 3 checklist, a Pearson product moment correlation was calculated for the total checklist score. Percent agreement between Observer 1 and Observer 2 was calculated for the Part 1 Total Strategies, the Part 2 IS and BMS total scale scores, and the Part 3 total score. For Part 1, percent agreement was calculated by counting the total number of cases where Observer 1 and Observer 2 scores were determined to agree and then dividing by the total number of cases in the sample. For the Part 2 IS and BMS total scores, percent agreement was calculated using a similar procedure. First, the absolute value of the difference between Observer 1 and Observer 2 scores on each of the CSS IS and BMS total scales was calculated. Scores were then determined to agree based on a difference threshold of 1 point or less per item in the scale.
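A brief sketch of these two inter-rater indices appears below, assuming hypothetical total scores for five teachers from two observers; the 23-item scale length comes from the BMS description above, while the scores and names are illustrative only (the Pearson helper requires Python 3.10+).

```python
# Illustrative inter-rater reliability computation: Pearson r plus percent agreement
# with the "1 point or less per item" difference threshold described above.
from statistics import correlation  # Pearson r; available in Python 3.10+

n_items = 23                       # BMS total scale length
threshold = 1 * n_items            # agree if |difference| <= 1 point per item

observer1 = [108, 95, 120, 101, 88]   # hypothetical BMS total scores for 5 teachers
observer2 = [105, 99, 145, 100, 90]

r = correlation(observer1, observer2)
agreements = sum(abs(a - b) <= threshold for a, b in zip(observer1, observer2))
percent_agreement = 100 * agreements / len(observer1)
print(round(r, 2), percent_agreement)   # one pair differs by 25 > 23, so 80% agreement
```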
Other classroom observation validation research (Classroom Assessment Scoring System, CLASS; Pianta, La Paro, & Hamre, 2008) as well as large-scale studies (Kane & Staiger, 2012; NICHD Early Child Care Research Network, 2002a, 2002b) have used a difference score of 1 point or less per item to determine percent agreement. The total number of agreements for both the IS and BMS total scale scores was then divided by the total number of cases in the sample to determine percent agreement. The percent agreement for the Part 3 total score followed the same procedure used for the Part 1 calculation.

Using Cicchetti's (1994) guidelines, good inter-rater reliability was found for each of the six Part 1 teacher strategies: Concept Summaries (r = .79, percent agreement 93%), Academic Response Opportunities (r = .93, percent agreement 89%), Clear Commands (r = .92, percent agreement 90%), Vague Commands (r = .80, percent agreement 90%), Praise Statements (r = .86, percent agreement 90%), and Corrective Feedback (r = .97, percent agreement 90%). The Part 1 Total Strategies score yielded good inter-rater reliability (r = .94; percent agreement 92%). The Part 2 (Strategy Rating Scales) IS Total and BMS Total scale scores also yielded fair to good inter-rater reliability (r = .80, percent agreement 92%, and r = .72, percent agreement 88%, respectively), as did the Part 3 Classroom Checklist (r = .86; percent agreement 91%). Reliability estimates are dependent on the type of measurement, and no objective threshold exists for these estimates, only commonly accepted or used values (e.g., Goodwin & Goodwin, 1999; Knapp & Brown, 1995). The inter-rater reliability estimates reported in the present study align with accepted values for other classroom observation assessments in the field, such as the CLASS and the measures used in the Measures of Effective Teaching (MET) Project (Cantrell, 2013; Kane & Staiger, 2012; Pianta et al., 2008). Inter-rater reliability results from the MET Project suggest that reliability estimates should aim to exceed a value of .65 and that estimates greater than .80 are highly reliable (Cantrell, 2013; Kane & Staiger, 2012).

In addition to inter-rater reliability evidence, test–retest reliability evidence (across approximately 2 to 3 weeks) was found to be fair to good (Cicchetti, 1994). Test–retest reliability data were collected on a sample of 57 classrooms from the current study (approximately 18%). In congruence with the current study, retest sample teachers received two initial observations along with the other participants. Approximately 2 to 3 weeks later, the same principals or research staff who conducted the first administration of the CSS-Observer Form returned to the classroom to conduct an additional two observations (observations 3 and 4) for the same teacher. CSS-Observer Form score calculations for observations 3 and 4 followed the same procedures as observations 1 and 2. The averaged results of observations 1 and 2 were compared to the averaged results of observations 3 and 4 using Pearson product moment correlations and percent agreement. The same procedures used to calculate percent agreement for the inter-rater reliability sample were used for the test–retest sample. Fair to good estimates were found for the Part 1 Total Strategies (r = .70, percent agreement 81%), the Part 2 IS and BMS Total scales (r = .86, percent agreement 93%, and r = .80, percent agreement 85%, respectively), and the Part 3 Classroom Checklist (r = .77; percent agreement 81%).
There is also evidence of validity for the CSS scores. The CSS-Observer Form scores were compared to the CLASS, a well-established measure of teacher and classroom quality (Pianta et al., 2008). In a study of 125 teachers in which the CSS and CLASS were completed concurrently, the CSS scales and subscales were found to demonstrate good convergent and discriminant validity with the Classroom Assessment Scoring System domains (Reddy, Fabiano, & Dudek, 2013). Preliminary validity studies have found the CSS scores


sensitive to change following teacher consultation for improving classroom practices (Reddy & Dudek, in press). Hierarchical linear modeling revealed that the CSS IS discrepancy scores predict student mathematics and language arts state-wide testing scores (Reddy, Fabiano, Dudek, & Hsu, 2013b). Finally, differential item functioning analyses have revealed that the Part 2 Strategy Rating Scales and items are free of item bias for important teacher demographic variables (i.e., age, educational degree, and years of teaching experience; Reddy et al., 2013a).

2.3. Procedures

Observers and teachers were recruited as part of a larger validation study using the CSS-Observer Form. The central administrative office for each school district was first contacted to obtain permission to conduct the study. Each individual school in a given district was then contacted to obtain permission to recruit participants and conduct the study. School principals and teachers were informed of the study through flyers and school-based presentations. All participating teachers in the current study volunteered to be observed by their participating principal or by CSS research staff. All participating principals in the current sample volunteered to receive observer training and to conduct observations using the CSS-Observer Form for the teachers participating at their school. Informed consent was obtained from all participating principals and teachers, and research procedures were approved by the Institutional Review Boards at both the New York and New Jersey universities sponsoring the research. All participants signed an agreement form indicating that CSS data could not be used for the purposes of evaluating teachers' job performance.

Observers participated in training prior to observing teachers' strategy use and classroom practices using the CSS. Due to the diverse backgrounds and credentials among the observers in the study, multiple training procedures were available to ensure all observers had a basic level of applied practice in classroom observations. To standardize CSS implementation across sites, observers watched a DVD training video that introduced CSS observation procedures, provided an overview of how ratings are completed, and then showed several classroom examples of teachers displaying specific behaviors assessed by the CSS (e.g., praise statements and academic response opportunities). Following presentation of the DVD training video, observers received two didactic training sessions (2 h each) from a CSS Trainer/Master Coder that included discussion of definitions and criteria; observers then individually practiced coding elementary general education classroom videos to assess observer reliability. Practice coding results were reviewed by the CSS Trainer/Master Coder, and specific feedback was provided to observers to further orient them to the CSS definitions and criteria. During these two training sessions, observers were also oriented to the scientific literature guiding the development of the CSS and the recommended frequencies of these strategies to ensure observers operated with the same knowledge base for judging the Recommended Frequency ratings of the CSS Part 2. Training on the Recommended Frequency of strategies was informed by the effective instruction literature that spans over 60 years (e.g., Brophy & Good, 1986; Creemers, 1994; Gage, 1978; Hattie, 1992; Horner, Sugai, Todd, & Lewis-Palmer, 2000; Kounin, 1970; Marzano, 1998; Marzano, Pickering, & Pollock, 2001; Walberg, 1986; Wang, 1991). For example, the academic and behavioral literatures have indicated that praise statements should be used frequently and consistently (e.g., Alber, Heward, & Hippler, 1999; Beaman & Wheldall, 2000; Sutherland & Wehby, 2001). In particular, praise should be used at a ratio of 3:1 to corrective feedback (i.e., reprimands).
The CSS Academic Response Opportunities strategy, which comes from the opportunity to respond (OTR) literature, should be used at a rate of 3.5 per minute during active instruction (e.g., Partin et al., 2010; Stitcher et al., 2009; Sutherland et al., 2002, 2003). For the current study, observers scheduled two 30-minute observations within seven school days of one another. Observers completed the first observation using the CSS Part 1 and immediately rated the Part 2 IS and BMS items. These steps were repeated for the second observation, and after both observations were completed, the Part 3 classroom checklist was completed. Observers returned their forms independently to the study coordinators at each site.
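The sketch below ties the Part 1 tallies to the two benchmarks just cited (a roughly 3:1 praise-to-corrective-feedback ratio and about 3.5 response opportunities per minute of active instruction); the counts are illustrative values rather than data from any particular classroom, and the 30-minute window simply reflects the observation length used in this study.

```python
# Back-of-the-envelope check of a single 30-minute observation against the
# benchmarks cited above; the tallies themselves are invented for illustration.
OBSERVATION_MINUTES = 30

praise_count = 11                 # tallied praise statements
corrective_count = 9              # tallied corrective feedback statements
response_opportunities = 27       # tallied academic response opportunities

praise_ratio = praise_count / corrective_count                 # benchmark: >= 3.0
otr_per_minute = response_opportunities / OBSERVATION_MINUTES  # benchmark: ~3.5

print(f"praise:corrective = {praise_ratio:.1f}:1, OTR per minute = {otr_per_minute:.1f}")
```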

2.4. Data analytic plan

Because the 317 teachers were nested under 67 observers (principals and research staff), 2-level hierarchical linear models (HLMs; Level 1 units = 317 teachers; Level 2 units = 67 observers) were first used to estimate and test hypotheses about net effects of observers on frequency and discrepancy scores of instructional and behavioral management strategies. The two-level HLMs included random intercepts and fixed slopes. Large and significant intraclass correlations (ICCs) supported the use of HLMs to estimate and test hypotheses about effects of (a) grade level, (b) years of teaching experience, and (c) interactions of grade level and years of teaching experience. As shown in Tables 7, 8, and 9, 35 HLMs were fitted to the data: 7 for Part 1, 14 for Part 2 IS and BMS scale frequency scores, and 14 for Part 2 IS and BMS scale discrepancy scores. An alpha of .05 was used for all tests of statistical significance. The Dunn–Sidak method (see Kirk, 1982, pp. 110–111) was used to determine the individual test alpha levels required to maintain the family-wise error rate (for the family of the 7 planned tests of fixed effects in each HLM) below the conventional alpha level (i.e., p < .05) within each of the 35 HLMs. In particular, the individual test alpha level controlling the family-wise (FW) error rate was < .0073 (FW < .05). Additionally, effect sizes in the form of half-standardized regression coefficients were computed (Hedges, Laine, & Greenwald, 1994). Half-standardized regression coefficients are used for continuous dependent and independent variables and allow comparisons of effects of the same regressor (e.g., grade level) on different outcome measures (e.g., CSS scales). Lindsey and Wilson's (2001) interpretation of Cohen's guidelines (ES ≤ .20 = small, ES = .50 = medium, and ES ≥ .80 = large) was used. For this investigation we interpreted ESs greater than .21 and less than .79 as medium.
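The sketch below illustrates this analytic plan under stated assumptions: a synthetic teacher-level data set stands in for the study data, the column and variable names (score, grade, exp_group, observer) are invented, and statsmodels' MixedLM is used as one possible way to fit a random-intercept, fixed-slope two-level model. It is not the software or exact specification used by the authors.

```python
# Sketch of the analytic plan: Dunn-Sidak per-test alpha, a 2-level random-intercept
# model (teachers nested in observers), and a half-standardized effect size.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Dunn-Sidak alpha for a family of 7 planned tests at a family-wise rate of .05.
family_alpha, k = 0.05, 7
per_test_alpha = 1 - (1 - family_alpha) ** (1 / k)   # ~= .0073, as reported above

# Synthetic stand-in data; column names are illustrative, not from the study.
rng = np.random.default_rng(0)
n = 317
df = pd.DataFrame({
    "score": rng.normal(66, 26, n),                      # e.g., a Part 1 total score
    "grade": rng.integers(0, 6, n),                      # 0 = kindergarten ... 5 = fifth grade
    "exp_group": rng.choice(["<=3", "4-9", "10-19", "20+"], n),
    "observer": rng.integers(0, 67, n),                  # 67 observers (level 2 units)
})

# Random intercepts for observers, fixed slopes; effects-coded experience dummies.
model = smf.mixedlm("score ~ grade * C(exp_group, Sum)", data=df, groups=df["observer"])
result = model.fit()

# Half-standardized effect size: raw coefficient divided by the outcome SD.
es_grade = result.params["grade"] / df["score"].std()
print(round(per_test_alpha, 4), round(es_grade, 2))
```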


Table 3
Descriptive statistics of the CSS Part 1 and Part 2 frequency scores.

Measure | M | SD | Range | Max possible score | % of max score | Skewness (SE) | Kurtosis (SE)

Part 1 Strategy counts
Concept summaries | 5.73 | 5.03 | 0–23.5 | – | – | 1.28 (.14) | 1.20 (.27)
Academic response opportunities | 27.25 | 14.34 | 4–73 | – | – | 0.76 (.14) | 0.12 (.27)
Clear 1 to 2 step commands | 17.09 | 8.77 | 1.5–51 | – | – | 0.92 (.14) | 0.84 (.27)
Vague commands | 3.67 | 3.92 | 0–19 | – | – | 1.55 (.14) | 2.09 (.27)
Praise | 11.36 | 8.33 | 0–58.5 | – | – | 1.90 (.14) | 6.06 (.27)
Corrective feedback | 8.86 | 6.98 | 0.5–50.5 | – | – | 2.22 (.14) | 7.55 (.27)
Total | 66.63 | 26.21 | 10–183 | – | – | 0.86 (.14) | 1.40 (.27)

Part 2 Strategy rating scales
Instructional strategies total scale | 123.71 | 24.99 | 47.50–181 | 182 | 68% | 0.24 (.14) | −0.33 (.27)
Instructional methods composite | 66.19 | 14.45 | 25–98 | 98 | 68% | 0.12 (.14) | −0.43 (.27)
Student focused learning & engagement | 31.31 | 7.79 | 9–49 | 49 | 64% | 0.20 (.14) | −0.56 (.27)
Instructional delivery | 34.81 | 8.13 | 13–49 | 49 | 71% | −0.16 (.14) | −0.59 (.27)
Academic monitoring/feedback composite | 57.47 | 11.90 | 22–84 | 84 | 68% | 0.17 (.14) | −0.38 (.27)
Promotes student thinking | 26.85 | 7.00 | 8–42 | 42 | 64% | 0.13 (.14) | −0.55 (.27)
Academic performance feedback | 30.62 | 6.66 | 10.50–42 | 42 | 73% | −0.28 (.14) | −0.32 (.27)
Behavioral management strategies total scale | 107.97 | 23.83 | 55–159.50 | 161 | 67% | 0.15 (.14) | −0.83 (.27)
Proactive methods composite | 60.35 | 11.85 | 21–84 | 84 | 72% | −0.34 (.14) | −0.28 (.27)
Prevention management | 21.28 | 5.81 | 6.50–35 | 35 | 61% | 0.04 (.14) | −0.59 (.27)
Directives/transitions | 39.03 | 7.35 | 12.50–49 | 49 | 80% | −0.64 (.14) | −0.11 (.27)
Behavior feedback composite | 47.52 | 14.88 | 18–76 | 77 | 62% | 0.12 (.14) | −1.01 (.27)
Praise | 21.38 | 7.99 | 5–35 | 35 | 61% | 0.00 (.14) | −1.02 (.27)
Corrective feedback | 26.09 | 8.03 | 9–42 | 42 | 62% | 0.04 (.14) | −1.10 (.27)

3. Results

3.1. Descriptive statistics of strategy usage

Descriptive statistics for frequency scores from the CSS Part 1 (Strategy Counts) and Part 2 (Strategy Rating Scales) are reported in Table 3. For the CSS Part 1, Academic Response Opportunities were observed as the most frequently used strategy, followed by Clear 1 to 2 Step Commands. Praise Statements was the third most frequently observed strategy, followed by Corrective Feedback. Concept Summaries and Vague Commands were observed as the least frequently used strategies. Comparatively, for the Part 2 IS and BMS rating scales, it is difficult to determine which scales and subscales occurred more frequently because each scale possesses a different number of items and maximum score. Dividing the average of each scale by its maximum score allowed for the comparison of frequency; higher percentages suggest more frequent usage of the items for each scale. As evident in Table 3, teachers possessed similar percent of maximum scores on the IS and BMS total scales (on average 68% and 67%, respectively), suggesting similar frequency of usage of instructional and behavioral management strategies. Within the IS scale, the Academic Performance Feedback (73%) and Instructional Delivery (71%) subscales possessed the highest percent of maximum score. Within the BMS scale, Directives/Transitions (80%) had the highest percent of maximum score.

As shown in Tables 4 and 5, descriptive statistics were computed for the CSS Part 1 (Strategy Counts) and Part 2 (Strategy Rating Scales) by grade level (i.e., kindergarten through fifth grade) and years of teaching experience (i.e., less than 3 years; 4 to 9 years; 10 to 19 years; and 20 or more years). In Table 4, descriptive results suggest trends of increased usage of specific instructional strategies and decreased usage of behavioral management strategies with increased grade-level assignment. In Table 5, descriptive results suggest educators' strategy usage is comparable across years of teaching experience. Table 6 presents a summary of the CSS Part 3 (Classroom Checklist) for the entire sample. In general, the provision and availability of materials were the most commonly observed classroom components (e.g., materials available for completing assignments; presence of tissues and hand sanitizer). Antecedent control approaches (e.g., posting homework assignments) and progress monitoring strategies (e.g., methods for tracking student progress) were observed in fewer classrooms. Overall, when separated by grade level or years of teaching experience, the majority of the classrooms evidenced similar parameters as the entire sample.¹

3.2. Strategy frequency scores by grade and years of teaching experience

Table 7 presents a summary of the HLM analyses for grade-level assignment and frequency of strategy usage as measured by the CSS Part 1 (Strategy Counts). Intraclass correlations (ICCs) ranged from .18 to .57, and all were statistically significant. Grade level was significantly related to educators' frequency of CSS Concept Summaries, Clear 1 to 2 Step Directives, Praise Statements, Corrective Feedback, and Total strategies. Findings indicate that increased grade resulted in more frequent use of Concept Summaries, β = .30, p < .001, ES = .06, and less frequent use of Clear 1 to 2 Step Directives, Praise Statements, Corrective

¹ Descriptive results for the Part 3 Classroom Checklist by grade level or years of teaching experience can be obtained by contacting the first author.


Table 4
Descriptive statistics of the CSS Part 1 and Part 2 frequency scores by grade level. Values are M (SD).

Measure | K (N = 60) | 1 (N = 48) | 2 (N = 64) | 3 (N = 60) | 4 (N = 41) | 5 (N = 44)

Part 1 Strategy counts
Concept summaries | 5.32 (5.17) | 5.58 (4.27) | 5.30 (4.83) | 5.22 (4.41) | 7.68 (6.56) | 5.93 (4.89)
Academic response opportunities | 26.32 (13.32) | 27.94 (13.94) | 29.74 (15.47) | 28.75 (15.78) | 24.88 (14.13) | 24.33 (12.26)
Clear 1 to 2 step commands | 19.78 (8.76) | 19.33 (10.88) | 18.74 (9.62) | 15.42 (6.80) | 14.73 (6.46) | 13.06 (6.87)
Vague commands | 3.76 (3.68) | 3.63 (4.07) | 3.38 (3.34) | 4.32 (4.51) | 3.40 (4.11) | 3.33 (3.99)
Praise statements | 16.06 (10.54) | 14.47 (8.79) | 11.38 (7.86) | 9.47 (5.73) | 9.12 (6.32) | 6.20 (4.70)
Corrective feedback | 11.69 (8.63) | 11.27 (8.63) | 9.71 (5.23) | 8.03 (6.18) | 6.48 (5.62) | 5.93 (4.46)
Total | 75.41 (29.29) | 74.97 (27.80) | 70.50 (26.28) | 62.57 (21.40) | 59.49 (23.88) | 52.13 (19.00)

Part 2 Strategy rating scales
Instructional strategies total | 121.01 (21.73) | 126.00 (27.67) | 122.48 (26.67) | 119.90 (21.42) | 131.79 (25.93) | 124.45 (26.56)
Instructional methods composite | 65.28 (13.92) | 66.92 (15.99) | 65.74 (14.36) | 63.90 (12.51) | 69.96 (15.79) | 66.93 (14.77)
Student focus learning & engagement | 30.38 (7.30) | 30.90 (7.99) | 30.75 (7.87) | 30.41 (7.33) | 34.28 (8.07) | 32.39 (8.12)
Instructional delivery | 34.91 (8.71) | 35.88 (9.18) | 34.99 (7.87) | 33.27 (6.86) | 35.69 (8.51) | 34.55 (7.86)
Academic monitoring/feedback composite | 55.73 (9.28) | 58.95 (12.76) | 56.74 (13.53) | 55.84 (10.29) | 61.83 (11.50) | 57.52 (13.34)
Promotes student thinking | 23.78 (6.00) | 26.93 (7.25) | 26.77 (7.94) | 26.65 (6.09) | 30.34 (5.66) | 28.16 (7.32)
Academic performance feedback | 31.95 (5.55) | 32.02 (7.31) | 29.98 (7.07) | 29.19 (5.98) | 31.49 (6.42) | 29.36 (7.36)
Behavioral management strategies total | 112.12 (23.22) | 113.96 (27.72) | 107.36 (22.98) | 102.21 (20.88) | 107.09 (24.58) | 105.22 (23.35)
Proactive methods composite | 61.07 (12.23) | 61.66 (13.32) | 60.43 (11.45) | 58.37 (11.32) | 60.11 (12.81) | 60.80 (10.23)
Prevention management | 21.24 (6.17) | 22.26 (6.64) | 21.15 (5.92) | 20.39 (4.99) | 21.24 (5.55) | 21.74 (5.55)
Directives/Transitions | 39.83 (7.45) | 39.16 (7.76) | 39.28 (6.89) | 37.98 (7.43) | 39.06 (6.19) | 39.03 (7.35)
Behavioral feedback composite | 51.05 (13.80) | 51.95 (16.36) | 46.93 (14.87) | 43.96 (12.89) | 46.67 (4.24) | 44.16 (16.23)
Praise | 23.86 (7.05) | 23.72 (8.40) | 20.91 (7.91) | 20.13 (7.72) | 20.54 (7.67) | 18.64 (8.38)
Corrective feedback | 27.19 (7.94) | 28.22 (8.64) | 26.02 (8.11) | 24.03 (6.50) | 25.91 (7.65) | 25.28 (9.11)

Feedback, and Total strategies, β = −1.5, p < .001, ES = −.17; β = −1.78, p < .001, ES = −.21; β = −1.29, p < .001, ES = −.19; and β = −4.69, p < .001, ES = −.18, respectively. For reference, half-standardized regression coefficients (ES) can be interpreted as follows: an increase of one grade level is associated with a 0.06 standard deviation increase (on average) in Concept Summaries scores (Hedges et al., 1994). Likewise, an increase of one grade level is associated with a .17 standard deviation decrease (on average) in Clear 1 to 2 Step Directives scores. ES results suggest small effects of grade level on CSS Part 1 scores (Lindsey & Wilson, 2001).
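For concreteness, the half-standardized coefficients reported above can be reproduced (to rounding) by dividing each raw grade-level coefficient from Table 7 by the corresponding outcome standard deviation from Table 3; this worked example assumes that sample SD is the standardizer, in line with Hedges et al. (1994).

```latex
% Worked example: half-standardized effect sizes for two Part 1 outcomes.
\[
ES = \frac{\beta}{SD_{Y}}, \qquad
ES_{\text{Concept Summaries}} = \frac{0.30}{5.03} \approx .06, \qquad
ES_{\text{Clear 1 to 2 Step Commands}} = \frac{-1.5}{8.77} \approx -.17 .
\]
```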

Table 5
Descriptive statistics of the CSS Part 1 and Part 2 frequency scores by years of teaching experience. Values are M (SD).

Measure | 3 or less years (N = 55) | 4–9 years (N = 95) | 10–19 years (N = 97) | 20+ years (N = 65)

Part 1 Strategy counts
Concept summaries | 5.32 (4.84) | 6.50 (5.40) | 5.30 (4.74) | 5.89 (5.09)
Academic response opportunities | 28.46 (14.49) | 27.56 (15.79) | 28.61 (14.99) | 23.64 (10.19)
Clear 1 to 2 step commands | 16.52 (9.15) | 16.87 (7.39) | 18.50 (9.95) | 15.59 (8.49)
Vague commands | 4.07 (3.86) | 4.04 (4.23) | 3.40 (3.64) | 3.28 (4.06)
Praise statements | 11.77 (6.88) | 10.97 (7.65) | 11.66 (10.13) | 11.39 (7.80)
Corrective feedback | 10.98 (8.06) | 8.51 (8.06) | 8.83 (6.47) | 7.79 (4.65)
Total | 68.98 (25.66) | 66.37 (25.20) | 69.51 (30.82) | 61.03 (20.23)

Part 2 Strategy rating scales
Instructional strategies total | 123.95 (20.72) | 123.22 (24.22) | 123.26 (27.34) | 126.10 (26.34)
Instructional methods composite | 66.83 (12.07) | 65.18 (14.08) | 66.24 (15.38) | 67.75 (15.60)
Student focus learning & engagement | 31.58 (6.60) | 30.91 (7.87) | 30.81 (8.10) | 32.76 (8.08)
Instructional delivery | 35.25 (7.30) | 34.15 (7.90) | 35.43 (8.50) | 34.85 (8.62)
Academic monitoring/feedback composite | 57.12 (10.02) | 57.98 (11.57) | 57.02 (13.37) | 58.15 (11.85)
Promotes student thinking | 26.16 (6.41) | 27.41 (6.73) | 26.64 (7.59) | 26.86 (7.11)
Academic performance feedback | 30.96 (5.75) | 30.57 (6.61) | 30.38 (7.43) | 31.28 (6.25)
Behavioral management strategies total | 107.88 (21.27) | 105.88 (22.51) | 108.27 (25.53) | 110.92 (25.44)
Proactive methods composite | 59.45 (10.83) | 59.58 (11.70) | 61.20 (12.52) | 61.50 (11.91)
Prevention management | 21.28 (5.24) | 21.03 (7.32) | 21.27 (6.38) | 21.76 (8.53)
Directives/Transitions | 38.16 (7.22) | 38.39 (7.60) | 39.81 (7.34) | 39.74 (7.02)
Behavioral feedback composite | 48.44 (13.46) | 46.24 (13.76) | 46.95 (16.12) | 49.42 (15.98)
Praise | 21.95 (7.09) | 21.03 (7.32) | 20.91 (9.14) | 22.30 (7.98)
Corrective feedback | 26.48 (7.60) | 25.23 (7.99) | 26.04 (7.97) | 27.12 (8.63)


Table 6
Descriptive statistics of the CSS Part 3 classroom checklist.

Part 3 classroom checklist item | Classrooms with item present

Learning materials/resources
1. Learning materials (e.g., pencils, rulers) and resources (e.g., Internet, encyclopedia, dictionary, books) to complete assignments are available to students. | 98.7%
2. Learning materials and areas in the classroom are labeled. | 64.4%

Classroom structure/organization
3. A procedure or routine exists for students to organize their desks, backpacks, or learning materials. | 86.4%
4. Student work, artwork, and accomplishments are displayed in the classroom. | 81.1%
5. Methods for tracking student academic and/or behavioral progress (e.g., homework tracking chart, rule-following chart, sticker/star chart) are posted. | 65.1%
6. Tissues and hand sanitizers are available to students. | 98.1%
7. Classroom lesson or activity schedule is posted. | 74.1%
8. Assignments (e.g., homework, readings, tests) are clearly posted. | 57.8%

Classroom rules
9. Classroom rules are posted. | 82.6%
10. Classroom rules specify positive behaviors that students should do rather than not do. | 75.7%

Note. Results for the Part 3 Classroom Checklist by grade level or years of teaching experience can be obtained by contacting the first author.

As shown in Table 7, years of teaching experience and the grade level by years of teaching experience interaction were, with one exception, not significantly related to teachers' use of CSS Part 1 strategies. The exception was teachers' use of Praise Statements. Frequency of praise use declined with increasing grade level in all years of teaching experience groups,

Table 7
HLM analysis with CSS Part 1 strategy counts. Entries are β (SE), Z, and p. For each strategy, the three entries under Years of teaching experience and under the Grade × years of teaching experience interaction correspond, in order, to the effects-coded dummy variables for teachers with 3 or fewer, 4 to 9, and 10 to 19 years of experience (see note).

Concept summaries
  Grade: β = 0.30 (0.07), Z = 4.38*, p < .001
  Years of teaching experience: −0.99 (0.52), Z = −1.90, p = .057; 0.44 (0.48), Z = 0.91, p = .358; 0.29 (0.46), Z = 0.62, p = .533
  Grade × years of teaching experience: 0.30 (0.18), Z = 1.66, p = .096; 0.09 (0.16), Z = 0.60, p = .544; −0.33 (0.15), Z = −2.14, p = .032

Academic response opportunities
  Grade: β = −0.42 (0.30), Z = −1.38, p = .167
  Years of teaching experience: 2.10 (2.30), Z = 0.91, p = .361; −1.44 (2.09), Z = −0.69, p = .488; 1.96 (2.05), Z = 0.95, p = .339
  Grade × years of teaching experience: −0.49 (0.81), Z = −0.60, p = .545; 1.13 (0.69), Z = 1.63, p = .102; −0.71 (0.67), Z = −1.05, p = .291

Clear 1 to 2 step commands
  Grade: β = −1.5 (0.18), Z = −8.52*, p < .001
  Years of teaching experience: −0.56 (1.40), Z = −0.40, p = .686; −0.12 (1.26), Z = −0.09, p = .924; 2.42 (1.24), Z = 1.94, p = .052
  Grade × years of teaching experience: 0.45 (0.49), Z = 0.92, p = .356; 0.36 (0.41), Z = 0.87, p = .384; −0.52 (0.40), Z = −1.28, p = .198

Vague commands
  Grade: β = −0.13 (0.07), Z = −1.78, p = .075
  Years of teaching experience: 0.74 (0.54), Z = 1.36, p = .173; −0.44 (0.50), Z = −0.89, p = .374; 0.09 (0.48), Z = 0.20, p = .840
  Grade × years of teaching experience: −0.20 (0.19), Z = −1.08, p = .279; 0.21 (0.16), Z = 1.26, p = .204; 0.01 (0.16), Z = 0.08, p = .930

Praise statements
  Grade: β = −1.78 (0.17), Z = −10.31*, p < .001
  Years of teaching experience: −1.17 (1.30), Z = −0.90, p = .367; −0.33 (1.17), Z = −0.28, p = .778; 3.08 (1.15), Z = 2.66, p = .008
  Grade × years of teaching experience: 0.64 (0.46), Z = 1.38, p = .165; 0.08 (0.38), Z = 0.21, p = .827; −1.02 (0.38), Z = −2.67*, p = .007

Corrective feedback
  Grade: β = −1.29 (0.15), Z = −8.22*, p < .001
  Years of teaching experience: 2.31 (1.93), Z = 1.94, p = .052; 0.21 (1.06), Z = 0.20, p = .838; −0.21 (1.05), Z = −0.20, p = .838
  Grade × years of teaching experience: −0.10 (0.42), Z = −0.24, p = .803; −0.21 (0.35), Z = −0.61, p = .541; −0.07 (0.34), Z = −0.20, p = .840

Total
  Grade: β = −4.69 (0.52), Z = −8.87*, p < .001
  Years of teaching experience: 0.99 (3.97), Z = 0.25, p = .802; −0.72 (3.61), Z = −0.20, p = .842; 7.29 (3.54), Z = 2.05, p = .040
  Grade × years of teaching experience: 1.03 (1.40), Z = 0.73, p = .464; 1.24 (1.19), Z = 1.03, p = .300; −2.72 (1.16), Z = −2.34, p = .019

Note. p values are for individual tests; the Dunn–Sidak method was used to maintain the family-wise error rate (FW) below .05; * denotes FW < .05. The three Years of Teaching Experience entries are effects-coded dummy variables: the first β estimates the distance of the intercept of the regression line for teachers with 3 or fewer years of experience from the average intercept of all Years of Teaching Experience groups at Grade = 0 (i.e., kindergarten); the second and third β are the corresponding estimates for teachers with 4 to 9 and with 10 to 19 years of experience, respectively. Each interaction entry involves an effects-coded dummy variable for Years of Teaching Experience: the first β estimates the difference between (a) the effect of Grade Level on the usage of a strategy for teachers with 3 or fewer years of experience and (b) the average effect of Grade Level across all Years of Teaching Experience groups; the second and third β are the corresponding estimates for teachers with 4 to 9 and with 10 to 19 years of experience, respectively.


Table 8 presents a summary of the HLM analyses for grade level and strategy usage as measured by the CSS Part 2 (Strategy Rating Scale) frequency ratings. ICCs ranged from .40 to .55, and all were statistically significant. HLM analyses indicated that grade level was significantly related to educators' use of instructional and behavioral management strategies as measured by the CSS Part 2 (Strategy Rating Scales). For the CSS IS scale, findings indicated that increased grade level was associated with higher scores (increased strategy usage) on the Student Focus Learning and Engagement scale (β = .47, p < .001, ES = .06), the Academic Monitoring/Feedback composite (β = .45, p = .006, ES = .04), and its associated Promotes Student Thinking scale (β = .82, p < .001, ES = .11). Increased grade level was also associated with lower scores (decreased strategy usage) on the Academic Performance Feedback scale (β = −.37, p < .001, ES = .06). As evident in Table 8, none of the tests of the years of teaching experience effects or the grade level by years of teaching experience interaction effects were statistically significant.

For the BMS scales, findings indicated that increased grade level was associated with lower scores (decreased strategy usage) on the BMS Total (β = −1.67, p < .001, ES = .07) and the Behavioral Feedback composite (β = −1.41, p < .001, ES = −.01) and its associated Praise scale (β = −.99, p < .001, ES = −.12) and Corrective Feedback scale (β = −.45, p < .01, ES = −.05). The HLM results also indicated that years of teaching experience and the grade level by years of teaching experience interaction dummy variables were not significantly related to strategy usage on the BMS scales. Due to marginal maximum likelihood estimation difficulties encountered with HLMs for the IS Instructional Delivery and BMS Directives/Transitions scales that included interaction terms, the interaction terms were excluded from the HLMs for these two scales; thus, no estimates or hypothesis tests of grade level by years of teaching experience interaction effects are available for them.

3.3. Strategy discrepancy scores by grade level and years of teaching experience

Table 9 presents a summary of the HLM analyses for grade-level assignment and discrepancy scores as measured by the CSS Part 2 (Strategy Rating Scale). HLM analyses indicated that, in general, grade-level assignment was significantly related to educators' discrepancy scores for instructional and behavioral management strategies. For the CSS IS scale, findings indicated that increased grade-level assignment was associated with higher discrepancy scores (increased need for change) on the Academic Performance Feedback scale (β = .20, p = .006, ES = .06). As evident in Table 9, none of the tests of the years of teaching experience effects or the grade level by years of teaching experience interaction effects were significant. For the BMS scales, findings indicated that increased grade-level assignment was associated with higher discrepancy scores (increased need for change) on the Behavioral Feedback composite (β = .61, p < .001, ES = .08) and its associated Praise scale (β = .46, p < .001, ES = .10). HLM results also revealed that teachers' discrepancy scores for behavioral management strategies were not related to years of teaching experience or to the grade level by years of teaching experience interaction dummy variables.
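The ICCs reported above for the Part 2 models can be obtained from an intercept-only (null) random-effects model as the share of outcome variance lying between grouping units. A minimal sketch is given below; the column names and the grouping variable are assumptions for illustration, since the study's data structure is not restated in this section.

```python
# Illustrative ICC computation from an intercept-only mixed model; the column
# names and grouping unit are assumptions, not the authors' actual variables.
import pandas as pd
import statsmodels.formula.api as smf


def intraclass_correlation(df: pd.DataFrame, outcome: str, group: str) -> float:
    """ICC = between-group variance / (between-group + within-group variance)."""
    result = smf.mixedlm(f"{outcome} ~ 1", data=df, groups=df[group]).fit()
    between = float(result.cov_re.iloc[0, 0])  # random-intercept variance
    within = float(result.scale)               # residual variance
    return between / (between + within)


# An ICC of .40 to .55, as reported for the CSS Part 2 frequency ratings, means
# that 40-55% of the score variance lies between grouping units rather than
# within them.
```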
4. Discussion

This investigation examined general education teachers' use of classroom instructional and behavioral management practices in elementary school, as measured by the CSS-Observer Form, and used HLM analyses to relate educators' grade-level assignment, years of teaching experience, and the interaction of grade and years of teaching experience to strategy usage. Overall, general education teachers' frequency of general classroom practices was lower than the rates recommended by the instructional and behavioral management research literature (e.g., Good & Grouws, 1977; Pfiffner et al., 1985; Stitcher, Lewis, Richter, Johnson, & Bradley, 2006). Although years of teaching experience and the interaction between grade level and years of teaching experience did not relate to strategy usage as measured by the CSS (with the exception of Part 1 Praise Statements), educators' grade-level assignment was related to the frequency at which these strategies were employed. Results offer directions for school-based practice.

4.1. Teachers' natural strategy usage

The CSS Part 1 category Academic Response Opportunities encompasses the prompts described in the opportunity to respond (OTR) literature (e.g., Partin et al., 2010; Stitcher et al., 2009; Sutherland et al., 2003). For this CSS Part 1 strategy, an average total of 27.25 prompts per 30 min (a rate of 0.91 prompts per minute) was found. This finding contrasts with research by Englert (1983) and Sutherland et al. (2003), which recommends that special educators use 3.5 OTR prompts per minute as an optimal rate for improving student outcomes. Stitcher et al. (2009) reported a considerably higher rate (i.e., 2.61 OTR prompts per minute) in a sample of 35 general education elementary school teachers. The present results suggest that prompts occur in general education settings at roughly a quarter of the rate recommended for special educators, and this larger sample suggests more modest use of prompts than that reported by Stitcher et al. (2009).

Overall, Praise statements were found to occur at a greater frequency than Corrective Feedback. Although this direction is desirable, the observed ratio of praise to corrective feedback across grade levels (approximately 1:1) fell short of the longstanding recommended ratios of 3:1 and 4:1 for improving student behavior and academic performance (e.g., Good & Grouws, 1977; Pfiffner et al., 1985; Stitcher et al., 2009). Comparing observed Praise to the total number of behavioral requests (i.e., Corrective Feedback, Clear 1 to 2 Step Commands, and Vague Commands) showed teachers praising at a ratio of approximately two praise statements for every five demands. Similarly, the ratio of observed feedback (i.e., Praise and Corrective Feedback) to the observed number of OTR prompts (i.e., Academic Response Opportunities, Clear 1 to 2 Step Commands, and Vague Commands) was approximately 1:4, suggesting teachers in this investigation provided feedback for only about 25% of the total OTR prompt opportunities presented.


Table 8
HLM analysis with CSS Part 2 strategy rating scales frequency scores.

| Part 2 scale | Experience group | Grade: β (SE), Z, p^a | Years of teaching experience (effects-coded dummy variables)^b: β (SE), Z, p^a | Grade × years of teaching experience^c: β (SE), Z, p^a |
|---|---|---|---|---|
| Part 2 Instructional strategies scales | | | | |
| Total scale | 3 or fewer yrs | 0.74 (0.33), 2.21, .027 | −1.85 (3.34), −0.55, .580 | 1.01 (1.19), 0.84, .397 |
| | 4–9 yrs | | −2.23 (3.11), −0.71, .474 | 1.09 (1.03), 1.05, .292 |
| | 10–19 yrs | | 1.08 (2.94), 0.36, .713 | −0.97 (0.98), −0.99, .320 |
| Instructional methods composite | 3 or fewer yrs | 0.29 (0.20), 1.44, .148 | −0.69 (2.02), −0.34, .733 | 0.55 (0.72), 0.75, .448 |
| | 4–9 yrs | | −2.91 (1.88), −1.54, .122 | 1.05 (0.62), 1.69, .091 |
| | 10–19 yrs | | 0.97 (1.78), 0.54, .584 | −0.62 (0.59), −1.05, .290 |
| Student focus learning & engagement | 3 or fewer yrs | 0.47 (0.10), 4.53*, <.001 | 0.08 (1.03), 0.08, .936 | 0.04 (0.37), 0.13, .895 |
| | 4–9 yrs | | −1.41 (0.96), −1.46, .143 | 0.51 (0.32), 1.59, .111 |
| | 10–19 yrs | | −0.19 (0.91), −0.21, .830 | −0.19 (0.30), −0.64, .517 |
| Instructional delivery | 3 or fewer yrs | −0.18 (0.23), −0.78, .434 | 0.33 (0.77), 0.43, .661 | not estimated^d |
| | 4–9 yrs | | −0.34 (0.63), −0.54, .586 | |
| | 10–19 yrs | | 0.10 (0.63), 0.15, .875 | |
| Academic monitoring/feedback composite | 3 or fewer yrs | 0.45 (0.16), 2.74*, .006 | −1.13 (1.65), −0.68, .493 | 0.45 (0.59), 0.76, .445 |
| | 4–9 yrs | | 0.48 (1.57), 0.31, .751 | 0.08 (0.50), 0.15, .874 |
| | 10–19 yrs | | 0.12 (1.45), 0.08, .932 | −0.35 (0.48), −0.72, .466 |
| Promotes student thinking | 3 or fewer yrs | 0.82 (0.09), 8.38*, <.001 | −0.66 (0.98), −0.68, .497 | 0.32 (0.35), 0.93, .350 |
| | 4–9 yrs | | 0.23 (0.90), 0.26, .794 | 0.14 (0.30), 0.47, .633 |
| | 10–19 yrs | | −0.43 (0.86), −0.49, .619 | −0.08 (0.28), −0.29, .770 |
| Academic performance feedback | 3 or fewer yrs | −0.37 (0.10), −3.68*, <.001 | −0.42 (1.00), −0.41, .675 | 0.10 (0.35), 0.29, .772 |
| | 4–9 yrs | | 0.10 (0.92), 0.11, .906 | −0.02 (0.30), −0.06, .946 |
| | 10–19 yrs | | 0.57 (0.88), 0.65, .511 | −0.27 (0.29), −0.94, .343 |
| Part 2 Behavioral management strategies scales | | | | |
| Total scale | 3 or fewer yrs | −1.67 (0.44), −3.79*, <.001 | −4.56 (3.30), −1.38, .166 | 2.13 (1.16), 1.82, .068 |
| | 4–9 yrs | | −0.63 (3.03), −0.21, .834 | 0.05 (1.00), 0.05, .960 |
| | 10–19 yrs | | 1.96 (2.97), 0.65, .510 | −0.46 (0.97), −0.47, .636 |
| Proactive methods composite | 3 or fewer yrs | −0.28 (0.25), −1.12, .261 | −1.97 (1.88), −1.05, .293 | 0.61 (0.66), 0.92, .353 |
| | 4–9 yrs | | −0.42 (1.70), −0.25, .803 | 0.18 (0.56), 0.31, .750 |
| | 10–19 yrs | | 0.65 (1.69), 0.38, .698 | −0.18 (0.55), −0.32, .742 |
| Prevention management | 3 or fewer yrs | −0.08 (0.12), −0.67, .499 | −1.06 (0.91), −1.16, .244 | 0.43 (0.32), 1.34, .179 |
| | 4–9 yrs | | 0.43 (0.83), 0.52, .598 | p = .640 |
| | 10–19 yrs | | −0.12 (0.84), −0.15, .878 | p = .895 |
| Directives/Transitions | 3 or fewer yrs | −0.18 (0.22), −0.81, .416 | −0.56 (0.73), −0.77, .440 | not estimated^d |
| | 4–9 yrs | | −0.07 (0.60), −0.12, .903 | |
| | 10–19 yrs | | 0.29 (0.61), 0.48, .630 | |
| Behavioral feedback composite | 3 or fewer yrs | −1.41 (0.26), −5.41*, <.001 | −2.55 (1.95), −1.30, .191 | 1.47 (0.69), 2.13, .033 |
| | 4–9 yrs | | 0.06 (1.78), 0.03, .972 | −0.16 (0.59), −0.28, .776 |
| | 10–19 yrs | | 0.83 (1.74), 0.47, .633 | −0.19 (0.57), −0.33, .738 |
| Praise | 3 or fewer yrs | −0.99 (0.14), −6.81*, <.001 | −1.87 (1.08), −1.72, .084 | 0.91 (0.38), 2.37, .018 |
| | 4–9 yrs | | 0.37 (0.99), 0.37, .706 | 0.00 (0.32), 0.02, .979 |
| | 10–19 yrs | | 0.67 (0.96), 0.70, .483 | −0.24 (0.31), −0.77, .438 |
| Corrective feedback | 3 or fewer yrs | −0.45 (0.15), −2.90*, .004 | −0.76 (1.15), −0.66, .509 | 0.59 (0.40), 1.45, .147 |
| | 4–9 yrs | | −0.39 (1.05), −0.37, .711 | −0.15 (0.34), −0.44, .659 |
| | 10–19 yrs | | 0.21 (1.02), 0.20, .838 | 0.00 (0.33), 0.02, .981 |

Note. ^a p values are for individual tests; the Dunn–Sidak method was used to maintain the family-wise error rate (FW) below .05; * denotes FW < .05. ^b Effects-coded dummy variables representing years of teaching experience groups: each β estimates the distance of the intercept of the regression line for the indicated experience group (3 or fewer, 4 to 9, or 10 to 19 years) from the average intercept of all years of teaching experience groups at Grade = 0 (i.e., kindergarten). ^c Each interaction β estimates the difference between (a) the effect of grade level on usage of the strategy for the indicated experience group and (b) the average effect of grade level across all years of teaching experience groups. ^d No estimates or hypothesis tests for grade level × years of teaching experience interaction effects are available for this scale due to marginal maximum likelihood estimation difficulties encountered with the HLMs.

Findings of low rates of observed praise in this study are consistent with prior research (e.g., Gunter & Denny, 1998; Shores, Gunter, & Jack, 1993; Sutherland & Wehby, 2001; Sutherland et al., 2002).

The CSS Part 2 IS and BMS scales assessed the frequency with which specific evidence-based instruction and behavior management strategies were used. Comparing mean scores on the eight subscales to the maximum score for each revealed that teachers utilized the evidence-based strategies approximately 60% to 70% of the time. This finding is a positive one, given that the frequency at which teachers implement the strategies associated with these eight subscales has been linked in the effective instruction literature to positive student outcomes (e.g., Creemers, 1994; Gable et al., 2009; Marzano, 1998; Tomlinson & Edisonson, 2003; Wang, Haertel, & Walhberg, 1993; Wenglinsky, 2002).
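The rates and ratios discussed in this section follow from simple arithmetic on the Part 1 counts. The sketch below reproduces that arithmetic; only the Academic Response Opportunities average (27.25 prompts per 30 min) is taken from the text, and the remaining per-category means are illustrative placeholders rather than reported values.

```python
# Worked arithmetic for the rates and ratios in Section 4.1. Only the Academic
# Response Opportunities mean (27.25 per 30 min) comes from the text; the other
# per-category means below are placeholders for illustration.
OBSERVATION_MINUTES = 30

counts = {
    "academic_response_opportunities": 27.25,  # reported above
    "clear_1_2_step_commands": 10.0,           # placeholder
    "vague_commands": 2.0,                     # placeholder
    "praise": 5.0,                             # placeholder
    "corrective_feedback": 5.0,                # placeholder
}

# OTR prompt rate per minute (27.25 / 30 = 0.91, as reported).
otr_rate = counts["academic_response_opportunities"] / OBSERVATION_MINUTES

# Praise-to-corrective-feedback ratio (observed at roughly 1:1, versus the
# recommended 3:1 to 4:1).
praise_to_corrective = counts["praise"] / counts["corrective_feedback"]

# Praise relative to all behavioral requests (observed at roughly 2 per 5 demands).
demands = (counts["corrective_feedback"] + counts["clear_1_2_step_commands"]
           + counts["vague_commands"])
praise_to_demands = counts["praise"] / demands

# Feedback (praise + corrective) relative to OTR prompts (observed at roughly 1:4).
prompts = (counts["academic_response_opportunities"]
           + counts["clear_1_2_step_commands"] + counts["vague_commands"])
feedback_to_prompts = (counts["praise"] + counts["corrective_feedback"]) / prompts

print(round(otr_rate, 2), round(praise_to_corrective, 2),
      round(praise_to_demands, 2), round(feedback_to_prompts, 2))
```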

4.2. Grade level and years of teaching experience

Using HLM analyses, the present investigation found that grade-level assignment was related to general education kindergarten through fifth-grade teachers' use of some instructional and behavioral management strategies. Teachers assigned to lower grades used the CSS Part 1 teacher behavior of praise at a greater frequency than teachers assigned to the upper grades. Analyses of the CSS Part 2 Praise subscale likewise revealed greater frequency for teachers in the lower grades than for teachers in the upper grades. Similarly, the strategies associated with the IS Part 2 Academic Performance Feedback subscale, which measured aspects of praise specifically related to academics, occurred at greater frequencies for teachers in the lower grades than for teachers in the upper grades. HLM results also revealed that upper grade teachers had higher discrepancy scores (i.e., greater need for change) in Academic Performance Feedback, Behavioral Feedback, and Praise than lower grade teachers. Overall, findings suggest that elementary school teachers use praise statements less as students become older, replicating previous research with a contemporary sample (Brophy & Good, 1986; White, 1975).

Teachers assigned to lower grades were also observed using corrective feedback more often, on both the CSS Part 1 Corrective Feedback category and the Part 2 Corrective Feedback subscale, than those assigned to upper grades. Because Praise and Corrective Feedback are complementary strategies that guide student behavior, it is not surprising that the lower grades showed increased use of both praise and corrective feedback compared to the upper grades. In this study, however, the ratio of praise to corrective feedback was consistent (i.e., 1:1) across grade levels.

Overall, educators in this study were observed delivering higher rates of commands (verbal requests as measured on the CSS Part 1) to students in lower grades relative to students in higher grades. From a developmental perspective, it is possible that younger students had less familiarity with classroom and school routines and therefore required more verbal guidance from their teachers. These findings may also reflect educators' higher behavioral expectations that older elementary students will navigate instructional contexts more independently. Consistent with these findings, teachers assigned to lower grades have been found to utilize more developmentally appropriate, "child-focused" practices (i.e., partner activities, small groups, individual learning centers, and experiential learning) compared to higher grade teachers (e.g., Bredekamp, 1989; Buchannan, Burts, White, & Charlesworth, 1998; Stipek & Byler, 1997). Because student behavior rates were not collected as part of this study, it is not possible to know whether decreased usage of BMS strategies in the upper grades is a response to improved student behavior or to some change in teacher expectations and practice. Given that research has found improved student functioning when BMS strategies are employed (Fabiano et al., 2007), a combination of factors may be contributing to these findings.

In this study, teachers' usage of metacognitive and critical thinking strategies, as measured by the Part 2 Promotes Student Thinking subscale, increased with grade level. Thus, teachers implement metacognitive and critical thinking strategies to greater degrees as children become older.
In contrast to the current findings, Santuli (1991) did not observe grade-level differences when investigating second-grade and fifth-grade teachers' use of metacognitive suggestions (e.g., comments that encourage students to reflect on their learning process) versus direct strategies (e.g., goal-oriented activities students can perform to complete a task) during mathematics instruction (Moely, Santulli, & Obach, 1995).



Table 9
HLM analysis with CSS Part 2 strategy rating scales discrepancy scores.

| Part 2 scale | Experience group | Grade: β (SE), Z, p^a | Years of teaching experience (effects-coded dummy variables)^b: β (SE), Z, p^a | Grade × years of teaching experience^c: β (SE), Z, p^a |
|---|---|---|---|---|
| Part 2 Instructional strategies scales | | | | |
| Total scale | 3 or fewer yrs | 0.48 (0.29), 1.62, .104 | 3.92 (2.23), 1.38, .165 | −0.71 (0.79), −0.90, .366 |
| | 4–9 yrs | | 0.64 (2.05), 0.31, .754 | −0.65 (0.68), −0.96, .333 |
| | 10–19 yrs | | −1.54 (2.00), −0.77, .439 | 0.37 (0.65), 0.57, .566 |
| Instructional methods composite | 3 or fewer yrs | 0.24 (0.17), 1.45, .146 | 1.64 (1.27), −1.29, .196 | −0.33 (0.45), −0.74, .458 |
| | 4–9 yrs | | 1.06 (1.17), 0.91, .362 | −0.56 (0.38), −1.44, .147 |
| | 10–19 yrs | | −1.26 (1.14), −0.11, .265 | 0.30 (0.37), 0.80, .418 |
| Student focus learning & engagement | 3 or fewer yrs | 0.14 (0.09), 1.53, .123 | 0.55 (0.68), 0.81, .413 | −0.15 (0.24), −0.64, .516 |
| | 4–9 yrs | | 0.50 (0.62), 0.80, .423 | −0.29 (0.20), −1.44, .149 |
| | 10–19 yrs | | −0.47 (0.61), −0.77, .441 | 0.15 (0.19), 0.76, .446 |
| Instructional delivery | 3 or fewer yrs | 0.10 (0.09), 1.04, .294 | 1.04 (0.71), 1.45, .146 | −0.16 (0.25), −0.65, .509 |
| | 4–9 yrs | | 0.67 (0.66), 1.02, .306 | −0.29 (0.21), −1.35, .174 |
| | 10–19 yrs | | −0.83 (0.64), −1.30, .192 | 0.16 (0.21), 0.77, .437 |
| Academic monitoring/feedback composite | 3 or fewer yrs | 0.21 (0.14), 1.50, .131 | 1.42 (1.09), 1.30, .191 | −0.36 (0.38), −0.95, .339 |
| | 4–9 yrs | | −0.28 (0.99), −0.29, .770 | −0.15 (0.32), −0.47, .638 |
| | 10–19 yrs | | −0.29 (0.97), −0.30, .760 | 0.08 (0.31), 0.26, .788 |
| Promotes student thinking | 3 or fewer yrs | 0.01 (0.08), 0.12, .901 | 0.83 (0.64), 1.30, .193 | −0.17 (0.22), −0.76, .442 |
| | 4–9 yrs | | 0.01 (0.58), 0.02, .983 | −0.21 (0.19), −1.21, .262 |
| | 10–19 yrs | | −0.24 (0.57), −0.42, .673 | 0.03 (0.18), 0.19, .848 |
| Academic performance feedback | 3 or fewer yrs | 0.20 (0.07), 2.71*, .006 | 0.57 (0.57), 0.98, .323 | −0.19 (0.20), −0.93, .352 |
| | 4–9 yrs | | −0.28 (0.52), −0.54, .585 | 0.05 (0.17), 0.33, .737 |
| | 10–19 yrs | | −0.06 (0.51), 0.11, .905 | 0.05 (0.16), 0.30, .761 |
| Part 2 Behavioral management strategies scales | | | | |
| Total scale | 3 or fewer yrs | 0.56 (0.29), 1.90, .056 | 5.23 (2.22), 2.34, .018 | −1.44 (0.78), −1.82, .067 |
| | 4–9 yrs | | −1.35 (2.02), −0.66, .503 | 0.22 (0.66), 0.33, .738 |
| | 10–19 yrs | | −2.25 (2.03), −1.11, .266 | 0.72 (0.66), 1.09, .271 |
| Proactive methods composite | 3 or fewer yrs | −0.05 (0.15), −0.31, .750 | 2.76 (1.19), 2.31, .020 | −0.69 (0.42), −1.62, .103 |
| | 4–9 yrs | | −0.63 (1.08), −0.58, .556 | 0.10 (0.35), 0.29, .764 |
| | 10–19 yrs | | −1.10 (1.09), −1.01, .309 | 0.37 (0.35), 1.04, .297 |
| Prevention management | 3 or fewer yrs | −0.04 (0.07), −0.54, .586 | 1.53 (0.57), 2.67, .007 | −0.47 (0.20), −2.34, .018 |
| | 4–9 yrs | | −0.48 (0.51), −0.94, .345 | 0.08 (0.17), 0.49, .623 |
| | 10–19 yrs | | −0.53 (0.51), −1.03, .300 | 0.23 (0.16), 1.41, .157 |
| Directives/Transitions | 3 or fewer yrs | −0.01 (0.10), −0.12, .903 | 1.20 (0.77), 1.55, .120 | −0.19 (0.27), −0.71, .476 |
| | 4–9 yrs | | −0.13 (0.70), −0.18, .852 | 0.01 (0.23), 0.06, .946 |
| | 10–19 yrs | | −0.51 (0.69), −0.73, .463 | 0.11 (0.22), 0.49, .621 |
| Behavioral feedback composite | 3 or fewer yrs | 0.61 (0.16), 3.76*, <.001 | 2.44 (1.23), 1.98, .047 | −0.74 (0.43), −1.70, .088 |
| | 4–9 yrs | | −0.67 (1.11), −0.60, .544 | 0.11 (0.36), 0.30, .759 |
| | 10–19 yrs | | −1.19 (1.09), −1.08, .276 | 0.36 (0.36), 1.01, .310 |
| Praise | 3 or fewer yrs | 0.46 (0.10), 4.38*, <.001 | 1.09 (0.80), 1.35, .174 | −0.37 (0.28), −1.30, .190 |
| | 4–9 yrs | | −0.16 (0.72), −0.02, .822 | −0.01 (0.23), −0.07, .943 |
| | 10–19 yrs | | −0.66 (0.71), −0.93, .350 | 0.22 (0.23), 0.96, .333 |
| Corrective feedback | 3 or fewer yrs | 0.13 (0.07), 1.73, .083 | 1.32 (0.58), 2.24, .024 | −0.36 (0.20), −1.74, .081 |
| | 4–9 yrs | | −0.48 (0.53), −0.90, .362 | 0.11 (0.17), 0.66, .507 |
| | 10–19 yrs | | −0.49 (0.52), −0.94, .347 | 0.14 (0.17), 0.82, .406 |

Note. ^a p values are for individual tests; the Dunn–Sidak method was used to maintain the family-wise error rate (FW) below .05; * denotes FW < .05. ^b Effects-coded dummy variables representing years of teaching experience groups: each β estimates the distance of the intercept of the regression line for the indicated experience group (3 or fewer, 4 to 9, or 10 to 19 years) from the average intercept of all years of teaching experience groups at Grade = 0 (i.e., kindergarten). ^c Each interaction β estimates the difference between (a) the effect of grade level on discrepancy scores for the indicated experience group and (b) the average effect of grade level across all years of teaching experience groups.

No differences were found between the second- and fifth-grade teachers' frequency of metacognitive suggestions, although fifth-grade teachers did use more direct strategies. It was also apparent from our results that teachers of higher grades utilized more instructionally focused strategies (e.g., concept summaries) relative to teachers of younger grades. Similarly, Moely et al. (1992) examined the use of metacognitive and memory knowledge instructional practices among 69 kindergarten through sixth-grade teachers. In contrast to the current study and Santuli (1991), Moely et al. (1992) found teachers used cognitive strategies more often in grades 2 and 3 than in lower and higher grades. The differences across studies thus suggest some variation in the use of these strategies; future research that employs multiple measures and multiple grade levels is needed to determine the use of these specific strategies and their contributions to student outcomes.

Interestingly, the present study confirmed teachers' usage of metacognitive strategies in the lower grades, yet it underscores a significant shift in the frequency with which teachers implement metacognitive and critical thinking strategies as grade level increases. Metacognition emerges during preschool and continues to develop throughout adolescence (Fisher, 1987, 1998). As students become older, their metacognitive abilities become stronger; thus teachers may be more apt to place metacognitive demands on students or to use developmentally appropriate metacognitive learning activities. Another explanation for this shift in educators' practices in the upper grades may be related to state-wide testing requirements. In most states, elementary school state-wide testing occurs in third through fifth grade and generally focuses on students' ability to think critically in the academic areas of literacy, mathematics, science, and social studies. Thus, the observed grade-level effects could be due to greater emphasis on teaching students the skills necessary to pass these tests.

Grade-level effects were also present in the Student Focused Learning and Engagement subscale, with increased usage associated with increased grade level. These findings are surprising given the research efforts in the late 1980s and 1990s to implement developmentally appropriate (i.e., "child-centered") strategies in preschool programs and the primary grades of kindergarten through third grade (e.g., Abbot-Shim & Sibley, 1997; Bredekamp, 1989; Bredekamp & Copple, 1997; Goldstein, 1997; Gronlund, 1995). Although the associated body of research notes differences in teachers' use of developmentally appropriate strategies by grade level, with the lower grades utilizing more such strategies (Abbot-Shim & Sibley, 1997; Buchannan et al., 1998), that research focuses on preschool programs and the primary grades of kindergarten through grade 3. At first glance, the present findings may seem to contradict that literature, but the initial developmentally appropriate and child-centered literature focused exclusively on the primary grades in order to bring the instructional techniques used in kindergarten through third grade more in line with those of preschool programs. There is therefore insufficient evidence that developmentally appropriate strategies occur at greater frequencies in the primary grades of kindergarten through third grade than in higher grades.
Second, the revised developmentally appropriate guidelines set forth by the National Association for the Education of Young Children (Bredekamp & Copple, 1997) addressed the need for teachers to utilize both child-centered learning and traditional techniques (e.g., direct instruction) from kindergarten through eighth grade. Thus, it is well within the spectrum of expectations to find upper grade teachers using child-focused strategies.

Overall, no association was found between years of teaching experience and either (a) the frequency of use of instructional or behavioral management strategies or (b) the appropriateness of use of these strategies. Thus, teachers in this study used instructional and behavioral management strategies at consistent rates across years of teaching experience. Research on the moderating effect of years of teaching experience has primarily focused on student academic outcomes (Monk, 1994; Wang et al., 1993) rather than on teacher professional practice (e.g., Ghaith & Yaghi, 1997; Guskey, 1988). Related to the present study's findings, Guskey (1988) found that teachers' willingness to use new instructional practices was not moderated by their years of experience, whereas Ghaith and Yaghi (1997) found that years of experience was negatively associated with teachers' willingness to adopt new practices.

4.3. Limitations and future directions

The teachers in this study came from only two geographic regions in the Northeast and taught kindergarten through fifth grade. Participants also included only general education teachers. The study did not collect detailed information on educators' prior education, training, or professional development. Thus, these results may not generalize to other geographic regions, grade levels, teachers with particular training or professional development experiences, or special education settings. Further, these results


represent a sampling of teacher behavior: one hour of instructional time split across two lessons. However, it is important to note that the observation procedures used in the current study are consistent with observational practices commonly conducted in schools by elementary school principals and school personnel. The observational data are also limited to the specific operational definitions used in the coding scheme; observational codes with different foci may yield different results. Teachers were aware of the observer's presence in the classroom, which may have influenced their behaviors due to reactivity or demand characteristics. Procedures were in place to reduce the impact of the observer on teacher behavior (i.e., teachers and observers signed a written agreement that CSS scores could not be used for teacher performance evaluations, observations were announced, and observers did not interact with teachers or students while observing). Despite these efforts, this study did not examine the influence of observers on teacher practices, and thus the results must be viewed in light of this potential limitation. This study also did not assess the relation of teacher behavior (or changes in teacher behavior) to student academic or behavioral outcomes. Thus, it remains unknown whether increased use of CSS strategies results in changes in student outcomes. Further, alternative measurement approaches are needed to study whether teacher behaviors intended to modify student behavior (academic or social) actually did so (see, for example, the Student Behavior Teacher Response observation code, which addresses the dependencies between teacher and student behaviors; Pelham, Greiner, & Gnagy, 2008; Vujnovic et al., in press). Finally, this study did not examine why teachers use specific strategies or sets of strategies. Further studies are needed to determine the reasons teachers use or do not use strategies known to be related to effective instruction and behavior management.

4.4. Implications for practice

The present results have implications for school psychologists, general education teachers, general education training programs, and professional development efforts. Results of this investigation suggest good news: in a sample of over 300 general education teachers, there was consistent evidence that general education teachers used best practices in two half-hour samples of their instruction time. On the other hand, rates of use were modest in some cases, and group averages were lower than research-based recommendations, suggesting that recommendations made for teachers decades ago remain aspirational (e.g., the overall ratio of praise to corrective feedback was close to 1:1 rather than the recommended 3:1 or greater). The reasons for the discrepancy between observed practice and professional recommendations are not clear. It may be that general education teachers are not learning effective strategies for instruction and behavior management in their educational programs, or they may have learned these strategies and drifted from best practice. It is interesting that teachers use fewer metacognitive strategies and concept summaries with younger students compared to older students, even though metacognitive instructional strategies and concept summaries have been found useful for all ages, including preschoolers (e.g., Fisher, 1987, 1998).
Similarly, teachers provide fewer praise statements as children progress through school, possibly due to increased expectations of student independence and self-management. Yet teachers could praise student behaviors that represent independence and self-management in an effort to shape and promote such behaviors. Educators' modest rates of strategy usage offer opportunities for school psychologists to engage in collaborative consultation aimed at improving teachers' Tier 1 practices. Overall, methods are needed to help teachers use and sustain these strategies within a single school year and across successive school years.

The present study indicated that, across multiple instructional and behavioral management strategies, years of teaching experience was not related to strategy usage or to discrepancy in strategy usage. This is an important finding for individuals responsible for professional development: such efforts will likely need to be targeted across faculty regardless of experience. Findings also suggest that a measure such as the CSS may be a valuable tool for providing individualized teacher feedback and follow-up support tailored to a teacher's repertoire of current practice (Reddy & Dudek, in press; Reddy, Fabiano, Barbarasch, & Dudek, 2012). An additional advantage of a measure such as the CSS is that it can be re-administered in an ongoing fashion to document a teacher's use of specific strategies over time and across content areas, serving as a means of monitoring implementation progress. Additionally, obtaining teachers' input on their use of best practices may promote self-reflection, collaboration, and communication with consultants (Reddy & Dudek, in press).

5. Conclusion

Overall, this investigation presents findings related to kindergarten through fifth-grade general educators' use of instructional and behavior management strategies. Results suggest teachers are using many best practice approaches for promoting student learning and managing their classrooms, yet teachers do have areas that may warrant improvement. This study offers a snapshot of contemporary general education practice, and it yields useful information for principals, school psychologists, and directors of curriculum and instruction charged with ensuring that all students receive optimal educational opportunities.

References

Abbot-Shim, M., & Sibley, A. (1997). Developmentally appropriate practices across grade levels. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
Abramowitz, A. J., O'Leary, S. G., & Rosen, L. A. (1987). Reducing off-task behavior in the classroom: A comparison of encouragement and reprimands. Journal of Abnormal Child Psychology, 15, 153–163.
Acker, M. M., & O'Leary, S. G. (1987). Effects of reprimands and praise on appropriate behavior in the classroom. Journal of Abnormal Child Psychology, 15(4), 549–557.


Adey, P., & Shayer, M. (1993). An exploration of long-term far-transfer effects following an extended intervention programme in the high school science curriculum. Cognition and Instruction, 11(1), 1–29. Alber, S. R., Heward, W. L., & Hippler, B. J. (1999). Teaching middle school students with learning disabilities to recruit positive teacher attention. Exceptional Children, 65, 253–270. Bales, B. L. (2006). Teacher education policies in the United States: The accountability shift since 1980. Teaching and Teacher Education, 22, 395–407. Bangert-Drowns, R. L., Hurley, M. M., & Wilkinson, B. (2004). The effects of school-based writing-to-learn interventions on academic achievement: A meta-analysis. Review of Educational Research, 74, 29–58. Beaman, R., & Wheldall, K. (2000). Teacher's use of approval and disapproval in the classroom. Educational Psychology, 20, 431–446. Bender, W. N. (2008). Differentiating instruction for students with learning disabilities: Best teaching practices for general and special educators (2nd ed.) CityThousand Oaks, CA: Corwin Press. Bredekamp, S. (Ed.). (1989). Developmentally appropriate practice in early childhood programs serving children from birth through age 8. Washington, DC: National Association for the Education of Young Children. Bredekamp, S., & Copple, C. (Eds.). (1997). Developmentally appropriate practice in early childhood programs (Rev. ed.). Washington, DC: National Association for the Education of Young Children. Brophy, J. (1998). Motivating students to learn. New York, NY: McGraw-Hill. Brophy, J., & Alleman, J. (1991). A caveat: Curriculum integration isn't always a good idea. Educational Leadership, 49(2), 66. Brophy, J. E., & Good, T. (1986). Teacher behavior and student achievement. In M. C. Wittrock (Ed.), Handbook of research in teaching (pp. 328–375) (3rd ed.). New York, NY: Macmillian. Buchannan, D. C., Burts, J. B., White, V. F., & Charlesworth, R. (1998). Predictors of the developmental appropriateness of the beliefs and practices of first, second, and third grade teachers. Early Childhood Research Quarterly, 13, 459–483. Cantrell, S. (2013). Ensuring fair and reliable measures of effective teaching. Bill & Melinda Gates Foundation. Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6, 284–290. Creemers, B. P. M. (1994). The effective classroom. London, UK: Cassell. Duncan, A., Gurria, A., & van Leeuwen, F. (2011). Uncommon wisdom on teaching. Retrieved from the internet July 22, 2011 from http://www.huffingtonpost. com/arne-duncan/uncommon-wisdom-on-teachi_b_836541.html Englert, C. S. (1983). Measuring special education teacher effectiveness. Exceptional Children, 50, 247–254. Fabiano, G. A., Pelham, W. E., Gnagy, E. M., Burrows-MacLean, L., Coles, E. K., & Robb, J. A. (2007). The single and combined effects of multiple intensities of behavior modification and multiple intensities of methylphenidate in a classroom setting. School Psychology Review, 36, 195–216. Fisher, R. (Ed.). (1987). Problem solving in primary schools. Oxford, UK: Blackwell. Fisher, R. (1998). Thinking about thinking: Developing metacognition in children. Early Child Development and Care, 141, 1–15. Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2007). Learning disabilities: From identification to intervention. New York, NY: Guilford Press. Forehand, R., & Long, N. (1996). Parenting the strong-willed child. 
Chicago, IL: Contemporary Books. Gable, R. A., Hester, P. H., Rock, M. L., & Hughes, K. G. (2009). Back to basics: Rules, praise, ignoring and reprimands revisited. Intervention in School and Clinic, 44, 195–205. Gage, Nathaniel L. (1978). The scientific basis of the art of teaching. New York: Teachers College Press. Ghaith, G., & Yaghi, H. (1997). Relationships among experience, teacher efficacy, and attitudes toward the implementation of instructional innovation. Teaching and Teacher Education, 13, 451–458. Goldstein, L. S. (1997). Teaching with love: A feminist approach to early childhood education. New York, NY: Peter Lang. Good, T., & Grouws, D. (1977). Teaching effects: A process–product study in fourth grade mathematics classrooms. Journal of Teacher Education, 28, 49–54. Goodwin, L. D., & Goodwin, W. L. (1999). Measurement myths and misconceptions. School Psychology Quarterly, 14, 408–427. Gronlund, N. E. (1995). How to write and use instructional objectives (5th ed.)Englewood Cliffs, NJ: Prentice Hall. Gunter, P. L., & Denny, R. K. (1998). Trends and issues in research regarding academic instruction of students with emotional behavioral disorders. Behavioral Disorders, 24, 44–50. Guskey, T. R. (1988). Teacher efficacy, self-concept, and attitudes toward the implementation of instructional innovation. Teaching and Teacher Education, 4, 63–69. Hall, R. V., Panyan, M., Rabon, D., & Broden, M. (1968). Instructing beginning teachers in reinforcement procedures which improve classroom control. Journal of Applied Behavior Analysis, 1, 315–322. Hattie, J. A. (1992). Measuring the effects of schooling. Australian Journal of Education, 36, 5–13. Haywood, H. C. (2004). Thinking in, around, and about the curriculum: The role of cognitive education. International Journal of Disability, Development and Education, 51(3), 231–252. Hedges, L. V., Laine, R. D., & Greenwald, R. (1994). An exchange: Part 1: Does money matter? A meta-analysis of studies of the effects of differential school inputs on student outcomes. Educational Researcher, 23, 5–14. Hines, C. V., Cruickshank, D. R., & Kennedy, J. J. (1985). Teacher clarity and its relationship to student achievement and satisfaction. American Educational Research Journal, 22, 87–99. Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2000). Elements of behavioral support plans: A technical brief. Exceptionality, 8, 205–215. Kalis, T. M., Vannest, K. J., & Parker, R. (2007). Praise counts: Using self-monitoring to increase effective teaching practices. Preventing School Failure, 51, 20–27. Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. MET Research Paper. Seattle, Washington: Bill & Melinda Gates Foundation (Retrieved July 16, 2012, from http://www.metproject.org/downloads/MET_Gathering_ Feedback_Research_Paper.pdf.) Kern, L., & Clemens, N. (2007). Antecedent strategies to promote appropriate classroom behavior. Psychology in the Schools, 44, 65–75. Kirk, R. E. (1982). Experimental design (2nd ed.)Belmont, CA: Brooks/Cole Publishing Company. Knapp, T. R., & Brown, J. K. (1995). Ten measurement commandments that often should be broken. Research in Nursing & Health, 18, 465–469. Kounin, J. S. (1970). Discipline and group management in classrooms. New York, NY: Holt, Rinehart, and Winston. Lindsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. CA, Sage: Thousand Oaks. Madsen, C. H., Becker, W. C., & Thomas, D. R. (1968). 
Rules, praise, and ignoring: Elements of elementary classroom control. Journal of Applied Behavior Analysis, 1, 139–150. Marzano, R. J. (1998). A theory-based meta-analysis of research on instruction. Aurora, CO: Mid-continent Research for Education and Learning (Eric Document Reproduction Service No. ED 427 087). Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development. Mevarech, Z. R., & Kramarski, B. (1997). Improve: A multidimensional method for teaching mathematics in heterogeneous classrooms. American Educational Research Journal, 34(2), 365–394. Moely, B. E., Hart, S. S., Leal, L., Santulli, K. A., Rao, N., Johnson, T., et al. (1992). The teacher's role in facilitating memory and study strategy development in the elementary school classroom. Child Development, 63, 653–672. Moely, B. E., Santulli, K. A., & Obach, M. S. (1995). Strategy instruction, metacognition, and motivation in the elementary school classroom. In F. E. Weinert, & W. Schneider (Eds.), Memory performance and competencies: Issues in growth and development (pp. 301–321). Mahwah, NJ: Erlbaum. Monk, D. (1994). Subject area preparation of secondary mathematics and science teachers and student achievement. Economics of Education Review, 12, 125–145. National Center for Education Statistics (2010). Common Core of Data. Washington, DC: U.S. Department of Education, Institute of Education Sciences. Retrieved from http://nces.ed.gov/ccd/districtsearch/


National Education Association (2010). National Education Association. Retrieved from the http://www.nea.org/ NICHD Early Child Care Research Network (2002a). Child care and children's development prior to school entry. American Education Research Journal, 39, 133–164. NICHD Early Child Care Research Network (2002b). The interaction of child care and family risk in relation to child development at 24 and 36 months. Applied Developmental Science, 6, 144–156. O'Leary, K. D., Kaufman, K. F., Kass, R. E., & Drabman, R. S. (1970). The effects of loud and soft reprimands on the behavior of disruptive students. Exceptional Children, 37, 145–155. Partin, T., Robertson, R., Maggin, D., Oliver, R., & Wehby, J. (2010). Using teacher praise and opportunities to respond to promote appropriate student behavior. Preventing School Failure, 54, 172–178. Pelham, W. E., Fabiano, G. A., & Massetti, G. M. (2005). Evidence-based assessment of attention-deficit/hyperactivity disorder in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 449–476. Pelham, W. E., Greiner, A. R., & Gnagy, E. M. (1998). Summer treatment program for ADHD: Program manual. Buffalo, NY: CTADD. Pelham, W. E., Greiner, A. R., & Gnagy, E. M. (2008). Student behavior teacher response observation code manual. Unpublished observation code manual. Pfiffner, L. J., Rosen, L. A., & O'Leary, S. G. (1985). The efficacy of an all-positive approach to classroom management. Journal of Applied Behavioral Analysis, 18, 257–261. Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System [CLASS] manual: Pre-K. Baltimore, MD: Brookes Publishing. Reddy, L., & Dudek, C. (in press). Teacher progress monitoring of instructional and behavioral management practices: An evidence-based approach to improving classroom practices. International Journal of School and Educational Psychology. Reddy, L., Fabiano, G., Barbarasch, B., & Dudek, C. (2012). Behavior management of students with Attention-Deficit/Hyperactivity Disorders using teacher and student progress monitoring. In L. M. Crothers, & J. B. Kolbert (Eds.), Understanding and managing behaviors of children with psychological disorders: A reference for classroom teachers (pp. 17–47). New York, New York: Continuum International Publishing. Reddy, L., Fabiano, G., & Dudek, C. (2013). Concurrent validity of the Classroom Strategies Scale—Observer Form. Journal of Psychoeducational Assessment, 31, 258–270. Reddy, L., Fabiano, G., Dudek, C., & Hsu, L. (2013a). Development and construct validity of the Classroom Strategy Scale-Observer Form. School Psychology Quarterly. Reddy, L. A., Fabiano, G., Dudek, C. M., & Hsu, L. (2013b). Predictive validity of the Classroom Strategies Scale-Observer Form on statewide testing. School Psychology Quarterly. Reddy, L. A., Kettler, R. J., & Kurz, A. (submitted for publication). School-wide educator evaluation for improving school capacity and student achievement in high poverty schools: Year 1 of the school system improvement project. (submitted for publication). Rosen, L. A., O'Leary, S. G., Joyce, S. A., Conway, G., & Pfiffner, L. J. (1984). The importance of prudent negative consequences for maintaining the appropriate behavior of hyperactive students. Journal of Abnormal Child Psychology, 12, 581–604. Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M. C. Witrock (Ed.), Handbook of research on teaching (pp. 376–391) (3rd ed.). New York, NY: Macmillan. Santuli, K. A. (1991). 
Teachers' role in facilitating students strategic and metacognitive processes during the representational, solution, and evaluation phase of mathematics problem solving. (Dissertation Abstracts International), 52 (10). (pp. 5559), 5559 (University Microfilms No. AAC92-09661). Shores, R. E., Gunter, P. L., & Jack, S. L. (1993). Classroom management strategies: Are they setting events for coercion? Behavioral Disorders, 2(18), 92–102. Stipek, D. J., & Byler, P. (1997). Early childhood education teachers: Do they practice what they preach? Early Childhood Research Quarterly, 12, 305–325. Stitcher, J. P., Lewis, T. J., Richter, M., Johnson, N. W., & Bradley, L. (2006). Assessing antecedent variables: The effects of instructional variables on student outcomes through in-service and peer coaching professional development models. Education and Treatment of Children, 29, 665–692. Stitcher, J. P., Lewis, T. J., Whittaker, T. A., Richter, M., Johnson, N. W., & Trussell, J. R. (2009). Assessing teacher use of opportunities to respond and effective classroom management strategies: Comparisons among high-and low-risk elementary school. Journal of Positive Behavior Interventions, 11, 68–81. Sugai, G., & Horner, R. H. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child and Family Behavior Therapy, 24, 23–50. Sugai, G., & Horner, R. H. (2007). Is school-wide Positive Behavioral Support an evidence-based practice? Downloaded from the world wide web October 24, 2007, http://pbis.org/files/101007evidencebase4pbs.pdf Sutherland, K. S., Adler, N., & Gunter, P. L. (2003). The effect of varying rates of opportunities to respond to academic requests on the classroom behavior of students with EBD. Journal of Emotional and Behavioral Disorders, 11, 239–248. Sutherland, K. S., & Wehby, J. H. (2001). Exploring the relationship between increased opportunities to respond to academic requests and the academic and behavioral outcomes of students with EBD: A review. Remedial and Special Education, 22, 113–121. Sutherland, K. S., Wehby, J. H., & Yoder, P. J. (2002). Examination of the relationship between teacher praise and opportunities for students with EBD to respond to academic requests. Journal of Emotional and Behavioral Disorders, 10, 5–13. Taylor, B. M., Pearson, P. D., Peterson, D. S., & Rodriguez, M. C. (2003). Reading growth in high-poverty classrooms: The influence of teacher practices that encourage cognitive engagement in literacy learning. Elementary School Journal., 104, 3–28. Thomas, D. R., Becker, W. C., & Armstrong, M. (1968). Production and elimination of disruptive classroom behavior by systematically varying teacher's behavior. Journal of Applied Behavior Analysis, 1, 35–45. Tomlinson, C. A., & Edisonson, C. C. (2003). Differentiation in practice: A resource guide for differentiating curriculum, grades K-5. Alexandria, VA: Association for Supervision and Curriculum Development. Vujnovic, R. K., Fabiano, G. A., Pelham, W. E., Greiner, A., Waschbusch, D. A., Gera, S., et al. (in press). The Student Behavior Teacher Response (SBTR) System: Preliminary psychometric properties of an observation system to assess teachers' use of effective behavior management strategies in preschool classrooms. Education and Treatment of Children (in press). Walberg, H. J. (1986). Synthesis of research on teaching. In M. Wittrock (Ed.), Handbook of Research on Teaching (3rd ed.). New York, NY: Macmillan. Walker, H. M., & Buckley, N. K. (1968). 
The use of positive reinforcement in conditioning attending behavior. Journal of Applied Behavior Analysis, 1, 245–252. Walker, H. M., Colvin, G., & Ramsey, E. (1995). Antisocial behavior in school: Strategies and best practices. Pacific Grove, CA: Brooks/Cole. Walker, H. M., & Eaton-Walker, J. E. (1991). Coping with noncompliance in the classroom: A positive approach for teachers. Austin, TX: Pro-Ed. Wang, M. C. (1991). Productive teaching and instruction: Assessing the knowledge base. Phi Delta Kappan, 71, 470–478. Wang, M. C., Haertel, G. D., & Walhberg, H. J. (1993). Toward a knowledge base for school learning. Review of Educational Research, 63, 249–294. Ward, M. H., & Baker, B. L. (1968). Reinforcement therapy in the classroom. Journal of Applied Behavior Analysis, 1, 323–328. Wenglinsky, H. (February 13). How schools matter: The link between teacher classroom practices and student academic performance. Education Policy Analysis Archives, 10(12), (Retrieved November 2, 2005, from: http://epaa.asu.edu/epaa/v10n12/) White, M. A. (1975). Natural rates of teacher approval and disapproval in the classroom. Journal of Applied Behavioral Analysis, 8, 367–372. What Works Clearinghouse (2012). What works clearinghouse. Downloaded from the internet on April 15, 2012 at http://ies.ed.gov/ncee/wwc/ Ysseldyke, J., & Burns, M. (2009). Functional assessment of instructional environments for the purpose of making data-driven instructional decisions. In T. Gutkin, & C. Reynolds (Eds.), The handbook of school psychology (pp. 410–433) (4th ed.). Hoboken, NJ: Wiley. Ysseldyke, J., & Elliott, J. (1999). Effective instructional practices: Implications for assessing educational environments. In C. Reynolds, & T. Gutkin (Eds.), The handbook of school psychology (pp. 497–518) (3rd ed.). New York, NY: Wiley.
