Research in Developmental Disabilities 35 (2014) 529–536


The effects of in-service training alone and in-service training with feedback on data collection accuracy for direct-care staff working with individuals with intellectual disabilities

Jared Jerome a, Howard Kaplan b, Peter Sturmey a,*

a Queens College and the CUNY Graduate Center, United States
b PSCH Inc., United States

Article history: Received 1 November 2013; Accepted 6 November 2013; Available online 28 December 2013.

Abstract

Three residential staff aged 22–38 years participated in this study, which measured the accuracy of their data collection following instruction, in-service, and in-service plus feedback. The experimenter trained them to collect data on targeted maladaptive behavior of one consumer at one time of the day. Following the in-service and the in-service plus feedback trainings, the experimenters assessed whether data collection accuracy increased for that consumer at that time and whether these improved data collection skills generalized to other consumers and different times. The experimenter used a multiple-baseline-across-participants design to demonstrate experimental control. All three staff improved their data collection accuracy from instruction to in-service, and then from in-service to in-service plus feedback. Additionally, improved data collection generalized to a second consumer and a second time period. Future research should extend these findings by measuring the effects of more accurate data collection on other functional dependent variables, such as accuracy of staff implementation of behavior plans, frequency of maladaptive behavior, and amount of prescribed psychotropic medications. © 2013 Elsevier Ltd. All rights reserved.

Keywords: Data accuracy; Direct-care staff; Reliability; Behavioral skills training; Feedback

1. Introduction

One of the primary responsibilities of staff working in residential settings with people with intellectual disabilities who exhibit challenging behavior is to collect and record frequency, duration, and intensity data accurately. If staff do not accurately record maladaptive behavior for these individuals, psychologists or behavior specialists may write incorrect formal behavior plans for them, and psychiatrists may prescribe unnecessary medications or incorrect dosages. Thus, the accuracy of caregiver data is very important.

In an attempt to address this issue, Hrydowy and Martin (1994) investigated a staff management package to increase and maintain behavioral training skills of direct-care staff. The experimenters monitored the behavior of three direct-care staff while the staff conducted a prevocational program with 27 adults with developmental disabilities. The intervention was an easy-to-apply checklist, used weekly by a supervisor to give feedback to direct-care staff, which listed appropriate behavioral skills training methods. Following interactions between direct-care staff and consumers, supervisors provided positive verbal feedback when staff used methods listed in the checklists and explained what they could have done following

* Corresponding author. Tel.: +1 7189973234. E-mail address: [email protected] (P. Sturmey).
0891-4222/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.ridd.2013.11.009


instances in which they did not follow the methods listed on the checklist. Use of the checklist during a weekly morning work session in a multiple-baseline-across-participants design led to immediate improvement in staff performance during daily morning work sessions. The experimenters also demonstrated generalization to daily afternoon work sessions with a different group. Further increases in staff performance occurred when the experimenters added the management procedure to afternoon work sessions. After the experimenters decreased the management procedure from weekly to biweekly use, they maintained staff performance over 4 months. When performance of direct-care staff improved, on-task behavior of most participants also substantially increased. Direct-care staff preferred the staff management package to the methods of supervision they had received before the study, and the training unit adopted the checklist for continuous use at the conclusion of the study.

In another study addressing staff performance, Parsons and Reid (1995) trained supervisors to provide feedback on the teaching skills of direct-care staff working with adults with developmental disabilities and examined the effects of the supervisors' feedback on the maintenance of staff teaching performance. One male and nine female direct-care staff members in a residential facility participated. During baseline, experimenters observed supervisors while they trained direct-care staff. The experimenters did not provide any feedback to the supervisors regarding their teaching or feedback skills. The first intervention, the teaching-skills program, involved four hours of classroom training followed by observations of supervisors' teaching skills with consumers. Experimenters provided supervisors with immediate feedback regarding their teaching skills in the same manner that supervisors would be required to provide feedback to the direct-care staff they train.
Observation and feedback continued until the supervisors met the criterion of 80% correct performance. Experimenters then observed 7 supervisors while they provided feedback to direct-care staff. Six supervisors did not meet criteria for correct feedback delivery following the implementation of the teaching-skills-program intervention. The second intervention was a feedback program to train supervisors to observe a teaching session systematically and provide feedback to the staff regarding teaching proficiency. The feedback program consisted of four hours of classroom instruction designed to familiarize supervisors with procedures for monitoring staff teaching skills, and a description of eight components for giving feedback. After the experimenters provided verbal and written descriptions of the eight components for giving feedback, the supervisors and experimenters role-played the techniques. As in baseline and the teaching-skills program, experimenters observed feedback skills, but unlike the other two conditions, experimenters provided feedback on the correct performance of the eight components. Finally, the experimenters recorded follow-up data 42 and 82 days later. To demonstrate experimental control, experimenters used a multiple-probe design across two groups of supervisors to assess the effects of the teaching-skills program, and the same design across three groups of supervisors to assess the effects of the feedback program. Correct teaching behavior across all supervisors increased from 64% during baseline to 93% during the teaching-skills-program intervention. From baseline to the feedback program, correct teaching behavior across all supervisors increased from 41% to 86% accuracy. The follow-up probes indicated maintenance of feedback skills well above pre-training levels, although the study did not provide the data.
Petscher and Bailey (2006) examined whether a treatment package of prompting and self-monitoring with accuracy feedback would improve the accuracy of staff use of token economies. Three women with less than one year of experience working at their current job participated. The study took place in a self-contained classroom with 9 boys and 2 girls with behavioral problems, who ranged in age from 10 to 14 years. During baseline, experimenters collected data on accuracy of token economy implementation by staff under routine classroom conditions based on the training they had received from the school system. The experimenters did not tell the staff members why they were in the classroom and discreetly observed and recorded their behavior on clipboards and data sheets. During the first training session, the experimenter met with participants and explained the goals, procedures, dependent variables, and expectations of the study by modeling and giving written post-tests that required participants to identify antecedents and appropriate responses. The session lasted approximately 30 min and ended when all participants had answered all post-test questions correctly. The second intervention involved the use of prompting, self-monitoring, and accuracy feedback to increase staff's appropriate use of token economies. The experimenter gave each participant a buzzer to be used as a tactile prompt to perform three separate tasks: managing disruptions, bonus-point delivery, and prompting appropriate behavior. When the experimenter observed a situation in which it was appropriate for one of these techniques to be implemented, she signaled the participant as a reminder to do so. After each session, the experimenter handed out a form to each participant on which they were to write how accurately they had implemented the correct technique at the correct time.
The experimenter then recorded accuracy of self-monitoring for each participant and provided feedback for any missed opportunities. When participants both responded to prompts and accurately assessed their own performance during 100% of sessions, the experimenter removed the prompts and measured the maintenance of the behaviors. A multiple-baseline-across-behaviors design was used to demonstrate experimental control. None of the three participants responded to disruptions, delivered bonus points, or prompted appropriate behavior during baseline or following the in-service training. All three participants did, however, show an increase in correct responding on all three dependent variables following the prompting, self-monitoring, and accuracy feedback package and continued responding during the maintenance phase. The study demonstrated that in-service training alone is insufficient to train staff to appropriately manage the responses for which a subsequent time-out procedure is needed.

Mozingo, Smith, Riordan, Reiss, and Bailey (2006) evaluated the effects of a staff training and management package on accuracy of frequency-recording of problem behavior by staff in a residential care facility for adults with a developmental disability. Eight direct-care instructors (DCIs) aged 22–60 years participated. During baseline, DCIs followed their usual
routines while experimenters and supervisory staff gave performance feedback without a specific protocol. During the first intervention, the experimenters conducted a 45–60 min lecture on frequency data collection. The experimenter then gave each participant one 8 cm by 13 cm index card with accurate definitions and space to enter data for each consumer with target behaviors. Participants then collected data on each occurrence of target behavior and the time at which it occurred. During the second intervention the experimenter was present during data collection and provided feedback. At the onset of each session, the experimenter observed the observation area for 6 min and recorded each target behavior that occurred on his or her own set of index cards. The experimenter then approached each DCI one at a time, compared the DCI's data entries to his or her own entries, and provided the following feedback: (a) a statement of agreement that the target behavior did or did not occur, (b) praise for agreements, (c) prompts to practice by copying the supervisor's entry if there were disagreements, and (d) reminders to continue to make entries throughout the work shift. The final condition involved experimenter presence without feedback. During this condition the supervisor entered the observation area, made entries on his or her index cards, and left at the end of 6 min without providing feedback. The dependent variable was percentage agreement between each DCI and the experimenters regarding the time, code, and frequency of consumer target behavior. In-service training had no effect on frequency-recording performance for 5 of the 8 DCIs. Experimenter presence and feedback resulted in data collection improvements for the 5 DCIs who had not responded to the initial training. The study demonstrated maintenance of the improvements in data collection during experimenter presence without feedback, although additional data are required to confirm this finding.
One implication of the findings was that the experimenter or supervisor presence already inherent in the treatment environment did not lead to accurate frequency recording; therefore, the introduction of staff management techniques was necessary. Additionally, in-service training by itself was ineffective in improving performance, suggesting that skill deficits alone were not the major factor contributing to DCI baseline or post-in-service performance. Future research should extend Mozingo et al. (2006) by: (a) demonstrating that improved data collection can generalize across different consumers at different times of the day; and (b) addressing the importance of accurate data collection for the overall well-being of individuals with intellectual disabilities. Therefore, the aims of this study were to (a) examine first whether an in-service intervention, and then an in-service plus feedback intervention, affect data-recording accuracy for direct-care staff in a residential facility for adults with intellectual disabilities and (b) determine whether these data-recording skills generalize to different times of the day and different target behaviors of different individuals with intellectual disabilities.

2. Method

2.1. Participants

Three direct-care counselors (DCCs) working in a residential facility for adults with intellectual disabilities participated. Latisha was 22 years old and had worked in the field of adults with intellectual disabilities for four months. Renee was 25 years old and had worked in the field for one year. Sandra was 38 years old and had worked in the field for 13 years. They recorded data on the target behaviors of two adults with intellectual disabilities. Beth was 32 years old and Matt was 53 years old; both were diagnosed with profound intellectual disabilities.

2.2. Setting

The study was conducted in a residential home for adults with intellectual disabilities administered by an agency with group homes and day treatment programs for adults with intellectual disabilities located around the metropolitan area. The residence housed 13 adults with intellectual disabilities and had seven bedrooms, a kitchen, living room, dining area, and a finished basement with staff offices. The DCCs recorded behavioral data in all parts of the residence while the experimenter observed them.

2.3. Pre-experimental inter-observer agreement

Two observers (the first author and a manager who worked for the agency where the study was conducted) separately observed two consumers. These were the same two consumers whom the DCCs would later observe. The experimenters chose these consumers based on the high frequency of their target behaviors; each had historically averaged at least two occurrences of the target behavior over the course of an 8-h shift. Choosing consumers with a high frequency of target behavior provided a larger data sample. The two observers independently observed each of the consumers during a 15-min observation period, defined as a session. The session was divided into 15 1-min intervals. Observers recorded the target behavior using a frequency-within-interval recording method on a data sheet with 15 boxes, each box representing 1 min. The two observers marked a check in a given box if the consumer's target behavior occurred during that interval and an "X" in the box if it did not. After each session was completed, observers compared their data sheets using point-by-point agreement and calculated their total numbers of agreements and disagreements. Inter-observer agreement (IOA) was considered sufficient when there was at least 90% agreement for overall, occurrence, and non-occurrence data points for three
consecutive sessions. The two observers agreed 100% of the time across three consecutive sessions for overall, occurrence, and non-occurrence data. The experimenters used these data as a comparison for subsequent data collection.

2.4. Dependent variable and data collection

The dependent variable was the percentage of accurate data collection by DCCs, calculated by comparing the data collected by the DCCs to the observers' data collected during the same session, for the same target behavior and resident. The experimenters compared the data throughout the remainder of the study in the same way as the data comparison between observers described in the pre-experimental IOA condition. During at least 44% of sessions across all phases, both expert observers recorded data along with the DCC to ensure that IOA remained reliable.

2.5. Instructions

The experimenter gave a frequency-within-interval data recording sheet to one of the participating DCCs and said: "You will be recording data on (consumer name) for the next 15 min. This 15-min session will be divided into 15 1-min intervals. Each box on this data sheet represents a 1-min interval. If (name) engages in his/her target behavior at least once within a 1-min interval, you will mark a check in the corresponding box. If (consumer name) does not engage in his/her target behavior during the 1-min interval, you will record an 'X' in the corresponding box. When the session is over, please place the data sheet in this binder." The experimenter or manager also recorded data in the same way during each session and did not provide any further instruction or feedback to the DCCs. All three DCCs recorded data across two different consumers and two different time periods. For each individual session, however, the DCCs recorded data on one consumer at one time.
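The agreement indices used throughout the study (overall point-by-point, occurrence, and non-occurrence agreement) can be sketched in Python. This is an illustrative reconstruction, not code from the study: a session is represented as 15 booleans (one per 1-min interval), and the occurrence and non-occurrence formulas follow the standard interval-IOA conventions the text implies (agreement restricted to intervals scored as an occurrence, or non-occurrence, by at least one observer).

```python
# Illustrative sketch (not from the study): interval-based agreement indices
# for one 15-min session recorded as 15 booleans (True = behavior occurred).

def overall_agreement(a, b):
    """Point-by-point agreement: % of intervals where both sheets match."""
    agreements = sum(x == y for x, y in zip(a, b))
    return 100.0 * agreements / len(a)

def occurrence_agreement(a, b):
    """% of intervals scored as an occurrence by either observer
    that were scored as an occurrence by both."""
    either = sum(x or y for x, y in zip(a, b))
    both = sum(x and y for x, y in zip(a, b))
    return 100.0 * both / either if either else 100.0

def nonoccurrence_agreement(a, b):
    """The same index computed on non-occurrences."""
    return occurrence_agreement([not x for x in a], [not x for x in b])

# Hypothetical example sheets (13 of 15 intervals match):
sheet_a = [True, True, False, False, True, False, False, False,
           True, False, False, False, False, False, False]
sheet_b = [True, False, False, False, True, False, False, False,
           True, False, False, False, False, True, False]
```

With these hypothetical sheets, overall agreement is 13/15 ≈ 86.7%, occurrence agreement is 3/5 = 60%, and non-occurrence agreement is 10/12 ≈ 83.3% — illustrating why the occurrence and non-occurrence indices are the more conservative checks for low-rate and high-rate behavior, respectively.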
After each of the three DCCs finished recording data, the experimenter compared the DCC's data sheet to their own and calculated percentage accuracy. The experimenter conducted this procedure for all three DCCs until stable baselines were established.

2.6. In-service

During the in-service phase the experimenter read aloud to each DCC the operational definition of the target behavior. The experimenter also informed the DCC how important it is to record data accurately, explaining that the way the consumers are handled by professionals, their behavior plans, and their prescribed medications depend on how accurately their behavioral data are recorded. The experimenter provided the same instructions on how to collect data using a frequency-within-interval recording method as in the instruction phase and then asked the DCCs to repeat back the operational definition of the target behavior accurately and the reasons why it is important to record data accurately. The in-service phase lasted between 5 and 15 min. Then, both the DCC and the experimenter recorded the frequency of the target behavior within intervals for the next 15 min.

2.7. In-service plus feedback

During in-service plus feedback, the experimenter in-serviced the DCCs in the same manner as during the first intervention; however, this time, DCCs recorded data on only one consumer at one point in time. After a DCC had accurately recorded data during 80% of the session for 3 consecutive sessions, training was completed and the DCC began recording data on both consumers at both points in time. Again, both the DCC and the experimenter recorded the frequency of the target behavior within intervals for the next 15 min. The experimenter and the DCC then compared their data sheets in the same manner as described in the pre-experimental IOA phase.
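The training-completion rule above (80% accuracy for 3 consecutive sessions) can be sketched as a simple check. This is an illustrative helper under stated assumptions, not material from the study; `accuracies` is assumed to be a session-ordered list of percentage scores.

```python
# Illustrative sketch: has a DCC met the training-completion criterion of
# at least `threshold`% accuracy for `run` consecutive sessions?

def criterion_met(accuracies, threshold=80.0, run=3):
    """Return True once any `run` consecutive session accuracies
    are all at or above `threshold`."""
    streak = 0
    for acc in accuracies:
        streak = streak + 1 if acc >= threshold else 0
        if streak >= run:
            return True
    return False
```

A streak counter (rather than checking only the last three sessions) matches the "3 consecutive sessions" wording: a single sub-criterion session resets progress toward completion.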
The experimenter then: (a) provided praise for all agreements between the two sheets; (b) discussed discrepancies between the data sheets and why they might have occurred; (c) provided overall feedback on how well the DCC recorded the data (i.e., their percentage of accurate recording); (d) reminded the DCC to record each instance of target behavior immediately after it occurred; and (e) gave further verbal praise to the DCC for their participation and successful recording.

2.8. Dependent variables

The experimenter recorded three facets of one measure as the dependent variable: the percentage of accurate data collection across the three staff members as measured by (a) overall point-by-point agreement between the observers and participating staff, (b) occurrence agreement, and (c) non-occurrence agreement.

2.9. Experimental design and generalization

The experimenter used a multiple-baseline-across-participants design to demonstrate experimental control. Sessions took place at two specific times during the day: 5:00–5:15 PM and 7:00–7:15 PM. These times were
randomized using a random numbers table to control for order effects. The order of data collection on the two consumers was also randomized in the same way. Data collection on Matt from 5:00 to 5:15 PM was the training probe; data collection on Matt from 7:00 to 7:15 PM, Beth from 5:00 to 5:15 PM, and Beth from 7:00 to 7:15 PM were the generalization probes. When a specific DCC or consumer was not available at a specific time, the experimenter used the random numbers table to determine another data collection combination. A DCC never recorded data on any one consumer for more than one session on any given day, and over the course of the study each DCC recorded data on each consumer the same number of times. Because the DCCs recorded data during different periods of the day and on two different consumers, the experimenters could assess whether the data-collection skills learned during the interventions generalized to different time periods and to different target behaviors of different individuals.
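The randomization step above can be sketched as follows, assuming a software shuffle in place of the printed random numbers table the study used; the consumer names and time slots are taken from the text, but the function itself is a hypothetical illustration.

```python
# Illustrative sketch: randomize the order of (consumer, time slot)
# combinations for a day's sessions, standing in for a random numbers table.
import random

CONSUMERS = ["Matt", "Beth"]  # Matt at 5:00 PM was the training probe
TIME_SLOTS = ["5:00-5:15 PM", "7:00-7:15 PM"]

def randomized_session_order(rng=random):
    """Return all consumer/time-slot combinations in a random order."""
    combos = [(c, t) for c in CONSUMERS for t in TIME_SLOTS]
    rng.shuffle(combos)
    return combos
```

Shuffling the full set of combinations (rather than sampling with replacement) preserves the study's constraint that each combination is observed equally often over time.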

Fig. 1. Percentage of overall agreement of accurate data collection in training and generalization settings as a function of sessions, during instruction, in-service, and in-service plus feedback across three direct-care staff members.


3. Results and discussion

3.1. Overall agreement

During instruction, Latisha had an overall agreement of accurate data collection of 57%, range 40–73%. During in-service, she had an overall agreement of accurate data collection of 85%, range 80–93%. During in-service plus feedback, she had an overall agreement of accurate data collection of 94%, range 87–100%. During instruction, Renee had an overall agreement of accurate data collection of 52%, range 27–87%. During in-service, she had an overall agreement of accurate data collection of 89%, range 80–100%. During in-service plus feedback, she had an overall agreement of accurate data collection of 98%, range 93–100%. During instruction, Sandra had an overall agreement of accurate data collection of 77%, range 60–100%. During

Fig. 2. Percentage of occurrence agreement of accurate data collection in training and generalization settings as a function of sessions, during instruction, in-service, and in-service plus feedback across three direct-care staff members.


in-service, she had an overall agreement of accurate data collection of 98%, range 93–100%. During in-service plus feedback, she had an overall agreement of accurate data collection of 100%. All three staff members increased the accuracy of their data collection from instruction to in-service, and then from in-service to in-service plus feedback (Fig. 1).

3.2. Occurrence agreement

During instruction, Latisha had an occurrence agreement of accurate data collection of 43%, range 14–67%. During in-service, she had an occurrence agreement of accurate data collection of 69%, range 63–80%. During in-service plus feedback, she had an occurrence agreement of accurate data collection of 90%, range 63–100%. During instruction, Renee had an occurrence agreement of accurate data collection of 20%, range 0–38%. During in-service, she had an occurrence agreement of accurate data collection of 75%, range 63–100%. During in-service plus feedback, she had an occurrence agreement of accurate data collection of 98%, range 89–100%. During instruction, Sandra had an occurrence agreement of accurate data

Fig. 3. Percentage of non-occurrence agreement of accurate data collection in training and generalization settings as a function of sessions, during instruction, in-service, and in-service plus feedback across three direct-care staff members.


collection of 59%, range 45–73%. During in-service, she had an occurrence agreement of accurate data collection of 96%, range 83–100%. During in-service plus feedback, she had an occurrence agreement of accurate data collection of 100%. All three staff members increased the accuracy of their data collection from instruction to in-service, and then from in-service to in-service plus feedback (Fig. 2).

3.3. Non-occurrence agreement

During instruction, Latisha had a non-occurrence agreement of accurate data collection of 41%, range 18–75%. During in-service, she had a non-occurrence agreement of accurate data collection of 81%, range 67–91%. During in-service plus feedback, she had a non-occurrence agreement of accurate data collection of 94%, range 82–100%. During instruction, Renee had a non-occurrence agreement of accurate data collection of 43%, range 15–87%. During in-service, she had a non-occurrence agreement of accurate data collection of 84%, range 73–100%. During in-service plus feedback, she had a non-occurrence agreement of accurate data collection of 99%, range 90–100%. During instruction, Sandra had a non-occurrence agreement of accurate data collection of 67%, range 40–100%. During in-service, she had a non-occurrence agreement of accurate data collection of 97%, range 88–100%. During in-service plus feedback, she had a non-occurrence agreement of accurate data collection of 100%. All three staff members increased the accuracy of their data collection from instruction to in-service, and then from in-service to in-service plus feedback (Fig. 3).

Consistent with Parsons and Reid (1995), Petscher and Bailey (2006), and Mozingo et al. (2006), this study demonstrated that although in-service training alone increased the accuracy of data collection by direct-care staff working with adults with intellectual disabilities, adding feedback to the in-service improved accuracy further.
Agencies may still use in-servicing alone if there is not enough time to conduct an in-service plus feedback intervention immediately upon hiring staff, but should eventually use instructions, in-servicing, and feedback to prepare staff for data collection. This study also demonstrated that the improved data collection skills obtained during the training sessions with one consumer at one particular time period generalized to a second consumer and a second time period. This finding is important because agencies rarely have the time or capacity to train staff in all situations, and our data indicate that training in one situation may be sufficient to prompt accurate recording. Future research should extend the findings of this study to determine the benefits of improved accuracy of staff data collection for the consumers, for example by measuring whether improved data collection skills lead to changes in prescribed psychotropic medications or in the overall occurrence of maladaptive behavior. Future research should also focus on target behaviors that occur infrequently but may be more severe in intensity; this type of examination may require changes in the style of data collection.

References

Hrydowy, E. R., & Martin, G. L. (1994). A practical staff management package for use in a training program for persons with developmental disabilities. Behavior Modification, 18, 66–88.

Mozingo, D. B., Smith, T., Riordan, M. R., Reiss, M. L., & Bailey, J. S. (2006). Enhancing frequency recording by developmental disabilities treatment staff. Journal of Applied Behavior Analysis, 39, 253–256.

Parsons, M. B., & Reid, D. H. (1995). Training residential supervisors to provide feedback for maintaining staff teaching skills with people who have severe disabilities. Journal of Applied Behavior Analysis, 28, 317–322.

Petscher, E. S., & Bailey, J. S. (2006). Effects of training, prompting, and self-monitoring on staff behavior in a classroom for students with disabilities. Journal of Applied Behavior Analysis, 39, 215–226.
