Psychological Assessment 2014, Vol. 26, No. 3, 947-957

© 2014 American Psychological Association 1040-3590/14/$12.00 DOI: 10.1037/a0036612

Feasibility of Text Messaging for Ecological Momentary Assessment of Marijuana Use in College Students

Michael M. Phillips, Kristina T. Phillips, Trent L. Lalonde, and Kristy R. Dykema
University of Northern Colorado

Measuring self-reported substance use behavior is challenging due to issues related to memory recall and patterns of bias in estimating behavior. Limited research has focused on the use of ecological momentary assessment (EMA) to evaluate marijuana use. This study assessed the feasibility of using short message service (SMS) texting as a method of EMA with college-age marijuana users. Our goals were to evaluate overall response/compliance rates, trends of data missingness, response times, baseline measures (e.g., problematic use) associated with compliance rates and response times, and differences between EMA reports of marijuana use and timeline followback (TLFB) recall. Nine questions were texted to participants on their personal cell phones 3 times a day over a 2-week period. Overall response rate was high (89%). When examining predictors of the probability of data missingness with a hierarchical logistic regression model, we found evidence of a higher propensity for missingness for Week 2 of the study compared to Week 1. Self-regulated learning was significantly associated with an increase in mean response time. A model fit at the participant level to explore response time found that more time spent smoking marijuana related to higher response times, while more time spent studying and greater "in the moment" academic motivation and craving were associated with lower response times. Significant differences were found between the TLFB and EMA, with greater marijuana use reported through EMA. Overall, results support the feasibility of using SMS text messaging as an EMA method for college-age marijuana users.

Keywords: marijuana, cannabis, ecological momentary assessment, text messaging, compliance

Traditionally, data collection in the area of substance use has relied heavily on biological measures, such as urine screens, and on self-report and recollection of substance use over specific time periods (e.g., 30+ days). Problems that arise in these latter types of assessments include low ecological validity, issues related to memory recall, and patterns of bias in estimating behavior. Alternatively, ecological momentary assessment (EMA) has been utilized for research purposes to gather data in the moment (Shiffman, 2009a; Lukasiewicz et al., 2007) within a participant's actual environment, using multiple assessments over time (e.g., a several-week period) to provide more real-time information (Shiffman, Stone, & Hufford, 2008). By exploring naturally occurring settings and how these environments influence the behavior in question, a more thorough examination of trends in psychological phenomena and behavioral patterns can be attained (Ferguson & Shiffman, 2011).

One major benefit of EMA lies in the nature of "in the moment" data collection. Traditional retrospective self-report measures often rely on memory recall, which can be problematic, particularly in the area of substance use research. Time can degrade participant recall, leading to incomplete, inaccurate, or biased information (Ferguson & Shiffman, 2011). Global assessments of substance use behaviors can be limiting due to inaccurate reports of behaviors and other bias, such as the tendency for "heaping" or "digit bias," where participants report estimates of their substance use around rounded values (Shiffman, 2009b). For example, when examining cigarette consumption behaviors among 232 smokers, Shiffman (2009b) found that both participant global self-reports (i.e., the average number of cigarettes smoked at baseline) and timeline followback (TLFB) calendar recall data included digit bias. Participant EMA data showed random distributions of smoking behavior, while the global self-reports and the TLFB data were clumped around rounded estimates much more often than would be expected. Though significant, the correlation between EMA and TLFB reports of cigarette consumption on a day-to-day basis was only 0.29. When comparing daily cigarette consumption reported through EMA and TLFB, participants reported smoking an average of 2.5 fewer cigarettes per day through EMA. However, participants reported more smoking through EMA on one third of the days monitored. Variations in carbon monoxide measures were associated with self-reported EMA cigarette consumption but not TLFB, demonstrating better validity for reports of momentary information with EMA.

This article was published Online First April 21, 2014.
Michael M. Phillips and Kristina T. Phillips, School of Psychological Sciences, University of Northern Colorado; Trent L. Lalonde, Applied Statistics and Research Methods, University of Northern Colorado; Kristy R. Dykema, School of Psychological Sciences, University of Northern Colorado.
We would like to thank our research assistants from the Motivation and Addiction Research Lab at the University of Northern Colorado for their time and diligence in data collection.
Correspondence concerning this article should be addressed to Michael M. Phillips, School of Psychological Sciences, Campus Box 94, University of Northern Colorado, Greeley, CO 80639. E-mail: Michael.Phillips@unco.edu


Using Shiffman's (2009b) data, Griffith, Shiffman, and Heitjan (2009) compared EMA and TLFB reports of cigarette consumption using a Bland-Altman analysis (Altman & Bland, 1983). Similarly, cigarette counts were greater through TLFB compared to EMA. The authors concluded that EMA and TLFB measures are not equivalent for assessing heavy cigarette smoking and that participants likely failed to report some cigarette use through both self-report methods.

Additional studies have examined the validity of EMA for measuring substance use and other behaviors and mood states, though few have compared EMA with established, valid, and reliable measures. A recent study by Serre and colleagues (2012) compared EMA reports of substance use, anxiety, and mood with baseline measures assessing substance use severity (Addiction Severity Index [ASI]; McLellan et al., 1992), anxiety (Beck Anxiety Inventory [BAI]; Beck, Epstein, Brown, & Steer, 1988), and depression (Beck Depression Inventory [BDI]; Beck, Ward, Mendelson, Mock, & Erbaugh, 1961) among treatment-seeking tobacco, marijuana, opiate, and alcohol users. The 2-week EMA data, assessed with personal digital assistant (PDA) devices, were generally consistent with the baseline measures and showed significant correlations between substance use and ASI scores, state anxiety and BAI scores, and state mood and BDI scores. Participants were given 20 min to respond to signal prompts. Overall response rate was good (83%), though the duration of EMA assessment completion (i.e., the time it took to respond to all of the EMA questions) decreased over the 2-week EMA monitoring period, possibly due to a practice or fatigue effect (Serre et al., 2012).

Substance use research utilizing EMA has focused almost exclusively on tobacco/cigarette and alcohol use. Illicit drugs have been examined less frequently, often due to concerns about compliance with time-consuming protocols and equipment breakage or loss (Shiffman, 2009a). A study with homeless crack cocaine users in treatment (Freedman, Lester, McNamara, Milby, & Schumacher, 2006) followed participants for 2 weeks and demonstrated good compliance (86%) with the extensive 2-week protocol, with only one cell phone out of 30 not returned. Epstein and colleagues (Epstein et al., 2010a, 2010b, 2009; Preston et al., 2009) followed cocaine and heroin users enrolled in methadone treatment for a lengthy period (up to 20 weeks) using PDAs with random and event-contingent prompts. Relatively good compliance was reported with random prompts (75%). Other studies have included polysubstance users, including those using ecstasy and other substances (Hopper et al., 2006), and treatment samples consisting of cocaine, opiate, marijuana, and alcohol users (Johnson et al., 2009). Across these studies, there appear to be minimal concerns about equipment loss or completion of the protocol.

Despite marijuana being the most commonly used illicit drug in the United States (Substance Abuse and Mental Health Services Administration [SAMHSA], 2012), limited research has been conducted using EMA with marijuana users. Of the select studies that have been done, only one assessed feasibility (Serre et al., 2012), and it demonstrated an 80% compliance rate among marijuana users completing a 2-week EMA protocol.
The majority of EMA marijuana studies have focused on patterns of marijuana use, predictors of use and craving, and the influence of in-the-moment affect (mood, anxiety) on marijuana use (Buckner, Crosby, Silgado, Wonderlich, & Schmidt, 2012; Buckner et al., 2011; Johnson et al., 2009; Shrier, Walls, Kendall, & Blood, 2012; Tournier, Sorbara, Gindre, Swendsen, & Verdoux, 2003).

Regardless of the type of substance or other behavior assessed, EMA data collection ranges from low-tech (e.g., paper-and-pencil diaries) to high-tech (electronic diaries, PDAs, interactive voice response or IVR systems, and cell phones) options, with some studies combining methods. Each method introduces unique benefits and limitations as a means of assessment and data collection.

Paper diaries often require participants to complete a number of written entries per day (Trull & Ebner-Priemer, 2009). Typically, participants are asked to follow a set schedule and fill out the provided assessments according to the schedule or complete the diary when the behavior of interest occurs (self- or event-initiated). Paper diaries have limitations due to lack of compliance with the schedule. Rather than completing entries per the research protocol when a specific phenomenon occurs, participants have been known to fill out the assessments at a different time using recall information (Ferguson & Shiffman, 2011; Stone, Shiffman, Schwartz, Broderick, & Hufford, 2003). In one study assessing chronic pain, researchers found that participants self-reported completing their paper diary entries 90% of the time, but upon examining the actual opening and closing of the diary binder through a fitted sensor, the researchers found that compliance was only 11% (Stone, Shiffman, Schwartz, Broderick, & Hufford, 2002; Stone et al., 2003). To improve compliance, some researchers have tried prompting or signaling participants to complete paper diary assessments (Broderick, Schwartz, Shiffman, Hufford, & Stone, 2003). Unfortunately, although this can increase compliance, compliance rates do not reach levels seen with electronic diaries (Broderick et al., 2003). Furthermore, participants may be dishonest about when they complete assessments. One study utilizing signal-contingent paper-based recordings with alcohol-dependent clients found that 70% of debriefed participants admitted to lying about the time and date they completed the assessment (Litt, Cooney, & Morse, 1998). This information is problematic given that EMA is designed to measure behavioral and affect variables in as close to the moment as possible.

Using electronic devices such as PDAs, Palm Pilots, and smartphones for EMA can provide a host of benefits, including the use of signal prompting and an automatic date and time stamp when entries are made (Trull & Ebner-Priemer, 2009). Many of the features provided by such electronic devices can affect compliance with the EMA protocol, which impacts data quality and generalizability. Researchers using EMA with substance-using populations have generally reported adequate to high compliance with PDAs. Using PDAs with heavy drinkers, Collins et al. (1998) achieved close to 85% compliance with an 8-week protocol, while a 2-week EMA protocol by Hufford and colleagues (2002) yielded 86% compliance with college student drinkers. Smoking research (e.g., Shiffman, Paty, Gnys, Kassel, & Hickcox, 1996) has shown similar compliance rates. Piasecki et al. (2011) found that alcohol-tobacco co-users responded to 79% of random prompts over a 21-day EMA period. Use of PDA technology with cocaine users (Epstein & Preston, 2010b) yielded compliance rates between 77% and 81%, with higher participant compliance occurring during periods of abstinence.
Research assessing marijuana use via PDAs has found lower compliance, with one study with college students yielding compliance rates around 62% (Buckner et al., 2011) for random prompts and another with adolescent/young adult community residents (Shrier et al., 2012) demonstrating a response rate of 70%. A study by Serre and colleagues (2012) in France found that treatment-seeking marijuana users responded to 80% of PDA prompts.


Cell phone technology has been utilized in more recent work. In one study with alcohol/tobacco users, researchers used cell phones paired with interactive voice response (IVR) systems and yielded an EMA compliance rate of 65% (Holt, Litt, & Cooney, 2012). With IVR, participants are typically called by the system at random times throughout the day to respond to questions by phone using a numeric keypad. Another EMA feasibility study (Freedman et al., 2006) utilized computer-automated cell phone interviews with homeless adults in treatment for crack cocaine misuse and found 86% compliance. Although not a substance use study, Courvoisier, Eid, and Lischetzke (2012) examined EMA compliance rates of mood reports utilizing a cell phone-based voice response system with Swiss university students and university graduates. A total response rate of 75% over the 7-day EMA period was calculated, with compliance rates varying throughout the course of the week.

Few substance use studies have utilized short message service (SMS) texting as a method of EMA data collection with cell phones. Kuntsche and Robert (2009) used SMS texting and Internet assessment to examine drinking behavior among young Swiss adults. Though an overall response rate was not calculated, the researchers were able to retain 75% of participants across the study. Berkman, Dickenson, Falk, and Lieberman (2011) used SMS texting with participants' personal cell phones to examine cigarette lapses, craving, and mood among 31 heavy smokers trying to quit. An overall response rate of 84% was calculated, with the majority of participant responses (80%) sent within 23 min of receiving the EMA text.

Although compliance rates are generally adequate when using electronic devices for EMA, other issues impacting data quality should be considered. Because EMA is time consuming, researchers should consider the burden threshold associated with specific protocols. Asking participants to respond to multiple prompts with numerous questions daily likely impacts their willingness to respond accurately or at all. Researchers must consider the length of expected responses so that participants are not overwhelmed. As discussed by Courvoisier et al. (2012), compliance might be related to individual differences in participants (e.g., gender, personality traits) or measurement times (day of week or time of day). Because EMA results in substantial amounts of data, it is inevitable that most participants will have some degree of missing data (Smyth & Stone, 2003). If certain patterns are noted for missing data, these need to be corrected for in the data analysis.

Issues of burden and data missingness might be better controlled with specific types of EMA and technology. Because substance misuse is common among young adults, there are considerable advantages to the use of cell phones, especially SMS texting, as a method of EMA. It is estimated that approximately 94% of Americans use a mobile phone, with almost two thirds owning a smart or multimedia phone, and 85% utilizing text messaging (Nielsen Mobile, 2013). At least 98% of college students report owning a cell phone, with mean usage of approximately 4 hr per day (Diamanduros, Jenkins, & Downs, 2007). When examining texting, Raacke and Bonds-Raacke (2011) found that over 88% of college students had a cell phone plan that contained SMS texting services, with the majority having a plan that contained unlimited texting.
Participants reported sending an average of 40 texts and
receiving an average of 44 texts on any given day. It is possible that using text messaging as a form of EMA with young adults may increase response rates and accuracy simply because the population is very familiar with this method of communication.

Certain limitations may be better addressed by utilizing SMS texting with participants' personal cell phones. Purchasing electronic equipment (e.g., PDAs, pagers, cell phones) can be costly to researchers, and there is always the risk that participants may misplace, lose, or damage research equipment, which would require further funds dedicated to replacing equipment. If researchers must purchase equipment, this also limits the number of individuals who can participate in a study at any given time. Providing an electronic device to participants that they don't normally carry runs the risk of them forgetting that device, which can lead to missing data. Using a participant's personal cell phone could address replacement issues and possibly improve compliance and response times.

Past substance abuse studies have found a range of different response/compliance rates across EMA data collection methods, with rates ranging from roughly 62% to 86%. Few studies have examined the use of EMA with marijuana users, and none (to our knowledge) have utilized SMS texting with participants' personal cell phones. The goal of the present study was to assess the feasibility of SMS texting as a means of EMA with college student marijuana users. We aimed to examine response/compliance rates, response times, any patterns of data missingness, baseline measures that could be used to predict compliance or response times, and the difference between EMA and TLFB when reporting marijuana use.

Method

Participants

Forty-eight participants were recruited between March 2011 and April 2012 from a midsized western university. To be eligible to participate, students had to (1) be over the age of 18, (2) be enrolled at the university for a minimum of one prior semester, (3) report using marijuana at least 2 days per week, (4) report that their last marijuana use was within the last week, (5) test positive on a marijuana urine screen, and (6) own a cell phone with text messaging capabilities and understand that standard text messaging rates would apply if they did not have an unlimited texting plan. Data for this study were collected before the passing of Amendment 64, which legalized recreational use of marijuana in the state of Colorado.

Participation in the study was anonymous, in that participants were not asked for last name or university identification number. However, for the EMA protocol, researchers did have access to participants' phone numbers, which were kept confidential. Upon conclusion of all EMA texts, participants' cell phone numbers were deleted from the web-based text-messaging service and any paper documents used for contact purposes were destroyed.

Of the 48 participants, one participant was excluded from all data analyses. This participant stopped responding halfway through the study due to his cell phone becoming disconnected and did not present for the follow-up assessment. The overall study completion rate was 98%. The final sample consisted of 29 females and 18 males with an average age of 19.74 years (SD = 2.22). Although age ranged from 18 to 33, all participants except one were between 18 and 22. The racial/ethnic breakdown of the sample was 81% Caucasian, 4% African American, 4% Latino/Hispanic,
4% Native American, 4% Other, and 2% Asian. Participants were on average sophomore status (range was freshmen to seniors) at the university, with 49% living on campus in the residence halls. There was wide variation in the majors represented (17% education; 15% science, nursing, or pre-health; 15% psychology; 15% other social science; 11% business/marketing; 6% undeclared; 21% other). Mean cumulative grade point average (GPA) was 2.85 (SD = 0.69), with a range from 0.80 to 4.00. Demographics from study participants were representative of the campus community where the data were collected.

Procedure

Participants were recruited through flyers posted around campus, announcements made in lower level undergraduate psychology and science courses, and an e-mail sent to all students living in the residence halls. A brief screening interview was used to determine prospective participants who were eligible for the study. All eligible participants were then scheduled for a baseline appointment at a separate time where they completed informed consent followed by a single-panel marijuana urine dip test (Redwood Toxicology Laboratory). Participants then met with a trained research assistant to complete a structured interview and a series of self-report measures. The initial baseline appointment lasted approximately 60 min. At the end of the baseline appointment, participants were trained on the EMA protocol and were informed that they would receive three text messages randomly throughout the day for the next 14 days. Prior to leaving the lab, participants were sent a practice text message to verify that they received it and to address any potential questions about the protocol.

Participants were asked to respond to a series of signal-contingent questions in the form of text messages (also known as short message service; SMS), with 42 prompts sent over a 2-week period. Participants were signaled on their personal cell phones three times per day randomly within three time blocks (8:00 a.m.-12:00 p.m., 12:30 p.m.-4:30 p.m., and 5:00 p.m.-10:00 p.m.) with the same nine questions, which were sent through two text messages (due to length). A texting schedule for the 14-day EMA period was developed for each participant by randomly selecting an EMA time (in 30-min increments) within each block using a randomization chart (one illustrative way to generate such a schedule is sketched at the end of this section). Texts were sent through a text messaging service (www.redoxygen.com). Using this service, participant responses were time-stamped and later downloaded by the researchers. Participants were not sent a reminder text if they did not respond to the initial text message. The next text message was sent at the scheduled time in the next time block based on the randomization schedule. To establish feasibility and get a sense of general responsiveness, all text message responses received from participants were counted as a response as long as they were received before the next prompt was sent.

EMA questions (see Appendix for questions and SMS shorthand) focused on participants' current activity, academic motivation, craving for marijuana, marijuana use and frequency since last text message, social setting where marijuana was last used (i.e., alone or with others), and learning behaviors (e.g., time spent studying). The term marijuana was not used in any of the text messages, and participants were asked to use smoking as their reference to marijuana use when responding to the signals. Participants were given a small laminated card with the nine questions in their entirety along with the shortened SMS version that would be sent for each text message.

Participants were asked to respond to all nine questions for each texting instance by numbering their responses to correspond with the questions. All participants received their first text message for the EMA protocol at a random time the morning after their baseline appointment. A class schedule was collected from each participant to ensure that texts were not sent during class meeting times; however, text messages were sent with no regard to participants' work schedule. Participants were instructed to respond to each text message immediately when possible and appropriate. Following the 14-day EMA portion of the study, participants were scheduled for a follow-up appointment and sent a text message reminder the day before their appointment. Participants returned to the lab within 1 week of their last text message (average 5.96 days) and met with a research assistant to complete the 30-day TLFB assessment of their marijuana use. Though research assistants were not blind to overall study goals, they were not provided with details surrounding the researchers' hypotheses on EMA-TLFB comparisons and did not have knowledge of how participants responded to marijuana use questions from their EMA responses. Once the TLFB was completed, participants received a $30 gift card as compensation for their participation. Compensation was not contingent upon type or rate of text-messaging responses.
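The randomization procedure above is described only at a high level (one prompt per block per day, drawn in 30-min increments). As an illustration, a minimal sketch of one way such a schedule could be generated is shown below; the block boundaries follow the protocol, but the function and variable names are our own, and the original schedule was built from a randomization chart rather than code.

```python
import random
from datetime import date, timedelta

# Time blocks from the protocol (24-hr clock); candidate prompt times fall
# on the half hour within each block, matching the 30-min increments above.
BLOCKS = [("08:00", "12:00"), ("12:30", "16:30"), ("17:00", "22:00")]

def half_hour_slots(start, end):
    """List every 30-min start time between start and end (inclusive)."""
    h1, m1 = map(int, start.split(":"))
    h2, m2 = map(int, end.split(":"))
    slots, t = [], h1 * 60 + m1
    while t <= h2 * 60 + m2:
        slots.append(f"{t // 60:02d}:{t % 60:02d}")
        t += 30
    return slots

def build_schedule(first_day: date, n_days: int = 14, seed: int = 1):
    """Pick one random prompt time per block per day for n_days."""
    rng = random.Random(seed)
    schedule = []
    for d in range(n_days):
        day = first_day + timedelta(days=d)
        times = [rng.choice(half_hour_slots(s, e)) for s, e in BLOCKS]
        schedule.append((day.isoformat(), times))
    return schedule

if __name__ == "__main__":
    # Print the first three days of a hypothetical participant's schedule.
    for day, times in build_schedule(date(2011, 3, 21))[:3]:
        print(day, times)
```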

Measures

Demographics. Participant gender, age, race/ethnicity, year at university, major, and living situation were self-reported through a questionnaire at the baseline appointment. Participants were asked to sign in to their unofficial university transcript so the interviewer could verify their cumulative and past semester GPA.

Substance use. Marijuana use was assessed using several validated measures and interview questions designed by our research group (discussed below). A single-panel marijuana urine dip test (Redwood Toxicology Laboratory) was used to confirm marijuana use for participant eligibility. In addition, cannabis abuse and dependence criteria according to the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association, 1994) were assessed using a modified version of the Structured Clinical Interview for DSM-IV (SCID; First, Spitzer, Gibbon, & Williams, 2002).

Cannabis Use Disorder Identification Test Revised (CUDIT-R; Adamson et al., 2010). The CUDIT-R is an eight-item measure shown to have high internal consistency (α = .91) and discriminant validity (Adamson et al., 2010), though we found a lower Cronbach's alpha for the scores on this measure in the current study (α = .68). Items assess the frequency of cannabis use behaviors and consequences on a 0-4 scale. Scores range from 0 to 32, with a preliminary cutoff score above 13 indicating possible problematic use.

Rutgers Marijuana Problem Index (RMPI; White, Labouvie, & Papadaratsakis, 2005). Adapted from the Rutgers Alcohol Problem Index, the RMPI contains 23 items that examine consequences associated with the use of marijuana (e.g., went to work high, neglected responsibilities) on a 0-3 scale. Data from a past study (Simons et al., 1998) indicate that the RMPI is internally consistent (α = .86). We found a similar Cronbach's alpha for scores on this measure in the current study (α = .84).

Marijuana use measure. Created for use in this study, interview questions assessed marijuana use frequency (e.g., number of
days used in the last month, number of times used per day), primary method of ingestion (smoking via pipe, joint, etc., or oral use), history of marijuana use, and reasons for any medical prescription usage (legal in the state of Colorado) during the baseline assessment.

Timeline followback (TLFB; Sobell & Sobell, 1996). A TLFB calendar was completed at the 2-week follow-up to examine recall of the daily number of marijuana use instances over the last 30 days. As a reliability check, we compared total daily marijuana use instances on this measure to EMA reports of marijuana instances for consistency.

Brief Self-Control Scale (BCS; Tangney, Baumeister, & Boone, 2004). This 13-item measure examines individual differences in self-control and has been shown to be associated with higher GPA and better psychological adjustment (Tangney et al., 2004). The items for this measure (e.g., "I do certain things that are bad for me, if they are fun") are endorsed on a 5-point scale ranging from 1 (Not like me at all) to 5 (Very much like me). Cronbach's alpha was .73 for scores on this measure in the current study.

Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, 1991). The MSLQ includes 15 subscales and is designed to examine student motivational orientation and learning strategies. For the purposes of this study, we examined the Metacognitive Self-Regulation subscale (MSLQ-SR; α = .79), which consists of 12 items rated on a 7-point scale ranging from 1 (Not at all true of me) to 7 (Very true of me). Items focus on students' planning, monitoring, and regulating of academic behaviors in a particular course. We adapted the questions to focus on all coursework instead of just one course (α = .75 for scores on this measure in the current study).

Data Analysis

Compliance focused on the proportion of participant responses to the 42 text messages sent over the 2-week protocol. For each participant, the total number of texts sent, the total number of texts received, and the time between text sent and text received were recorded overall, by time of day, by day of the week, by day of the EMA study (i.e., Day 1 through 14), and by week of the EMA study (i.e., Week 1 or 2). Time to respond focused on the length of time it took participants to respond to all nine questions once they received the signal-contingent prompt; the questions were answered in a block (i.e., not item by item). In order to establish general responsiveness, we counted all responses that were received before the next prompt was sent to participants. For all analyses, SAS Version 9.3 was used.

EMA response time. We report descriptive statistics (means, standard deviations, medians) to summarize response times. Mean response time was modeled using correlated log-linear regression with age, gender, MSLQ-SR, BCS, RMPI total, and CUDIT-R, while adjusting for week of participation in the study (i.e., Week 1 or 2) and day of the week. A Poisson relationship was applied to account for the skewness in response time, and generalized estimating equations (GEE) were used to estimate parameters. A second log-linear model was fit, using average response time (per participant) as the response and aggregated EMA predictors (total time spent studying, total time spent smoking, average craving, and average academic motivation), to examine whether aggregated EMA data showed any association with response time at the participant level.
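All analyses in the study were run in SAS Version 9.3. Purely as an illustration of the prompt-level response-time model described above (a Poisson log-linear model estimated with GEE and an exchangeable working correlation), a roughly equivalent specification in Python with statsmodels might look like the following; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per prompt, with response time in
# whole minutes and the baseline covariates repeated within participant.
df = pd.read_csv("ema_long.csv")

# Correlated log-linear (Poisson) model for mean response time, estimated
# with GEE and an exchangeable working correlation to account for repeated
# observations within participants; day of week enters as dummy codes with
# Monday as the reference level, and week of participation is categorical.
model = smf.gee(
    "response_min ~ age + gender + mslq_sr + bcs + rmpi_total + cudit_r"
    " + C(week) + C(day_of_week, Treatment(reference='Mon'))",
    groups="participant_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Poisson(),
)
result = model.fit()
print(result.summary())  # Wald z tests use robust (empirical) SEs by default
```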


EMA compliance/response rate and data missingness. We report descriptive statistics (means, percentages) to summarize response rates. To model the probability of nonresponse, a hierarchical logistic regression model was fit to assess the significance of the probability of missingness across age, gender, MSLQ-SR, BCS, RMPI total, CUDIT-R, day of the week, and week of participation in the study, similar to the model of Courvoisier et al. (2012). An additional normal error term was included at the participant level to account for repeated observation of individuals. A second model was fit, using total number of responses (per participant) as the response and aggregated EMA predictors: total time spent studying, total time spent smoking, average craving, and average academic motivation. For this model a Poisson distribution was applied to account for the skewness in the number of responses (illustrative versions of both models are sketched below).

EMA versus TLFB reports of marijuana use. A comparison of EMA and TLFB reports of marijuana use was made using descriptive statistics such as mean EMA total usage, mean TLFB total usage, and the mean difference (TLFB - EMA). The correlation between the totals was calculated, and the distribution of the difference in total reported values was investigated.
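Again as an illustration only (the original models were fit in SAS), the two missingness models described above could be approximated in Python with statsmodels as sketched here: a logistic model with a normal participant-level random effect for the probability of a missed prompt, and a participant-level Poisson log-linear model for total responses. The file and column names are assumptions, and the variational Bayes fit used below is only an approximation to the hierarchical logistic regression reported in the article.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

prompts = pd.read_csv("prompts_long.csv")   # one row per prompt; missed = 1 if no reply
persons = pd.read_csv("participants.csv")   # one row per participant with EMA aggregates

# Logistic model for the probability of a missed prompt with a normal
# participant-level random effect (the "additional normal error term"
# described above), fit by variational Bayes as an approximation.
miss_model = BinomialBayesMixedGLM.from_formula(
    "missed ~ age + gender + mslq_sr + bcs + rmpi_total + cudit_r"
    " + C(week) + C(day_of_week, Treatment(reference='Mon'))",
    {"participant": "0 + C(participant_id)"},
    data=prompts,
)
miss_fit = miss_model.fit_vb()
print(miss_fit.summary())

# Participant-level Poisson log-linear model for the total number of
# responses, using the aggregated EMA predictors.
count_fit = smf.glm(
    "n_responses ~ total_min_studying + total_min_smoking"
    " + mean_craving + mean_motivation",
    data=persons,
    family=sm.families.Poisson(),
).fit()
print(count_fit.summary())
```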

Results

Substance Use

Based on baseline interview data, participants were active marijuana users, with 53% reporting use of marijuana at least once daily (34% of the total sample reported using more than once per day). At baseline, participants self-reported using marijuana an average of 25.43 days (SD = 5.42) out of the last 30 days, with an average use of two times daily (M = 2.22, SD = 1.22). Similarly, TLFB data collected at the 2-week follow-up indicated that the average number of marijuana use instances was 2.19 per day. Mean age of first marijuana use was 15.29 years (SD = 0.25), with average age of weekly use at 17.42 (SD = 1.78). In terms of DSM-IV diagnoses, 30 individuals (64%) met criteria for cannabis dependence, and 10 (21%) met criteria for cannabis abuse.

All participants reported smoking marijuana, with most (62%) using a small pipe as their primary method to smoke. An additional 21% reported smoking primarily with a bong, water pipe, or bubbler, and 6% reported smoking marijuana as a joint or blunt. Two participants reported holding a current prescription for medical marijuana, which is legal in the state of Colorado. Both of these participants reported having a valid prescription due to pain but also noted that their use was recreational.

EMA Response Time

Due to the nature of EMA and the goal of gathering data "in the moment," mean and median response times were calculated (see Tables 1 and 2). During their baseline appointment, participants were instructed to respond as quickly as possible to the text messages from the study. The overall mean response time was 44.41 min (Mdn = 11 min). Average response times by day of the week ranged from 39.65 min for Saturday to 49.24 min for Tuesday (Mdns ranged from 7.5 min on Monday to 16.50 min on Saturday). In general, the fastest median responses were received on Monday and Friday; the slowest median responses were received on Saturday and Sunday. Saturday showed the lowest mean response time but the largest median response time, indicating fewer unusually long response times but a greater number of moderately long response times.

Table 1
Response Rates and Response Times (in Minutes) by Day of Data Collection

Day in study    Response rate    Mean    Minimum    Median    Maximum
Day 1           89.05            46.19   0          13.0      578
Day 2           95.68            34.50   0          7.0       266
Day 3           91.49            50.59   0          11.0      970
Day 4           91.43            47.88   1          18.0      528
Day 5           92.14            44.50   1          10.0      383
Day 6           85.93            45.27   0          8.0       710
Day 7           86.33            36.50   1          9.0       234
Day 8           87.77            37.97   0          16.0      289
Day 9           87.68            40.67   0          14.0      344
Day 10          86.33            50.78   1          17.5      450
Day 11          87.94            47.15   0          11.0      416
Day 12          91.18            60.91   0          19.0      786
Day 13          85.51            43.19   0          10.5      413
Day 14          85.00            35.45   0          14.0      211
Overall         88.83            44.41   0          11.0      970

Note. Response rate is a percentage; mean, minimum, median, and maximum are response times in minutes.

We gathered data on response time throughout the study to examine trends. Mean response time was 43.60 min (Mdn = 10.00 min) for the first week of the study and 45.24 min (Mdn = 14.00 min) for the second week. Mean response times across the 14 days of data collection ranged from 34.50 min for Day 2 to 60.91 min for Day 12 (Mdns from 7.00 min on Day 2 to 19.00 min on Day 12). Response time by time of day varied, with a mean time of 65.17 min for the morning texts (Mdn = 26.5 min), 33.51 min for the afternoon texts (Mdn = 8.00 min), and 34.27 min for the evening texts (Mdn = 7.00 min). The percentage of responses received within 5, 15, 30, 60, and 120 min of the signal-contingent prompt was calculated (38.1%, 53.9%, 64.7%, 76.8%, and 88.1%, respectively). A small number of participants (0.8%) responded more than 4 hr after receiving the prompt.

A correlated log-linear regression model was fit using gender, age, CUDIT-R, BCS, RMPI total, and MSLQ-SR, adjusted for day of the week and week of participation in the study, to predict mean response time. Day of the week was included using dummy variables for all days except Monday, the reference day. GEE was applied using the exchangeable working correlation structure to account for the repeated observation of participants, and a Poisson mean-variance relationship was applied to account for the skewness in response times. In this model, Wald statistics using the empirical standard error showed significance for the MSLQ-SR (p = .031, z = 2.08), with higher values of MSLQ-SR associated with higher response times, and for age (p = .012, z = -2.53), with older participants demonstrating lower response times. However, when the one participant over age 22 was removed from the analysis, age no longer showed significance (p = .128, z = -1.52), suggesting this participant represents an influential observation. No other variables showed significance.

A second log-linear model was fit at the participant level, using average response time per participant as the response and including as predictors total time spent studying, total time spent smoking, average craving, and average academic motivation. All variables showed significance (p < .001), χ2(1) > 190. An increase in total minutes smoking was associated with higher response times, while increases in total minutes studying, average motivation, and average craving were all associated with lower response times. The estimated effect of craving leading to lower response times may be attributed to the collinearity that exists between craving, motivation, and minutes smoking. However, variance inflation factors for these independent variables all remained less than 1.5, suggesting the associations among independent variables are not affecting inferences in the model.

EMA Compliance/Response Rate and Data Missingness

Over the 14-day EMA period, a total of 1,942 texts were sent out to the 47 participants, and 1,725 responses were received, resulting in an overall response rate of 88.83%. Response rates were examined throughout the course of the EMA study. Close to one fifth of participants (19.15%) responded to every text message prompt sent, while almost half (48.94%) missed two or fewer. When examining response rate across the 14-day period, compliance ranged from 85.00% to 95.68% (see Table 1), with the highest response rate occurring on Day 2 of the 14-day EMA period and the lowest response rate occurring on Day 14. Average response rate over the first week of the EMA trial was 90.32%, while over the second week it was 87.33%.

We calculated the response rate by day of the week (see Table 2) and time of day to investigate further trends in EMA response. Participants began the EMA protocol on different days of the week. Wednesday yielded the highest response rate, at 91.54%, and Friday the lowest, at 85.93%. The weekend response rate (Saturday and Sunday) was lower, at 87.57%, compared to the weekday response rate (89.34%). When examining participant responses by time of day, we found rates of 88.30% for the morning texts, 88.55% for the afternoon texts, and 89.67% for the evening texts.

In addition to the descriptive compliance rates reported by week, day of data collection, day of the week, and time of day, a hierarchical logistic regression model was fit to assess the significance of the probability of missingness across gender, age, CUDIT-R, BCS, RMPI total, MSLQ-SR, day of the week, and week of the study, similar to the model of Courvoisier et al. (2012). An additional normal error term was included at the participant level to account for repeated observation of individuals. Day of the week was included using dummy variables for all days except Monday, the reference day.

Table 2
Response Rates and Response Times (in Minutes) by Day of the Week

Day of the week    Response rate    Mean    Minimum    Median    Maximum
Monday             91.40            41.27   0          7.5       272
Tuesday            89.38            49.24   0          12.5      970
Wednesday          91.54            46.29   0          10.0      710
Thursday           90.48            43.46   0          12.0      786
Friday             85.93            41.58   0          8.0       578
Saturday           88.38            39.65   0          16.5      272
Sunday             86.74            47.14   0          15.5      411

Note. Response rate is a percentage; mean, minimum, median, and maximum are response times in minutes.

According to this model, none of the baseline measures were associated with a significant change in the probability of missingness. The week of participation (i.e., first or second week) showed evidence of significance (p = .033), t(1507) = -2.14, with a higher probability of missingness during the second week of the study. Day of the week did not show significance. A second model was fit, using total number of responses (per participant) as the response and including as predictors total time spent studying, total time spent smoking, average craving, and average academic motivation. A Poisson log-linear regression model was applied due to the skewed count. No variables showed significance at the .05 level. Thus, the data were determined to be missing at random.

EMA versus TLFB Reports of Marijuana Use

We compared EMA data with TLFB data to determine whether participants reported a similar number of marijuana use occurrences. The average number of marijuana use instances was calculated for each of the 14 days of EMA (totaled by day) and then compared to TLFB daily totals reported at the follow-up for the same 14-day period that overlapped with the EMA period. When examining whether participants reported uniformity across each day, only one participant reported the same number of marijuana instances every day on the TLFB. However, no participants reported uniform daily marijuana consumption based on the 14-day EMA data.

Several analyses were conducted to investigate trends in participant reports of the number of daily marijuana use occurrences. First, we examined the total marijuana use instances for the TLFB and EMA reported across the entire 14-day period. Total EMA values over the 14 days ranged from 7 to 127, with an overall average of 35.25 (SD = 22.59) occurrences across participants. Total TLFB values over the same 14 days ranged from 5 to 98, with an overall average of 30.61 (SD = 20.62) occurrences across participants. The Pearson correlation between the totals was significant (r = .851, p < .01, n = 39). Next we calculated the percentage agreement between the TLFB and EMA responses for the 630 direct daily comparisons. A total of 180 responses were exact matches (28.57%). Most of the inconsistencies (65.8%) showed that participants reported smoking less on the TLFB. Interestingly, 26% of the reported occurrences on the TLFB differed from those on the EMA by three or more smoking occurrences, while approximately 6% differed by five or more. We examined days on which participants reported no marijuana use on the TLFB and found that these reports did not match EMA reports of marijuana use 16% of the time.

We also examined the difference between TLFB totals and EMA totals (TLFB - EMA) for each participant. These differences ranged from -53 (underestimated on the TLFB) to 22 (overestimated on the TLFB), with an average of -8.66 (SD = 12.30). This mean difference was significantly different from zero (p < .01), t(39) = -4.39. The total values matched for only two participants; in 29 cases the TLFB total was smaller, while in eight cases the EMA total was smaller. Overall, our findings suggest that participants reported less marijuana use through the TLFB compared to the EMA.
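For readers who want to reproduce this style of EMA-TLFB comparison on their own data, a minimal sketch is given below. It computes 14-day totals, their Pearson correlation, the mean TLFB - EMA difference with a paired t test, and the proportion of exact daily matches; the data layout, file name, and column names are assumptions, not the study's actual files.

```python
import pandas as pd
from scipy import stats

# Hypothetical daily-level data: one row per participant-day with the number
# of use occasions reported via EMA and via the TLFB for the same day.
daily = pd.read_csv("daily_use.csv")  # columns: participant_id, day, ema, tlfb

totals = daily.groupby("participant_id")[["ema", "tlfb"]].sum()
diff = totals["tlfb"] - totals["ema"]          # negative = fewer occasions on TLFB

r, p_r = stats.pearsonr(totals["ema"], totals["tlfb"])   # agreement of 14-day totals
t, p_t = stats.ttest_rel(totals["tlfb"], totals["ema"])  # is the mean difference zero?
exact = (daily["ema"] == daily["tlfb"]).mean()           # share of exact daily matches

print(f"r = {r:.3f} (p = {p_r:.3f}); mean TLFB-EMA diff = {diff.mean():.2f}")
print(f"paired t = {t:.2f} (p = {p_t:.3f}); exact daily agreement = {exact:.1%}")
```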


We also examined the relationship between the number of days elapsed between the end of the EMA and the completion of the TLFB at the follow-up and the size of the difference between the TLFB and the EMA. The Pearson correlation showed no significance (r = .02, p = .88, n = 39), and the scatterplot showed little trend. This lack of association suggests that, as the number of elapsed days grows, the differences between EMA and TLFB values do not follow a discernible trend; that is, there was little association between the time elapsed and the size of the TLFB-EMA difference.

Discussion

Based on our review of the literature, this is the first study to use SMS texting as a form of EMA exclusively with heavy marijuana users. Our data suggest that text messaging with university marijuana users can result in high compliance, with few noted differences. Overall, we found that compliance averaged almost 89%, with a range of 85% to 96% across the 2-week EMA period. Prior EMA studies using PDAs with adolescent and young adult marijuana users have reported compliance rates from 62% to 80% (e.g., Buckner et al., 2011; Serre et al., 2012; Shrier et al., 2012). Compared to these studies, we had an overall higher compliance rate, but all three of these studies had more extensive signal prompting, and some limited the time period in which participants could respond. Shrier et al. (2012) and Serre et al. (2012) had four to six prompts daily for 2 weeks, and Buckner et al. (2011) used six daily semi-random prompts, end-of-day assessments, and event-contingent assessments (i.e., each time participants were about to use marijuana during the 2-week period). For random prompts, Serre and colleagues allowed a 20-min window for participant responses, while Shrier et al. allowed up to 15 min. Buckner et al. requested that participants complete assessments within 1 hr, though it is unclear whether participants were able to respond after the 1-hr window. In comparison, we did not limit response time, and we counted all responses returned before the next prompt was sent (similar to Berkman et al., 2011), which is a limitation when considering the overall response rate. We aimed to gain a better understanding of participants' overall responsiveness to EMA text messaging and did not impose a cutoff time for a response. Over 75% of all responses were received within 1 hr of the prompt, and the median response time over the 2-week assessment was 11 min.

Similar to Courvoisier et al. (2012), we collected 42 time points and found that our response rates dipped toward the end of the study, with the last 2 days being the lowest at an 85% rate. Unlike Courvoisier et al., the 42 time points we collected were spread over a 14-day period (three times per day) instead of a 7-day period. Other studies have prompted participants over longer time spans (e.g., 8 weeks; Collins et al., 1998), with as many as eight prompts per day (Berkman et al., 2011; Freedman et al., 2006). When examining predictors of the probability of data missingness, we found evidence of a higher propensity for missingness for Week 2 of the study compared to Week 1. Gender, age, marijuana-related consequences or problems, self-control, academic self-regulation, and day of the week did not impact the probability of missingness. There is likely a relationship between the number of prompts and compliance rates, and finding a balance that does not overwhelm participants is key. Further research is needed to explore the burden threshold for EMA data and the potential slide in compliance toward the end of EMA studies.


It is possible that all EMA studies have a dip in response rate toward the end of the study, but this needs further exploration. Our findings on compliance were likely influenced by our sample and participants' comfort with the technology. Most college students own a cell phone, carry their phone with them at all times, and text message (Diamanduros, Jenkins, & Downs, 2007; Nielsen Mobile, 2013; Raacke & Bonds-Raacke, 2011). Using participants' personal cell phones likely had some benefit in our study, as participants did not have to carry an additional device, making the protocol more convenient. Considering the expense to researchers associated with cell phones, PDAs, or other computer devices (e.g., tablets), this was a benefit that made it possible to complete the work at minimal cost. However, this does limit researchers to only those participants who own a cell phone with text-messaging capabilities, which impacts generalizability when considering broader community samples.

It is important to consider ethical implications associated with text messaging on sensitive topics (e.g., drug use), as participants may not be as careful about keeping their personal phone out of view from others. We asked participants to keep their messages private and informed them of the possible violation of confidentiality if they shared their personal study information with others. Although we did not receive any concerns about privacy from participants, this is an important issue that should be considered for future studies. For example, participants could be instructed on how to set up security protections (e.g., password lock, fingerprint reader) on their cell phones. When considering sensitive topics, such as illicit substance use, careful consideration should be placed on how to word text messages. We did not reference marijuana in our text prompts, and we asked participants to refrain from using the term or other slang references in their responses.

Due to the skewness of the mean data (based on several outliers), we also examined median response times over the 2-week period. Descriptive data showed that participants responded in a timely fashion and that the fastest median responses were received on Mondays and Fridays. Median values ranged from 7.0 to 19.0 min across all days of the study. Median response time was longest on Saturdays even though Saturday had the shortest mean response time, which indicated fewer extremes on this day of the week. It is not clear why participants responded more slowly on certain days than on others. One might speculate that participants were busier on days when they responded less quickly, but the patterns we found do not indicate a specific trend. Participants responded more quickly at the beginning of the 2-week period compared to the end, but this difference was not significant. Descriptively, we found that participants responded less quickly in the morning compared to the afternoon and evening time points.

When examining whether any baseline variables predicted overall response time, the MSLQ-SR and age were found to be significant predictors. However, when one outlier (i.e., the 33-year-old participant) was excluded from the analysis, age was no longer a significant predictor of response time. We expected that participants scoring high on a measure of self-regulation for learning (MSLQ-SR) might respond more quickly to EMA prompts.
We found the opposite: participants who reported greater levels of regulating their learning tended to respond more slowly to the prompts. It is possible that these students were simply prioritizing other school activities (e.g., studying) over our study. Past research has found

that self-regulated learners tend to have greater behavioral control when it comes to minimizing distractions for their learning when compared to their peers (Wolter & Taylor, 2012). Thus, participants with greater self-regulated learning might have been better at minimizing distractions in their environment (e.g., text messages) for the sake of their learning. This finding would need to be explored in greater depth.

When examining EMA predictors of response time at the participant level, an increase in the total minutes smoked was found to be associated with a higher response time. It is possible that greater time spent smoking marijuana may distract participants or contribute to less focus on the research study. Similarly, more time spent studying and greater academic motivation and craving were all associated with lower response times. The finding that greater craving was associated with lower response times did not make intuitive sense, as one would expect craving to disrupt cognitive focus and increase response time. A potential explanation for this finding is that multicollinearity between these EMA variables contributed to this result. Multicollinearity does not reduce the predictive power of the overall model, but it makes the estimate for any specific variable difficult to interpret due to statistical noise.

Our comparison between EMA and TLFB reports of daily marijuana use instances found that, although total reports of the number of times participants used marijuana were correlated, the daily occurrences were not well aligned. Only 29% of the instances matched exactly, and many instances were off by three or more occurrences. Our data suggest that TLFB reports tend to indicate less marijuana usage than do EMA reports. Our findings differ slightly from those of Shiffman (2009b) and Griffith and colleagues (2009), which compared TLFB counts of cigarette use with EMA reports among participants enrolled in smoking cessation treatment. In those studies (comparing EMA and TLFB data), cigarette counts were greater through TLFB compared to EMA. It is possible that cigarette recall differs from marijuana use recall. In addition, our participants were active marijuana users not enrolled in treatment who smoked primarily through a small pipe (vs. a joint). Treatment-seeking populations may have greater motivation to report less substance use when completing the TLFB with a researcher or clinic staff member. Despite these differences, our study did similarly find that TLFB and EMA are not equivalent. Past research has shown that the TLFB is reliable and valid (Fals-Stewart, O'Farrell, Freitas, McFarlin, & Rutigliano, 2000; Hjorthøj, Hjorthøj, & Nordentoft, 2012), but when resources allow, EMA may produce better data for examining daily occurrences (Shiffman, 2009b).

Limitations

This study has limitations. The time period of the EMA was limited to 2 weeks so as not to overburden participants, but longer periods (e.g., 3 weeks) may have provided richer data. Participants were texted only during waking hours and not while in class, which could have impacted compliance. We did not send reminder prompts when participants did not respond to a text. We also did not limit the time considered for a response. Generally, median values suggested participants responded quickly, but future studies may want to consider a cutoff number of minutes (e.g., 30 or 60 min) for data to be considered "in the moment."


The sample was small and focused exclusively on college student marijuana users who were compensated for their participation. University students as a whole are more highly educated and often have greater financial support. The campus we recruited from has 36% first-generation students, with 22% of the student population identified as both first-generation and low income (i.e., based on Pell grant eligibility). We limited participation to heavy marijuana users, and our participants reported using marijuana almost daily.

Even though there are significant benefits to using SMS texting for momentary data, there are also several drawbacks. SMS text messages typically have a capacity of 160 characters, though this often depends on the cell phone carrier and changes in technological capacity. For our study, this meant that we had to use shorthand for our questions and limit the number of questions asked. Though we provided participants with a laminated business card with the full and shorthand version of the questions, it is possible that participants may not have carried the card with them consistently or could have forgotten what the shorthand meant. Second, because SMS texts have to be kept brief, it becomes more difficult to ask several questions to address a particular construct. With PDAs, there is increased space to ask a range of questions. Also, questions can be formatted with skip patterns when particular questions do not apply.

When considering comparisons between EMA and TLFB, we acknowledge the limitation of not being able to quantify marijuana use with a biological measure (e.g., a more sensitive urine screen) to compare degree of marijuana use to the EMA and the TLFB. However, Shiffman (2009b) used a biochemical measure to assess cigarette smoking and found that carbon monoxide readings were associated with EMA cigarette consumption but not TLFB. In the current study, both EMA and TLFB measures were based on self-report. We encourage the use of biological measures in future studies to better determine the validity of either method.

Conclusion

Overall, our findings support the feasibility of using SMS texting as a form of EMA with college-age marijuana users. We found little to predict the missingness of our data (i.e., similar to Courvoisier et al., 2012), which is encouraging for future EMA studies. Future research could benefit from utilizing EMA to gain a better understanding of various factors related to marijuana use.

References

Adamson, S. J., Kay-Lambkin, F. J., Baker, A. L., Lewin, T. J., Thornton, L., Kelly, B. J., & Sellman, J. D. (2010). An improved brief measure of cannabis misuse: The Cannabis Use Disorders Identification Test-Revised (CUDIT-R). Drug and Alcohol Dependence, 110, 137-143. doi:10.1016/j.drugalcdep.2010.02.017
Altman, D. G., & Bland, J. M. (1983). Measurement in medicine: The analysis of method comparison studies. Statistician, 32, 307-317. doi:10.2307/2987937
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
Beck, A. T., Epstein, N., Brown, G., & Steer, R. A. (1988). An inventory for measuring clinical anxiety: Psychometric properties. Journal of Consulting and Clinical Psychology, 56, 893-897. doi:10.1037/0022-006X.56.6.893
Beck, A. T., Ward, C., Mendelson, M., Mock, J., & Erbaugh, J. (1961). An inventory for measuring depression. Archives of General Psychiatry, 4, 561-571.


Berkman, E. T., Dickenson, J., Falk, E. B., & Lieberman, M. D. (2011). Using SMS text messaging to assess moderators of smoking reduction: Validating a new tool for ecological measurement of health behaviors. Health Psychology, 30, 186-194. doi:10.1037/a0022201
Broderick, J. E., Schwartz, J. E., Shiffman, S., Hufford, M. R., & Stone, A. A. (2003). Signaling does not adequately improve diary compliance. Annals of Behavioral Medicine, 26, 139-148. doi:10.1207/S15324796ABM2602_06
Buckner, J. D., Crosby, R. D., Silgado, J., Wonderlich, S. A., & Schmidt, N. B. (2012). Immediate antecedents of marijuana use: An analysis from ecological momentary assessment. Journal of Behavior Therapy and Experimental Psychiatry, 43, 647-655. doi:10.1016/j.jbtep.2011.09.010
Buckner, J. D., Zvolensky, M. J., Smits, J. A. J., Norton, P. J., Crosby, R. D., Wonderlich, S. A., & Schmidt, N. B. (2011). Anxiety sensitivity and marijuana use: An analysis from ecological momentary assessment. Depression and Anxiety, 28, 420-426. doi:10.1002/da.20816
Collins, R. L., Morsheimer, E. T., Shiffman, S., Paty, J. A., Gnys, M., & Papandonatos, G. (1998). Ecological momentary assessment in a behavioral drinking moderation program. Experimental and Clinical Psychopharmacology, 6, 306-315. doi:10.1037/1064-1297.6.3.306
Courvoisier, D. S., Eid, M., & Lischetzke, T. (2012). Compliance to a cell phone-based ecological momentary assessment study: The effect of time and personality characteristics. Psychological Assessment, 24, 713-720. doi:10.1037/a0026733
Diamanduros, T., Jenkins, S. J., & Downs, E. (2007). Analysis of technology ownership and selective use among undergraduates. College Student Journal, 41, 970-976.
Epstein, D. H., Marrone, G. F., Heishman, S. J., Schmittner, J., & Preston, K. L. (2010a). Tobacco, cocaine, and heroin: Craving and use during daily life. Addictive Behaviors, 35, 318-324. doi:10.1016/j.addbeh.2009.11.003
Epstein, D. H., & Preston, K. L. (2010b). Daily life hour by hour with and without cocaine: An ecological momentary assessment study. Psychopharmacology, 211, 223-232. doi:10.1007/s00213-010-1884-x
Epstein, D. H., Willner-Reid, J., Vahabzadeh, M., Mezghanni, M., Lin, J.-L., & Preston, K. L. (2009). Real-time electronic-diary reports of cue exposure and mood in the hours before cocaine and heroin craving and use. Archives of General Psychiatry, 66, 88-94. doi:10.1001/archgenpsychiatry.2008.509
Fals-Stewart, W., O'Farrell, T. J., Freitas, T. T., McFarlin, S. K., & Rutigliano, P. (2000). The timeline followback reports of psychoactive substance use by drug-abusing patients: Psychometric properties. Journal of Consulting and Clinical Psychology, 68, 134-144. doi:10.1037/0022-006X.68.1.134
Ferguson, S. G., & Shiffman, S. (2011). Using the methods of ecological momentary assessment in substance dependence research—Smoking cessation as a case study. Substance Use & Misuse, 46, 87-95. doi:10.3109/10826084.2011.521399
First, M. B., Spitzer, R. L., Gibbon, M., & Williams, J. B. W. (2002). Structured Clinical Interview for DSM-IV-TR Axis I Disorders, Research Version, Patient Edition (SCID-I/P). New York, NY: Biometrics Research, New York State Psychiatric Institute.
Freedman, M. J., Lester, K. M., McNamara, C., Milby, J. B., & Schumacher, J. E. (2006). Cell phones for ecological momentary assessment with cocaine-addicted homeless patients in treatment. Journal of Substance Abuse Treatment, 30, 105-111. doi:10.1016/j.jsat.2005.10.005
Griffith, S. D., Shiffman, S., & Heitjan, D. F. (2009). A method comparison study of timeline followback and ecological momentary assessment of daily cigarette consumption. Nicotine & Tobacco Research, 11, 1368-1373. doi:10.1093/ntr/ntp150
Hjorthøj, C. R., Hjorthøj, A. R., & Nordentoft, M. (2012). Validity of timeline follow-back for self-reported use of cannabis and other illicit substances—Systematic review and meta-analysis. Addictive Behaviors, 37, 225-233. doi:10.1016/j.addbeh.2011.11.025


Holt, L. L., Litt, M. D., & Cooney, N. L. (2012). Prospective analysis of early lapse to drinking and smoking among individuals in concurrent alcohol and tobacco treatment. Psychology of Addictive Behaviors, 26, 561-572. doi:10.1037/a0026039
Hopper, J. W., Su, Z., Looby, A. R., Ryan, E. T., Penetar, D. M., Palmer, C. M., & Lukas, S. E. (2006). Incidence and patterns of polydrug use and craving for ecstasy in regular ecstasy users: An ecological momentary assessment study. Drug and Alcohol Dependence, 85, 221-235. doi:10.1016/j.drugalcdep.2006.04.012
Hufford, M. R., Shields, A. L., Shiffman, S., Paty, J., & Balabanis, M. (2002). Reactivity to ecological momentary assessment: An example using undergraduate problem drinkers. Psychology of Addictive Behaviors, 16, 205-211. doi:10.1037/0893-164X.16.3.205
Johnson, E. I., Barrault, M., Nadeau, L., & Swendsen, J. (2009). Feasibility and validity of computerized ambulatory monitoring in drug-dependent women. Drug and Alcohol Dependence, 99, 322-326. doi:10.1016/j.drugalcdep.2008.06.010
Kuntsche, E., & Robert, B. (2009). Short message service (SMS) technology in alcohol research—A feasibility study. Alcohol and Alcoholism, 44, 423-428. doi:10.1093/alcalc/agp033
Litt, M. D., Cooney, N. L., & Morse, P. (1998). Ecological momentary assessment (EMA) with treated alcoholics: Methodological problems and potential solutions. Health Psychology, 17, 48-52. doi:10.1037/0278-6133.17.1.48
Lukasiewicz, M., Fareng, M., Benyamina, A., Blecha, L., Reynaud, M., & Falissard, B. (2007). Ecological momentary assessment in addiction. Expert Review of Neurotherapeutics, 7, 939-950. doi:10.1586/14737175.7.8.939
McLellan, A. T., Kushner, H., Metzger, D., Peters, R., Smith, I., Grissom, G., . . . Argeriou, M. (1992). The fifth edition of the Addiction Severity Index. Journal of Substance Abuse Treatment, 9, 199-213. doi:10.1016/0740-5472(92)90062-S
Nielsen Mobile. (2013). The mobile consumer: A global snapshot. New York, NY: Nielsen Wire.
Piasecki, T. M., Jahng, S., Wood, P. K., Robertson, B. M., Epler, A. J., Cronk, N. J., . . . Sher, K. J. (2011). The subjective effects of alcohol-tobacco co-use: An ecological momentary assessment investigation. Journal of Abnormal Psychology, 120, 557-571. doi:10.1037/a0023033
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor, MI: National Center for Research to Improve Post-Secondary Teaching.
Preston, K. L., Vahabzadeh, M., Schmittner, J., Lin, J.-L., Gorelick, D. A., & Epstein, D. H. (2009). Cocaine craving and use during daily life. Psychopharmacology, 207, 291-301. doi:10.1007/s00213-009-1655-8
Raacke, J., & Bonds-Raacke, J. (2011). An investigation of the dimensions of SMS communication use by college students. Individual Differences Research, 9, 210-218.
Serre, F., Fatseas, M., Debrabant, R., Alexandre, J. M., & Swendsen, J. (2012). Ecological momentary assessment in alcohol, tobacco, cannabis and opiate dependence: A comparison of feasibility and validity. Drug and Alcohol Dependence, 126, 118-123. doi:10.1016/j.drugalcdep.2012.04.025
Shiffman, S. (2009a). Ecological momentary assessment (EMA) in studies of substance use. Psychological Assessment, 21, 486-497. doi:10.1037/a0017074

Shiffman, S. (2009b). How many cigarettes did you smoke? Assessing cigarette consumption by global report, time-line follow-back, and ecological momentary assessment. Health Psychology, 28, 519-526. doi:10.1037/a0015197
Shiffman, S., Paty, J. A., Gnys, M., Kassel, J. D., & Hickcox, M. (1996). First lapses to smoking: Within-subjects analyses of real-time reports. Journal of Consulting and Clinical Psychology, 64, 366-379. doi:10.1037/0022-006X.64.2.366
Shiffman, S., Stone, A. A., & Hufford, M. (2008). Ecological momentary assessment. Annual Review of Clinical Psychology, 4, 1-32. doi:10.1146/annurev.clinpsy.3.022806.091415
Shrier, L. A., Walls, C. E., Kendall, A. D., & Blood, E. A. (2012). The context of desire to use marijuana: Momentary assessment of young people who frequently use marijuana. Psychology of Addictive Behaviors, 26, 821-829. doi:10.1037/a0029197
Simons, J., Correia, C. J., Carey, K. B., & Borsari, B. E. (1998). Validating a five-factor marijuana motives measure: Relations with use, problems, and alcohol motives. Journal of Counseling Psychology, 45, 265-273. doi:10.1037/0022-0167.45.3.265
Smyth, J. M., & Stone, A. A. (2003). Ecological momentary assessment research in behavioral medicine. Journal of Happiness Studies, 4, 35-52. doi:10.1023/A:1023657221954
Sobell, L. C., & Sobell, M. B. (1996). Timeline followback user's guide: A calendar method for assessing alcohol and drug use. Toronto, Ontario, Canada: Addiction Research Foundation.
Stone, A. A., Shiffman, S., Schwartz, J. E., Broderick, J. E., & Hufford, M. R. (2002). Patient non-compliance with paper diaries. British Medical Journal, 324, 1193-1194. doi:10.1136/bmj.324.7347.1193
Stone, A. A., Shiffman, S., Schwartz, J. E., Broderick, J. E., & Hufford, M. R. (2003). Patient compliance with paper and electronic diaries. Controlled Clinical Trials, 24, 182-199. doi:10.1016/S0197-2456(02)00320-3
Substance Abuse and Mental Health Services Administration. (2012). Results from the 2011 National Survey on Drug Use and Health: Summary of national findings (NSDUH Series H-44, HHS Publication No. (SMA) 12-4713). Rockville, MD: Author.
Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72, 271-324. doi:10.1111/j.0022-3506.2004.00263.x
Tournier, M., Sorbara, F., Gindre, C., Swendsen, J. D., & Verdoux, H. (2003). Cannabis use and anxiety in daily life: A naturalistic investigation in a non-clinical population. Psychiatry Research, 118, 1-8. doi:10.1016/S0165-1781(03)00052-0
Trull, T. J., & Ebner-Priemer, U. W. (2009). Using experience sampling methods/ecological momentary assessment (ESM/EMA) in clinical assessment and clinical research: Introduction to the special section. Psychological Assessment, 21, 457-462. doi:10.1037/a0017653
White, H. R., Labouvie, E. W., & Papadaratsakis, V. (2005). Changes in substance use during the transition to adulthood: A comparison of college students and their noncollege age peers. Journal of Drug Issues, 35, 281-306. doi:10.1177/002204260503500204
Wolters, C. A., & Taylor, D. J. (2012). A self-regulated learning perspective on student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement (pp. 635-652). New York, NY: Springer.


Appendix

Ecological Momentary Assessment Questions in Full Format and Text-Message Abbreviation

1. What was the MAIN activity you were doing when we texted you?
Text message abbreviation: Doing now?

2. When was the last time you smoked? (day/time)
Text message abbreviation: Day/time last smokd?

3. How many times have you smoked since we last contacted you?
Text message abbreviation: # times smokd since last txt?

4. When you smoked LAST, were you ALONE or with OTHERS?
Text message abbreviation: Alone/others?

5. How many classes have you missed today? (answer "99" if no class today or if class is later)
Text message abbreviation: # classes missd?

6. Since the last time we texted you, estimate how much time (in minutes) you spent doing school work (e.g., reading, writing papers, or other homework assignments)?
Text message abbreviation: Mins spnt since last text on sch work?

7. Since the last time we texted you, estimate how much time (in minutes) you spent smoking?
Text message abbreviation: Mins spnt since last txt smokng?

8. Please rate your current craving or desire to smoke at this exact moment on a scale of 1-10, with 1 being "no cravings" and 10 being "extremely intense cravings."
Text message abbreviation: Craving 1(none) - 10(high)

9. How motivated do you currently feel to focus on school work? Rate your motivation on a scale of 1-10, with 1 being "not at all" and 10 being "extremely motivated."
Text message abbreviation: Motiv sch wrk 1(none) - 10(high)
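For readers implementing a similar protocol, one possible way to store a completed prompt is sketched below. The field names, types, and treatment of question 5's "99" code are editorial assumptions rather than the authors' data schema.

```python
# Editorial sketch of how one completed prompt could be stored; field names and the
# use of None for question 5's "99" code are assumptions, not the authors' schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class EmaResponse:
    prompt_time: datetime
    main_activity: str             # Q1: Doing now?
    last_smoked: str               # Q2: Day/time last smokd?
    times_smoked_since: int        # Q3: # times smokd since last txt?
    alone_or_others: str           # Q4: Alone/others?
    classes_missed: Optional[int]  # Q5: # classes missd? (None when "99" was texted)
    minutes_schoolwork: int        # Q6: Mins spnt since last text on sch work?
    minutes_smoking: int           # Q7: Mins spnt since last txt smokng?
    craving: int                   # Q8: 1 (none) to 10 (high)
    motivation: int                # Q9: 1 (none) to 10 (high)


# Hypothetical example record:
record = EmaResponse(
    prompt_time=datetime(2013, 4, 1, 10, 0),
    main_activity="studying",
    last_smoked="Mon 9pm",
    times_smoked_since=1,
    alone_or_others="others",
    classes_missed=None,
    minutes_schoolwork=45,
    minutes_smoking=10,
    craving=4,
    motivation=7,
)
print(record)
```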

Received June 6, 2013
Revision received February 14, 2014
Accepted March 4, 2014

