Proffered Paper

Development, Construction, and Content Validation of a Questionnaire to Test Mobile Shower Commode Usability

Emma L. Friesen, B Engineering (MfgSys)(Hons), B Business (Mktg), M ProfEdTrain (WorkVocEdTrain),1,2,3 Deborah G. Theodoros, BSpThyHons, PhD,1,2 and Trevor G. Russell, B Physiotherapy, PhD1,2

1School of Health and Rehabilitation Sciences, The University of Queensland, St. Lucia, Australia; 2Centre for Research Excellence in Telehealth, The University of Queensland, St. Lucia, Australia; 3NSW Paediatric Spinal Outreach Service, Northcott, North Parramatta, Australia


Background: Usability is an emerging domain of outcomes measurement in assistive technology provision. Currently, no questionnaires exist to test the usability of mobile shower commodes (MSCs) used by adults with spinal cord injury (SCI). Objective: To describe the development, construction, and initial content validation of an electronic questionnaire to test mobile shower commode usability for this population. Methods: The questionnaire was constructed using a mixed-methods approach in 5 phases: determining user preferences for the questionnaire’s format, developing an item bank of usability indicators from the literature and the judgement of experts, constructing a preliminary questionnaire, assessing content validity with a panel of experts, and constructing the final questionnaire. Results: The electronic Mobile Shower Commode Assessment Tool Version 1.0 (eMAST 1.0) questionnaire tests MSC features and performance during activities identified using a mixed-methods approach and in consultation with users. It confirms that usability is complex and multidimensional. The final questionnaire contains 25 questions in 3 sections. The eMAST 1.0 demonstrates excellent content validity as determined by a small sample of expert clinicians. Conclusion: The eMAST 1.0 tests usability of MSCs from the perspective of adults with SCI and may be used to solicit feedback during MSC design, assessment, prescription, and ongoing use. Further studies assessing the eMAST’s psychometric properties, including studies with users of MSCs, are needed. Key words: assistive technology device, AT, mobile shower commodes, outcome measures, telehealth, telerehabilitation, usability

Corresponding author: Ms Emma Friesen, PhD Candidate, PO Box 3015, Putney, NSW 2112, Australia; e-mail: emma.friesen@uqconnect.edu.au

Supplementary material: The online version of this article (doi: 10.1310/sci2101-77) contains an eAppendix.

Top Spinal Cord Inj Rehabil 2015;21(1):77–86. © 2015 Thomas Land Publishers, Inc. www.thomasland.com. doi: 10.1310/sci2101-77

In spinal cord injury (SCI) rehabilitation, mobile shower commodes (MSCs) are often prescribed to facilitate showering, bowel care, and mobility.1-7 Generally, an MSC is a waterproof chair on wheels, with a seat designed to allow hand access to the perianal region for digital stimulation and stool removal.4,6 Studies on MSCs for adults with SCI have identified concerns about MSC usability, safety, dissatisfaction, and nonuse.1,3,4,6,7 Despite this, few instruments exist to guide design, assessment, and prescription of MSCs for adults with SCI.2,4,5 Increasingly, researchers, policymakers, and funders of assistive technology (AT) such as MSCs are demanding evidence of outcomes achieved through provision.1,8-10 This includes obtaining user feedback during AT design and selection11-13 and after short- or long-term use.1,10,12,13 Some jurisdictions classify AT as Class 1 medical devices and require evidence of outcomes for device registration, product monitoring, and ongoing market surveillance.11,12,14,15

In recent years, usability has emerged as a distinct domain of outcomes in medical11,12 and AT8,10,16 device research. Conceptually, usability draws from the human factors and ergonomics literature, focusing on interactions between the user, the device or product, and the environment during performance of tasks or activities.8,10,12 The International Organisation for Standardisation (ISO) defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction, in a specified context of use” (ISO 9241-11:1998, definition 3.1).17 For AT devices, goals relate to the performance of activities when using the device.10 Effectiveness is the user’s perceived consistency in performing activities using the AT device, while efficiency is the perceived ease, comfort, speed, accuracy, and effort with which activities are performed using the AT device.10 Satisfaction with the AT device is derived from the effectiveness and efficiency with which the user can participate in activities.10 The usability of an AT device is therefore a function of the characteristics, functions, and features of the device, and of its performance across the user’s activity needs, abilities, skills, expectations, and contexts of use.10

Usability of AT is generally assessed through usability inspections and usability testing. Usability inspections are undertaken by expert nonusers of the product or device,12 such as clinicians (eg, occupational therapists, physiotherapists, and speech pathologists), rehabilitation engineers, AT designers, suppliers and manufacturers, and representatives of funding organizations (eg, insurance companies, not-for-profit charities, and government programs).15 They can occur, for example, during clinical assessments in AT device service delivery. Usability testing focuses on users of the device,12 such as a person with a disability or impairment and paid or unpaid caregivers.15 Questionnaires are often used to solicit feedback from the users’ point of view.12,14,16 Questionnaires are generally quick and cost-effective to administer11,12 and are useful where time and resources for follow-up are limited.1,3,8,18 They may be delivered via telerehabilitation and telehealth platforms4,6,18-20 to allow completion in the user’s home.1,3,20 Studies show that adults with SCI can accurately report use of AT,3 suggesting that complex and multidimensional aspects of MSC use can be captured in this way.
Researchers contend that questionnaires should be AT device-specific,8,9,11 validated for well-defined user groups,8,9,11 and developed with input from AT device consumers.9,11,12,21 Currently, no validated questionnaires exist to assess usability of MSCs for adults with SCI.4 The aim of this study is to develop and construct a questionnaire for assessing MSC usability for adults with SCI, based on the definition from ISO 9241-11:1998.17

Methods

The questionnaire was developed and constructed using a mixed-methods approach in 5 phases: (1) determining user preferences for the questionnaire’s format,12,21 (2) developing an item bank of usability indicators,22,23 (3) constructing a preliminary questionnaire,23 (4) assessing content validity,12,23 and (5) constructing the final questionnaire.23 The methods are described in terms of the study’s 5 phases.

Phase 1: Determine preferences for questionnaire format

Data regarding a possible format for a new questionnaire were collected as part of a larger qualitative interview study on the use, performance, and features of MSCs for adults with SCI.6 Full details of the study are reported elsewhere.6 Briefly, 7 adults with SCI and 8 occupational therapists (OTs) with expertise in SCI rehabilitation were recruited through advertisements in Australian SCI- and AT-related listservs.6 During the interviews, participants were asked to comment on 3 possible formats for a new questionnaire: (1) a list of MSC features that users rate on a scale of 1 to 5, (2) a list of MSC features from which users could choose and rate a smaller subset on a scale of 1 to 5, or (3) a questionnaire requiring users to specify goals or outcomes of MSC use across functional activities. Participants were also invited to suggest alternative formats for a questionnaire. Qualitative data were analyzed using the 6 stages of thematic qualitative analysis (familiarizing oneself with the data, generating initial codes, searching for themes, reviewing themes, defining and naming themes, and producing a report),24 using NVivo 9 (QSR International, Melbourne, Australia).

Phase 2: Develop an item bank

Two methods were employed to develop an item bank of usability indicators: a review of published research22,23 and expert judgement solicited as part of the interviews described in phase 1.22

Published research, including an exploratory literature review4 and a qualitative interview study6 conducted by the authors, and a clinical assessment instrument developed by an SCI rehabilitation service,2 was reviewed and scrutinized for possible usability indicators. Data were extracted into an Excel spreadsheet (Microsoft Corporation, Redmond, WA).

Expert judgements were solicited during the interviews described in phase 1.6 Participants were considered experts in MSCs based on their experience as either users of MSCs for 3 or more years or as expert clinicians working in SCI rehabilitation.6 Participants were asked to rate features of ideal MSCs identified in published studies,4,6 using a scale of 1 (not important) to 5 (very important). Quantitative data from a descriptive study in which adults with SCI and caregivers rated features of ideal MSCs were also extracted into the Excel spreadsheet.25

After removing duplicates and combining similar items, possible usability indicators were sorted according to the 2 components of AT device usability: characteristics, functions, and features of MSCs; and MSC performance in activities and contexts of use.10,16

Table 1. Recruitment strategies and inclusion criteria for phase 4

Expert clinicians in rehabilitation for adults with SCI

Recruitment strategies:
• E-mail invitation sent to the ARATA listserv
• Advertisement in Accord, the journal of Spinal Cord Injuries Australia
• E-mail invitation sent to the Australian Spinal OT listserv
• Direct approach to experts who posted MSC-related advice to the Australian Spinal OT listserv

Inclusion criteria:
• Five (5) or more years of experience prescribing AT for adults with SCI
• No restrictions on clinical discipline

Notes: ARATA = Australian Rehabilitation and Assistive Technology Association; MSC = mobile shower commode; OT = occupational therapy; SCI = spinal cord injury.

Phase 3: Construct the preliminary questionnaire

A preliminary questionnaire was drafted using qualitative data from phase 1, the item bank developed in phase 2, and consultation between the authors. Consideration was given to balancing the importance of usability indicators against the overall length of the questionnaire and the expected time for completion.8,12,21

Phase 4: Establish content validity

Content validity is defined as the degree to which items in an instrument adequately reflect the domain of content being measured.23 Five content experts are considered adequate for assessing content validity when developing clinical instruments.26 A panel of clinicians with expertise in SCI rehabilitation was recruited as content experts to assess the questionnaire’s content validity.16,23 Content experts were identified using the recruitment strategies and inclusion criteria shown in Table 1. Content experts were given a copy of the preliminary questionnaire and a statement regarding the questionnaire’s development.23 A standardized content evaluation questionnaire was administered via SurveyMonkey (www.surveymonkey.com). The content evaluation questionnaire, developed by Arthanat et al,16 comprised questions assessing the validity of items across the domains of comprehensiveness, item clarity, ease of administration, and scale clarity, and 6 questions on overall validity. All items were measured on a 5-point semantic differential scale from 1 = strongly disagree to 5 = strongly agree. Opportunities for written comments were also provided.16

Content validity indices (CVIs) were calculated to measure the proportion of agreement amongst the content experts using a 3-step process. First,

item response ratings in the standardized content evaluation questionnaire were converted to the content index Likert scale as follows: 1 = strongly disagree = highly invalid; 2 = invalid; 3 = somewhat valid; 4 = valid; 5 = strongly agree = highly valid.16,26 Next, the proportion of responses at each rating on the content index Likert scale was calculated for all items. Finally, the proportions of responses rated either valid or highly valid for each item were summed to give the CVIs.16,26 Written comments were grouped into 3 categories for review: content, format, and general.16

Phase 5: Construct the final questionnaire

Using results from phase 4, we constructed the final questionnaire.

Statement of ethics

A university medical research ethics committee approved the study protocol. All requirements concerning ethical use of human volunteers outlined by the National Health and Medical Research Council in Australia and the university were followed during the course of this research. All participants gave written informed consent.

Table 2. Participant characteristics for interviews (phases 1 and 2), extracted from Friesen et al6

Adults with SCI
Pseudonym | Age, years | Gender | Self-reported SCI level | Time post SCI, years | Location/s
AS 1 | 25 | Female | T9 | 8 | Regional city, capital city
AS 2 | 59 | Male | T5 | 6 | Regional city
AS 3 | 49 | Male | T5 | 4 | Capital city
AS 4 | 49 | Male | C3-4 | 30 | Capital city
AS 5 | 32 | Female | C5 | 3 | Capital city
AS 6 | 37 | Male | C2 | 8 | Regional city
AS 7 | 42 | Female | T8 | 27 | Regional city

Expert clinicians
Pseudonym | Clinical qualification | Clinical experience, years | Type of employment | Location
EC 1 | OT | 15 | Private practice | Capital city
EC 2 | OT | 5 | Specialist SCI inpatient service | Capital city
EC 3 | OT | 12 | Specialist SCI inpatient service | Capital city
EC 4 | OT | 5 | Specialist community service | Regional city
EC 5 | OT | 9 | Specialist community service | Regional city
EC 6 | OT | 5 | Specialist community service | Capital city
EC 7 | OT | 5 | Specialist community service | Capital city
EC 8 | OT | 7 | Specialist community service | Capital city

Note: AS = adult with SCI; EC = expert clinician; OT = occupational therapist.

Results

A new questionnaire testing MSC usability was constructed through the 5 phases of the study: establishing user preferences for the questionnaire’s format, identifying usability indicators from the literature and expert judgements, constructing the draft questionnaire, assessing the questionnaire’s content validity, and finalizing the questionnaire. The results are described in terms of these 5 phases.

Phase 1: Determining preferences for questionnaire format

Characteristics of participants in phase 1 interviews are shown in Table 2. Qualitative analysis found that all participants supported development of a questionnaire assessing MSC usability. However, participants reported mixed preferences regarding the questionnaire’s format. Only one participant (an adult with SCI) supported rating every item on a list of all available MSC features. Rating a user-selected subset of features initially appealed to 87.5% (n = 7) of clinicians and 85.7% (n = 6) of adults with SCI. Similarly, generating user-determined goals was initially supported by 70% of participants. Qualitative analysis of participant comments then yielded 4 themes regarding the format: (1) lists act as prompts, (2) priorities change, (3) goals and outcomes cause confusion, and (4) support for a different format.

Lists act as prompts

Most participants expressed reluctance to rate a long list of MSC features that were either unnecessary for the activities being performed or not relevant to their needs. Twelve participants (80%) indicated that a shorter list of features could be useful for prompting discussions between users and clinicians during MSC assessments. All expert clinicians suggested that a list of important MSC features might be useful for less-experienced clinicians engaged in assessments with experienced MSC users. Similarly, adults with SCI felt that lists might prompt discussion with clinicians on new features or problems with current MSC performance.

Priorities change

Most participants initially supported selecting a subset of features from a longer list. However, 5 adults with SCI and 7 expert clinicians described how priorities for MSC features and performance changed as MSCs were assessed and discussed. These changes could be triggered by incompatibilities between MSC features, or by the realization that an MSC feature facilitating performance of one activity negatively affected performance of another. For example, tilt-in-space facilitated postural positioning in MSCs but added weight (making propelling and manoeuvring more difficult) and was incompatible with folding frames (affecting portability). Adults with SCI described how the MSC features and performance needed for the home environment may differ from those needed when travelling, requiring compromises on some features. Implementing a user-selected subset of features therefore appeared complex in practice.

Goals and outcomes cause confusion

When asked about goal- or outcomes-based questionnaires, all adults with SCI initially expressed goals and outcomes in terms of the MSC features they deemed critical. When prompted further, they outlined the performance of specific functional activities that these features would facilitate. For example, one participant identified tilt-in-space as a critical goal or outcome for his new MSC, and then explained that this feature was necessary to facilitate positioning. Clinicians tended to express goals and outcomes initially in terms of the functional activities being undertaken. After further prompting, they described the specific MSC features that facilitated these activities. These responses suggested that constructing the questionnaire to focus on goals or outcomes could cause confusion between adults with SCI and clinicians.

Support for a different format

None of the 3 proposed formats gained support from the majority of participants. Most participants (n = 12; 80%) wanted another questionnaire format that included both MSC features and MSC performance during completion of activities.

Phase 2: Developing an item bank

The review of research yielded 131 possible indicators of MSC usability. Interview transcripts with MSC experts were scrutinized and quantitative data on MSC features were extracted. Data were ranked according to the mean rating of importance given by participants (Table 3). Comparative data from a published study were also extracted into the same table.25 Seat shape, seat cushioning, and back support cushioning were reported as separate items.4,6,25 A further 13 possible indicators were generated through these expert judgements.

Indicators were reviewed to remove duplicates and combine like items. For example, “seat design - cut out,” “seat design - full opening,” and “seat design - bite” were combined into a single item, “seat shape.” The final item bank of possible usability indicators included 19 MSC features and 23 MSC performance items.

Table 3. Mean ratings and ranking for MSC features extracted during phase 2

MSC feature | Mean rating of importance | Mean rating in Nelson et al25
Seat cushioning | 4.93 | 4.4a
Hand access to genitals and the perianal area | 4.93 | 4.2
Seat shapeb | 4.85 | NA
Ease of rolling, turning, & propelling | 4.80 | 4.3
Reliability of brakes | 4.80 | 4.2
Ease of brake activation | 4.60 | 4.0
Easy to clean | 4.60 | NA
Portable, transportable (folding or collapsible frame) | 4.21 | NA
Postural supports | 4.00 | NA
Ease of moving and removing armrests | 3.93 | 3.9
Weight of MSC | 3.93 | 3.8
Adjustability of back support height | 3.67 | 3.7
Back support cushioning | 3.67 | 4.4a
Tilt-in-space | 3.57 | NA
Addition of fitted pan underneath | 3.43 | 3.2
Addition of safety straps and belts | 3.21 | 3.0
Ease of moving and removing footrests | 3.13 | 3.6
Appearance of MSC | 3.08 | NA
Adjustability of back support angle (recline) | 3.07 | 3.2
Addition of anti-tippers | 3.00 | NA

Note: MSC = mobile shower commode; NA = not assessed.
a Cushioning for the seat and back support was ranked as a single item by Nelson et al (1993)25 but later considered as separate features by Nelson et al (2000).7
b Seat shape was not assessed by Nelson et al (1993)25 but was reported as part of a later study.7

Phase 3: Constructing the preliminary questionnaire

After reviewing data from phases 1 and 2, we made decisions about the questionnaire’s content and format to construct the electronic Mobile Shower Commode Assessment Tool (eMAST-d). Possible usability indicators extracted from only one data source in phase 2 were excluded from consideration. Indicators that appeared as MSC features but also affected MSC performance were selected for inclusion after discussion between the authors. Since qualitative data from phase 1 supported a format comprising both MSC features and MSC performance, the eMAST-d was constructed containing 25 questions in 3 sections. Section 1 contained 9 MSC features, rated on a 5-point Likert scale from 1 (very unhappy) to 5 (very happy). Section 2 covered 12 items concerning MSC performance, rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). Section 3 recorded the age in years of the MSC frame and seat and provided space for 3 positive and 3 negative aspects of the current MSC to capture specific user requirements. Because assessing unknown physical environments (such as hotel rooms) could be difficult, the context of use was restricted to the home environment.

Phase 4: Establishing content validity

Five expert clinicians with an average of 10.6 years of clinical experience (range, 5-15 years) were recruited (Table 4). Data from the standardized content evaluation questionnaire were extracted and converted into the content index Likert scale (Table 5). CVIs, that is, the proportions of content validity experts rating items as valid or highly valid, were then calculated. CVIs were 100% for all but one item (eMAST Section 2, MSC performance - comprehensiveness of domain/construct) (Table 5).
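The 3-step CVI calculation used here can be sketched in a few lines of Python. This is a minimal illustration of the method only; the ratings shown are hypothetical values for an illustrative panel of 5 experts, not the study's data.

```python
# Sketch of the 3-step CVI calculation described in phase 4.
# Step 1: responses on the 5-point agreement scale map directly onto the
# content index Likert scale (1 = highly invalid ... 5 = highly valid).
LABELS = {1: "highly invalid", 2: "invalid", 3: "somewhat valid",
          4: "valid", 5: "highly valid"}

def cvi(ratings):
    """CVI = proportion of experts rating an item valid (4) or highly valid (5)."""
    n = len(ratings)
    # Step 2: proportion of responses at each rating on the content index scale.
    proportions = {r: ratings.count(r) / n for r in LABELS}
    # Step 3: sum the proportions rated valid or highly valid.
    return proportions[4] + proportions[5]

# Hypothetical ratings for one item from a panel of 5 experts
# (four rated it valid or highly valid, one somewhat valid).
item_ratings = [4, 5, 5, 4, 3]
print(round(cvi(item_ratings) * 100))  # prints 80, i.e. a CVI of 80%
```

An item rated valid or highly valid by all 5 experts would score a CVI of 100%, matching the pattern reported in Table 5.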




Table 4. Characteristics of content validity experts recruited for phase 4

Participant | Clinical qualification | Clinical experience, years | Type of employment | Geographical areas of practice
CVE 1 | OT | 15 | Specialist SCI community service | Statewide
CVE 2 | OT | 5 | Specialist SCI community service | Statewide
CVE 3 | OT | 10 | Specialist SCI inpatient service | Capital city
CVE 4 | OT | 12 | Specialist SCI nongovernment community and outreach service | Statewide
CVE 5 | OT | 11 | Private practice | Capital city & regional centers

Note: CVE = content validity expert; OT = occupational therapist; SCI = spinal cord injury.

One content expert provided written comments, and 2 provided verbal comments to the first author. Three comments related to the wording of items. Two suggested including brakes in Section 1, and one suggested removing the item on replacing the seat.

Phase 5: Constructing the final questionnaire

As the content validity established in phase 4 was high to very high, minimal changes were made to the questionnaire. An item on brakes was added to Section 1, and the item on ease of seat replacement was removed. Wording on showering and cleaning the body was altered to eliminate confusion with cleaning the MSC. Personal information, including name and date of birth, was also included. The final version, named the electronic Mobile Shower Commode Assessment Tool Version 1.0 (eMAST 1.0), is provided in the eAppendix.

Discussion

This study reports on the development of a questionnaire testing MSC usability. The questionnaire, named the eMAST 1.0, is the first to measure MSC usability for adults with SCI using the definition from ISO 9241-11:1998.4,17 The eMAST 1.0 is AT device-specific8,9,11 and was developed using a standardized approach12,23 in consultation with users, as recommended.9-12,21 The eMAST 1.0 builds on earlier studies using human factors and ergonomics approaches to design MSCs for adults with SCI that address issues of safety, dissatisfaction, and nonuse.7,25 The qualitative findings in phase 1 agree with emerging literature suggesting that the goals and outcomes associated with AT prescription and use may be understood differently by clinicians and AT users9,21,27 and that outcomes involve multiple constructs.1,9,10,21

The eMAST 1.0 assesses MSC features and performance during activities, identified through literature reviews and the judgement of experts.2,4,6,7,25 It is consistent with conceptualizations of usability as it relates to AT10,16 and demonstrates that AT usability is both complex and multidimensional.10,16,27 By focusing on interactions between the user and the MSC, and on characteristics of the MSC itself, the eMAST 1.0 may discriminate between MSC designs, enabling comparisons during design, assessment, and prescription.11,12,27 The eMAST 1.0 complements the usability inspection approaches evident in existing design and clinical assessment instruments.2,4,5 The eMAST 1.0 may also be administered after short- or long-term MSC use to determine if changes are needed.1,10,12 Soliciting user feedback at all stages of service delivery is associated with reduced user dissatisfaction and nonuse of AT over time.13 The eMAST 1.0 may be useful for demonstrating outcomes of MSC provision,8,9,11 for generating evidence required as part of medical device regulations,11,12,14 and in future MSC research.8,9,11,12

As an electronic questionnaire, the eMAST 1.0 may be administered using telerehabilitation and telehealth platforms.6,18-20,28 This may encourage greater uptake in clinical settings where home visits cannot be undertaken, such as where clinicians deliver services across large geographical areas.1,3,8,18,19,28

Table 5. Calculation of content validity indices (CVIs) for each item in the standardized content evaluation questionnaire

Average scores are on the content index Likert scale; proportions of responses are shown as % (n), n = 5.

Questionnaire item | Average score | Highly invalid | Invalid | Somewhat valid | Valid | Highly valid | CVIa

eMAST-d Section 1: MSC features
Comprehensiveness of domain/construct | 4.6 | 0 (0) | 0 (0) | 0 (0) | 40 (2) | 60 (3) | 100 (5)
Clarity of item wording | 4.6 | 0 (0) | 0 (0) | 0 (0) | 40 (2) | 60 (3) | 100 (5)
Ease of administration | 4.6 | 0 (0) | 0 (0) | 0 (0) | 40 (2) | 60 (3) | 100 (5)
Appropriateness of scale | 4.6 | 0 (0) | 0 (0) | 0 (0) | 40 (2) | 60 (3) | 100 (5)

eMAST-d Section 2: MSC performance
Comprehensiveness of domain/construct | 4.2 | 0 (0) | 0 (0) | 20 (1) | 40 (2) | 40 (2) | 80 (4)
Clarity of item wording | 4.6 | 0 (0) | 0 (0) | 0 (0) | 40 (2) | 60 (3) | 100 (5)
Ease of administration | 4.8 | 0 (0) | 0 (0) | 0 (0) | 20 (1) | 80 (4) | 100 (5)
Appropriateness of scale | 4.6 | 0 (0) | 0 (0) | 0 (0) | 40 (2) | 60 (3) | 100 (5)

eMAST-d Section 3: Final questionsb
Comprehensiveness of domain/construct | 4.2 | 0 (0) | 0 (0) | 0 (0) | 80 (4) | 20 (1) | 100 (5)
Clarity of item wording | 4.2 | 0 (0) | 0 (0) | 0 (0) | 80 (4) | 20 (1) | 100 (5)
Ease of administration | 4.2 | 0 (0) | 0 (0) | 0 (0) | 80 (4) | 20 (1) | 100 (5)

Overall content validity of eMAST-d
Objective (The eMAST-d measures MSC usability effectively.) | 4.0 | 0 (0) | 0 (0) | 0 (0) | 100 (5) | 0 (0) | 100 (5)
Relevance (The eMAST-d will be useful for MSC research.) | 4.0 | 0 (0) | 0 (0) | 0 (0) | 100 (5) | 0 (0) | 100 (5)
Administration (The eMAST-d can be administered in a reasonable amount of time.) | 4.2 | 0 (0) | 0 (0) | 0 (0) | 80 (4) | 20 (1) | 100 (5)
Usefulness for MSC selection | 4.0 | 0 (0) | 0 (0) | 0 (0) | 100 (5) | 0 (0) | 100 (5)
Usefulness for measuring MSC effectiveness | 4.0 | 0 (0) | 0 (0) | 0 (0) | 100 (5) | 0 (0) | 100 (5)
Usefulness in MSC outcomes research | 4.0 | 0 (0) | 0 (0) | 0 (0) | 100 (5) | 0 (0) | 100 (5)

Note: eMAST-d = electronic Mobile Shower Commode Assessment Tool; MSC = mobile shower commode.
a CVIs were calculated by combining the proportions of content index Likert scale responses rated either valid or highly valid.
b Section 3 of the eMAST-d does not use a scale. “Appropriateness of scale” was therefore not assessed.



Limitations

The study had 4 main limitations. First, evidence on MSC use and performance resides mostly in the grey literature,4 and sources of usability indicators may have inadvertently been missed. Second, the sample size for interviews in phase 1 was small, and participants may not represent the full range of views across all adults with SCI and expert clinicians. Third, the eMAST 1.0 focuses on MSC use in the home and may not reflect usability across physical environments in rental or travel accommodation.6 Finally, the standardized methodology utilized recommended recruiting a panel of experts consisting of expert clinicians but not users.16,23 Further psychometric evaluation with MSC users (adults with SCI) is needed before widespread adoption can be recommended.23

Conclusions

The eMAST 1.0 measures usability of MSCs from the perspective of adults with SCI. It combines MSC features and performance in use to test MSC usability and has demonstrated excellent content validity. The eMAST 1.0 may be useful for clinical assessments of MSCs after short- or long-term use, for gathering evidence required by medical device regulations, and in future research on MSCs for this population. Studies assessing the questionnaire’s psychometric properties are needed.

Acknowledgments

Conflicts of interest: The research is being conducted by Ms. Emma Friesen as part of a PhD program under the supervision of Associate Professor Trevor Russell and Professor Deborah Theodoros. There are no further declarations of conflicts of interest concerning this research.

Financial support/disclosures: The research is sponsored by the University of Queensland School of Health and Rehabilitation Sciences and has not received external funding.

Statement of ethics: The study protocol was approved by the University of Queensland Medical Research Ethics Committee. All ethical requirements outlined by the National Health and Medical Research Council in Australia and the University of Queensland were followed during the course of this research. All participants gave written informed consent.

REFERENCES 1. Harvey LA, Chu J, Bowden JL, et al. How much equipment is prescribed for people with spinal cord injury in Australia, do they use it and are they satisfied 1 year later? Spinal Cord. 2012;50(9):676-681. 2. Spinal Outreach Team. Queensland Spinal Cord Injury Service, ed. Mobile Shower Commode (MSC) assessment & prescription tool for therapists. Brisbane: The State of Queensland (Queensland Health); 2013. http://www.health.qld.gov.au/qscis/documents/ msc-assess.pdf. Accessed February 15, 2014. 3. Biering-Sørensen T, Hansen RB, Biering-Sørensen F. Home aids and personal assistance 10-45 years after spinal cord injury. Spinal Cord. 2009;47(5):405412. 4. Friesen E, Theodoros D, Russell T. Clinical assessment, design and performance testing of mobile shower commodes for adults with spinal cord injury: An exploratory review. Disabil Rehabil Assist Technol. 2013;8(4):267-274. 5. Ford S, Keay A, Skipper D, eds. Occupational therapy interventions for adults with a spinal cord injury – An overview. Chatswood, NSW: Agency for Clinical Innovation; 2014. http://www.aci.health.nsw. gov.au/__data/assets/pdf_file/0004/155191/

Occupational-Therapy-Interventions.pdf. Accessed October 24, 2014. 6. Friesen E, Theodoros D, Russell T. Use, performance and features of mobile shower commodes: Perspectives of adults with spinal cord injury and expert clinicians [published online ahead of print]. Disabil Rehabil Assist Technol. 2013. 7. Nelson A, Malassigné P, Cors MW, Amerson TL. Promoting safe use of equipment for neurogenic bowel management. SCI Nurs. 2000;17(3):119-124. 8. Lenker JA, Scherer MJ, Fuhrer MJ, Jutai JW, DeRuyter F. Psychometric and administrative properties of measures used in assistive technology device outcomes research. Assist Technol. 2005;17(1):7-22. 9. Lenker JA, Harris F, Taugher M, Smith RO. Consumer perspectives on assistive technology outcomes. Disabil Rehabil Assist Technol. 2013;8(5):373-380. 10. Arthanat S, Bauer SM, Lenker JA, Nochajski SM, Wu YW. Conceptualization and measurement of assistive technology usability. Disabil Rehabil Assist Technol. 2007;2(4):235-248. 11. Bridgelal Ram M, Grocott PR, Weir HCM. Issues and challenges of involving users in medical device development. Health Expect. 2008;11(1):63-71.


Topics in Spinal Cord Injury Rehabilitation/Winter 2015

12. Berg Rice VJ. Human factors in medical rehabilitation equipment: Product development and usability testing. In: Jacobs K, ed. Ergonomics for Therapists. St. Louis, MO: Elsevier Mosby; 2008:151-172.
13. Martin JK, Martin LG, Stumbo NJ, Morrill JH. The impact of consumer involvement on satisfaction with and use of assistive technology. Disabil Rehabil Assist Technol. 2011;6(3):225-242.
14. Magnier C, Thomann G, Villeneuve F, Zwolinski P. Methods for designing assistive devices extracted from 16 case studies in the literature. Int J Interact Des Manuf. 2012;2(2):93-100.
15. Cooper RA. Introduction. In: Cooper RA, Ohnabe H, Hobson DA, eds. An Introduction to Rehabilitation Engineering. Boca Raton, FL: Taylor & Francis; 2007:1-18.
16. Arthanat S, Wu YWB, Bauer SM, Lenker JA, Nochajski SM. Development of the Usability Scale for Assistive Technology-Wheeled Mobility: A preliminary psychometric evaluation. Technol Disabil. 2009;21(3):79-95.
17. International Organization for Standardization. ISO 9241-11: Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) – Part 11: Guidance on Usability. Geneva: Author; 1998.
18. Middleton JW, McCormick M, Engel S, et al. Issues and challenges for development of a sustainable service model for people with spinal cord injury living in rural regions. Arch Phys Med Rehabil. 2008;89(10):1941-1947.
19. Hauber RP, Michael LJ, Ma AJT, Vesmarovich S, Victoria LP. Extending the continuum of care after spinal cord injury through telerehabilitation. Top Spinal Cord Inj Rehabil. 1999;5(3):11-20.

20. Hoffmann T, Cantoni N. Occupational therapy services for adult neurological clients in Queensland and therapists’ use of telehealth to provide services. Aust Occup Ther J. 2008;55(4):239-248.
21. Mortenson WB, Miller WC, Miller-Pogar J. Measuring wheelchair intervention outcomes: Development of the Wheelchair Outcome Measure. Disabil Rehabil Assist Technol. 2007;2(5):275-285.
22. Crocker LM, Algina J. Introduction to Classical and Modern Test Theory. New York: Holt, Rinehart, and Winston; 1986.
23. Portney LG, Watkins MP. Foundations of Clinical Research: Applications to Practice. Upper Saddle River, NJ: Pearson/Prentice Hall; 2009.
24. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101.
25. Nelson A, Malassigné P, Amerson T, Saltzstein R, Binard J. Descriptive study of bowel care practices and equipment in spinal cord injury. SCI Nurs. 1993;10(2):65-67.
26. Wynd CA, Schmidt B, Schaefer MA. Two quantitative approaches for estimating content validity. West J Nurs Res. 2003;25(5):508-518.
27. Gil-Agudo A, Solis-Mozos M, Del-Ama AJ, Crespo-Ruiz B, de la Pena-Gonzalez AI, Perez-Nombela S. Comparative ergonomic assessment of manual wheelchairs by paraplegic users. Disabil Rehabil Assist Technol. 2013;8(4):305-313.
28. Cox RJ, Amsters DI, Pershouse KJ. The need for a multidisciplinary outreach service for people with spinal cord injury living in the community. Clin Rehabil. 2001;15(6):600-606.
