
A comparison of two smartphone applications and the validation of smartphone applications as tools for fluid calculation for burns resuscitation

R. Morris a,*, M. Javed a, O. Bodger b, S. Hemington Gorse a, D. Williams a

a The Welsh Centre for Burns and Plastic Surgery, Morriston Hospital, Swansea SA6 6NL, United Kingdom
b School of Medicine, Swansea University, Swansea SA2 8PP, United Kingdom

Article info

Article history: Accepted 18 October 2013

Keywords: iPhone; Mersey Burns; uBurn; Parkland formula; Smartphone; Apps

Abstract

We conducted a randomised, blinded study to compare the accuracy and perceived usability of two smartphone apps (uBurn© and Mersey Burns©) and a general purpose electronic calculator for calculating fluid requirements using the Parkland formula. Bespoke software randomly generated simulated clinical data; randomly allocated the sequence of calculation methods; recorded participants' responses and response times; and calculated error magnitude. Participants calculated fluid requirements for nine scenarios (three for each of the calculator, uBurn© and Mersey Burns©), then rated ease of use (VAS) and preference (ranking), and made written comments. Data were analysed using ANOVA and qualitative methods. The sample population consisted of 34 volunteers who performed a total of 306 calculations. The three methods showed no significant difference in the incidence or magnitude of errors. Mean (SD) response time in seconds for the calculator was 86.7 (50.7), compared with 71.7 (42.9) for uBurn© and 69.0 (35.6) for Mersey Burns©; both apps were significantly faster than the calculator (p = 0.013 and p = 0.017 respectively; ANOVA with Tukey's HSD test). All methods showed a learning effect (p < 0.001). Participants rated ease of use on a VAS, with a higher score indicating greater ease of use. The calculator was easiest to use, with a mean (SD) score of 12.3 (2.1), followed by Mersey Burns© with 11.8 (2.7) and then uBurn© with 11.3 (2.7); these differences were not significant at the p = 0.05 level using paired samples t-tests with a manually applied correction for multiple comparisons. Preference ranking followed a similar trend, with mean rankings (SD) of 1.85 (0.17), 1.94 (0.74) and 2.18 (0.90) for the calculator, Mersey Burns© and uBurn© respectively; again, none of these differences were significant at the p = 0.05 level.

© 2013 Elsevier Ltd and ISBI. All rights reserved.

1. Introduction

Fluid resuscitation remains a critical and challenging step in the initial management of major burn injury [1,2]. Numerous formulae have been described for intravenous fluid resuscitation; the most widely used is the "Parkland formula" developed by Baxter et al. [3]. This formula specifies a total volume of resuscitation fluid, infused over 24 h, of 3–4 millilitres per kilogram of body weight per percentage of total body surface area burned. Since the introduction of the Parkland formula in 1968, several authors have described various methods for calculating fluid requirements using this formula [4–10].
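By way of illustration, the arithmetic can be expressed in a few lines of Python (the language later used for the study's bespoke software). This is a minimal sketch assuming the commonly used 4 ml/kg/%TBSA variant, with half the volume given in the first 8 h from the time of burn and the remainder over the next 16 h; it is not the code of either app.

def parkland_rates(weight_kg, tbsa_pct, hours_since_burn=0.0, ml_per_kg_pct=4.0):
    """Illustrative Parkland calculation (assumes the 4 ml/kg/%TBSA variant).

    Returns the 24 h total volume (ml) and the infusion rates (ml/h) for the
    first period (the initial 8 h from the time of burn) and the second
    period (the subsequent 16 h).
    """
    total_ml = ml_per_kg_pct * weight_kg * tbsa_pct
    half = total_ml / 2.0
    # The first half is due within 8 h of the burn itself, so any delay
    # before starting resuscitation shortens the time available.
    hours_left = 8.0 - hours_since_burn
    first_rate = half / hours_left if hours_left > 0 else None
    second_rate = half / 16.0
    return total_ml, first_rate, second_rate

# Example: a 70 kg patient with a 30% TBSA burn, resuscitation starting 2 h
# after injury: 8400 ml in total; 700 ml/h for the remaining 6 h of the
# first period, then 262.5 ml/h for the next 16 h.
print(parkland_rates(70, 30, hours_since_burn=2))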

* Corresponding author. Tel.: +44 07949450866. E-mail address: [email protected] (R. Morris).
0305-4179/$36.00 © 2013 Elsevier Ltd and ISBI. All rights reserved. http://dx.doi.org/10.1016/j.burns.2013.10.015


It is recognised that errors frequently occur in burn size estimation [11–16], and this will inevitably lead to inaccuracies in fluid resuscitation [16]. However, inaccuracies can also occur when applying the Parkland formula itself: one study showed that only 33% of surgeons and 17% of emergency medicine physicians were able to accurately calculate the initial fluid rate when using the Parkland formula from memory [8]. Another study of plastic surgery trainees, anaesthetists and burns nurse specialists showed that fluid resuscitation requirement calculations were correct in only 55% of cases when using the Parkland formula [17]. A recent study by Theron et al. attempted to quantify the magnitude of errors when using the Parkland formula, recording errors of magnitude 25%, 50% and 75% in 25%, 16.7% and 9.5% of manual calculations respectively, and in 17.9%, 14.3% and 8.3% of calculations when a general purpose electronic calculator was used [18].

The last decade has seen extensive development of smartphone technology and its evolving use in the healthcare sector [19]. This has been complemented by a steady increase in usage amongst doctors and medical students [20]. Furthermore, the development of medical software applications (apps) on these platforms has added a new dimension to the access and interpretation of medical knowledge. Apps have recently been developed for the calculation of fluid requirements following burns based on the Parkland formula. One such app, Mersey Burns©, has been approved by the Medicines and Healthcare products Regulatory Agency (MHRA) as a class I medical device in the United Kingdom [21]. However, there is little published literature validating these smartphone apps or comparing them with other methods of fluid calculation.

The aim of this study was to compare two existing smartphone apps, uBurn© and Mersey Burns© (Figs. 1 and 2), with a general purpose calculator for calculating intravenous fluid requirements using the Parkland formula, using the criteria of accuracy, response time and subjective ease of use.

2. Method

2.1. Ethics

The study did not require a formal ethical review and appropriate letters of exemption were acquired from our National Health Service Trust’s Research Ethics Committee and Research and Development office.

2.2. Design

We conducted an anonymised, randomised volunteer study at our Regional Burns Centre from November 2012 to February 2013. The study design was based on and informed by similar previous studies [18,22]. The uBurn© app (JAMB Innovations, London, UK), the Mersey Burns© app (St. Helens and Knowsley Teaching Hospitals NHS Trust) and the calculator method were assessed using a total of nine calculations per participant (i.e. three calculations with each method) over a 30–40 min period.

Fig. 1 – Screenshot of the uBurn© application.

The choice of nine calculations per participant was an acceptable compromise between collecting sufficient data and the potential bias due to participant fatigue. The null hypothesis was that "there is no difference in accuracy or speed of calculation when comparing the three methods". Based on data from a previous study [22] we found that the two most similar methods (nomogram and calculator) could be distinguished, in terms of error rates, with a sample size as low as 80 observations per method, requiring a total sample of (80 × 2)/9 = 17 participants. To identify a 10% difference in response time (considered a lower threshold for relevance) we required a sample size of n = 158 individual calculations, or (150 × 2)/9 = 33 participants. We therefore used this as a target for our sample size, with the expectation of being able to distinguish between all three methods in regard to both response time and error rate.

In total, 34 volunteers participated in the study, including trainee and consultant burns and plastic surgeons, anaesthetists and nursing staff. Individuals were not directly approached to participate. Recruitment involved sending emails to all doctors on the Burns and Plastic Surgery rota, all anaesthetists who cover the burns unit, and all senior nursing staff on the burns unit. Awareness of the study was also raised at interdepartmental meetings. None of the volunteers were offered financial incentives or benefits of any sort for participation; however, the educational value of experiencing new techniques in calculating burns fluid resuscitation was highlighted. The participants received full instruction on how to calculate the Parkland formula using each app and the calculator. They were given unlimited opportunity to practise, and only proceeded with the study once they felt confident with all three methods of calculation. Each participant performed a series of calculations based on computer-generated simulated clinical scenarios.

Fig. 2 – Screenshot of the Mersey Burns© application.

2.3. Accuracy and speed

For the purposes of the study, bespoke software was developed using Python, an open-source, cross-platform, object-oriented programming language widely used for scientific applications [23]. For each scenario, the software randomly generated values for body weight (kg), TBSA (total body surface area burned, %) and delay (time from burn to commencement of resuscitation, hours) within appropriate clinical ranges. Participants used

this information to calculate infusion rates for the first and second resuscitation periods, with the first period representing the initial 8 h following a burn, and the second period representing the subsequent 16 h. It was emphasised that the first resuscitation period commenced at the time of burn, rather than the time of hospital admission. Participants' responses were entered directly into the computer (Fig. 3). The software also incorporated an automatic timing routine, which recorded the time taken by each participant from being presented with each scenario to entering their answers. Participants were supervised throughout the process. To eliminate any potential bias due to learning effects or fatigue, the software randomised the order in which the three methods of calculation (uBurn©, Mersey Burns© and the calculator) were presented, whilst ensuring that each volunteer used each of the three methods exactly three times. The software logged all relevant information (simulated clinical data, method of calculation used, participant responses, and response time) to a spreadsheet (Excel, Microsoft, WA). The correct responses to each simulated clinical scenario were also calculated by the software and logged for subsequent analysis. Throughout the study the participants and investigators were blinded to the response times and the correct answers to the clinical scenarios.

Fig. 3 – Screenshot of a clinical scenario presented to a participant in the study.
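The two key behaviours described above (random scenario generation within clinical ranges, and a method order randomised so that each participant uses each method exactly three times) can be sketched as follows. This is our own illustrative reconstruction, not the study's actual software, and the numeric ranges are assumptions.

import random

METHODS = ["calculator", "uBurn", "MerseyBurns"]

def generate_scenario(rng):
    """One simulated clinical scenario, drawn from assumed plausible ranges."""
    return {
        "weight_kg": rng.randint(40, 120),       # body weight
        "tbsa_pct": rng.randint(15, 80),         # total body surface area burned
        "delay_h": round(rng.uniform(0, 6), 1),  # time from burn to resuscitation
    }

def method_sequence(rng):
    """Each method appears exactly three times, in a randomised order,
    to balance learning and fatigue effects across the methods."""
    seq = METHODS * 3
    rng.shuffle(seq)
    return seq

rng = random.Random(42)  # seeded only so the example is reproducible
for method in method_sequence(rng):
    print(method, generate_scenario(rng))
    # ...present the scenario, time the response, log to the spreadsheet...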

2.4. Analysis

Analysis of the results was conducted using the Statistical Package for the Social Sciences (SPSS Inc., Version 13, Chicago, IL). Appropriate statistical tests were used for each of the different measures. The response time data were positively skewed and so were log-transformed before statistical testing. The transformed data were continuously distributed and were not rejected by the Kolmogorov–Smirnov test when compared with the normal distribution, so analysis of variance (ANOVA) was used. Post hoc testing was performed using Tukey's honestly significant difference (HSD) test.

The incidence of errors was analysed using the chi-square test of association, with all cases where the calculated infusion total and the true value were more than 1000 ml apart classified as errors. The threshold of 1000 ml was chosen because it was felt that any difference of greater than 1000 ml in the estimation of fluid requirements would be clinically significant. When minor errors were excluded from the analysis, the distribution of the remaining, clinically significant, error magnitudes was sufficiently close to a normal distribution to allow the use of ANOVA.

Data from the VAS were converted to continuous numerical data by measuring the distance (mm) from the left hand end of the line to the position of the cross marked by the participant (most difficult = 0 mm to easiest = 160 mm), expressed as a proportion of the total length of the line. These scores were also appropriately distributed to allow the use of parametric tests, so ANOVA and Tukey's HSD test were used for analyses involving them. Where the impact of age was considered, a chi-square test was used to assess significance. Free text responses were analysed using an iterative constant comparison ("grounded") method [24].
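The core of this pipeline (log transformation, a normality check, one-way ANOVA and Tukey's HSD post hoc test, plus the 1000 ml error classification) can be sketched as follows. The study used SPSS, so this SciPy/statsmodels version is only an illustrative equivalent, and the arrays shown are placeholders rather than study data.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder response times in seconds; in the study these came from
# the 306 logged calculations.
times = np.array([86.0, 71.0, 69.0, 90.0, 75.0, 65.0, 120.0, 55.0, 80.0])
methods = np.array(["calculator", "uBurn", "MerseyBurns"] * 3)

log_times = np.log(times)  # correct the positive skew before parametric tests

# Normality check on the standardised, transformed data (the paper used a
# Kolmogorov test in SPSS).
z = (log_times - log_times.mean()) / log_times.std(ddof=1)
print(stats.kstest(z, "norm"))

# One-way ANOVA across the three methods, then Tukey's HSD post hoc test.
groups = [log_times[methods == m] for m in ("calculator", "uBurn", "MerseyBurns")]
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(log_times, methods))

# Error classification: a calculated total more than 1000 ml from the true
# value counts as a clinically relevant error.
calculated = np.array([8400.0, 9600.0, 7000.0])
true_total = np.array([8400.0, 8400.0, 8400.0])
print(np.abs(calculated - true_total) > 1000.0)  # [False  True  True]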

2.5. Usability

On completion of the nine simulated clinical scenarios, each participant was asked to complete a questionnaire on the perceived usability of each method. The questionnaire incorporated three different parameters: a visual analogue scale (VAS) of ease of use, ranging from "very difficult" to "very easy"; a preference ranking (1st to 3rd) for each method; and free text for specific comments.

3. Results

In total, 34 volunteers participated in the study, completing a total of 306 calculations. All participants successfully completed all stages of the protocol. Twenty-four (71%) participants were male and 10 (29%) were female. Tables 1 and 2 summarise the participant demographics.

Table 1 – Age of participants.
Mean: 36.4
Median: 34.5
Standard deviation: 8.2
Minimum: 27
Maximum: 63

Table 2 – Doctor grades and occupation of participants.
Role: Frequency (Percentage)
Consultant surgeon: 5 (14.7)
Consultant anaesthetist: 2 (5.9)
SpR (Plastic surgery): 8 (23.5)
SHO (Plastic surgery): 12 (35.3)
SHO (Anaesthetics): 1 (2.9)
Nurse: 6 (17.6)
Total: 34 (100.0)

All participants had previous experience of performing calculations using the Parkland formula, with 52.9% (n = 18) having used it more than 20 times (Table 3). The majority of participants (82.4%, n = 28) routinely used a calculator as their method of choice for fluid calculations; only one participant had previously used a smartphone app to perform the calculations.

3.1. Response time

Mean (SD) response time in seconds for the calculator was 86.7 (50.7), compared with 71.7 (42.9) for uBurn© and 69.0 (35.6) for Mersey Burns©. There was significant variation between the methods in the time taken to perform the calculations (p = 0.006; ANOVA). When post hoc comparisons between the methods were made using Tukey's HSD test, the calculator was found to be significantly slower than both uBurn© (p = 0.013) and Mersey Burns© (p = 0.017). The difference between the two apps was not significant at the p = 0.05 level (Fig. 4).

3.2. Propensity for error

We considered two elements of the propensity for error with each of the three methods: we first examined the frequency with which clinically relevant errors occurred, and then considered the magnitude of the errors that were made.

Table 3 – Prior experience with the Parkland formula.
Number of times Parkland formula used: Frequency (Percentage)
…: 7 (20.6)
…: 8 (23.5)
…: 1 (2.9)
>20 times: 18 (52.9)
Total: 34 (100.0)

Fig. 4 – Confidence intervals for (log) response time.

Fig. 5 – Frequency of errors.

The highest error rate was observed for the calculator (16.7%), with the apps showing much lower rates of 7.8% and 9.8% for Mersey Burns© and uBurn© respectively (Table 4). Despite these differences, the three methods did not differ significantly in their propensity for error (p = 0.065) (Fig. 5). Further, there was no evidence of an effect of gender (p = 0.697) or age (p = 0.339) on the propensity to make errors.

Table 4 – Frequency of errors (was a clinically relevant error made?).
Method: No / Yes / Total
Calculator: 85 / 17 / 102
Mersey Burns app: 94 / 8 / 102
uBurn app: 92 / 10 / 102
Total: 271 / 35 / 306

Fig. 6 – Magnitude of errors with each method.

Fig. 7 – Learning effect and response times.

Analysis of the magnitude of errors was complicated by the distribution of the data. The distribution was both multimodal and skewed, and therefore beyond any possibility of transformation. A visual representation of the errors illustrates the problem (Fig. 6): both the magnitude of errors and their frequency of occurrence were log-scaled to assist visual clarity, with a line marking the boundary separating minor from clinically significant errors. There appear to be two different sources of error, one resulting in minor errors and the other in much larger mistakes. It is possible that the smaller errors were predominantly due to the rounding of values during the calculations, while the larger errors arose from genuine mathematical mistakes. Before considering the distribution of the magnitude of the errors, we removed all cases not deemed clinically significant. No significant difference in the magnitude of the remaining errors was demonstrated between the methods at the p = 0.05 level (p = 0.778; ANOVA). Furthermore, the magnitudes were not found to be dependent on other variables such as gender (p = 0.313; ANOVA), age (p = 0.331; chi-square test) or job title (p = 0.518; ANOVA).
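A plot of the kind shown in Fig. 6 can be produced as follows; the error values here are synthetic, chosen only to mimic the bimodal pattern described, and do not come from the study data.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Synthetic absolute errors (ml): a cluster of small rounding errors and a
# tail of large mathematical errors, mimicking the bimodal pattern described.
errors = np.concatenate([rng.uniform(10, 300, 40), rng.uniform(1500, 20000, 8)])

plt.hist(errors, bins=np.logspace(1, 4.5, 20))
plt.xscale("log")                  # log-scale the error magnitude
plt.yscale("log")                  # log-scale the frequency
plt.axvline(1000, linestyle="--")  # boundary: minor vs clinically significant
plt.xlabel("Absolute error (ml)")
plt.ylabel("Frequency")
plt.show()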

3.3. Learning effect

The analysis demonstrated strong evidence of a learning effect across all three methods (Fig. 7). The response time for each method fell markedly with repeated attempts (p < 0.001; ANOVA). The error rate showed a non-significant trend towards decreasing with repeated attempts (p = 0.077; chi-square), and the magnitude of errors decreased significantly with subsequent attempts for all three methods (p = 0.008; ANOVA).

3.4. Preference

User preference was assessed using a VAS which ranged from "very difficult" to "very easy". The calculator was the preferred method, with a mean (SD) score of 12.3 (2.1), followed by Mersey Burns© with 11.8 (2.7) and then uBurn© with 11.3 (2.7). However, the differences were not statistically significant when compared pair-wise using paired samples t-tests with a manually applied correction for multiple comparisons. The observed scores are shown in Table 5. Participants were also asked to rank the three methods (1–3) in order of preference, with 13 ranking the calculator first, and 10 and 11 ranking Mersey Burns© and uBurn© first respectively. Table 6 shows the rankings and the frequency with which each was chosen.

Table 5 – Ease of use measured using VAS.
Method: Mean (cm) / Standard deviation / N
Calculator: 12.3 / 2.1 / 34
Mersey Burns: 11.8 / 2.7 / 34
uBurn: 11.3 / 2.6 / 34

Table 6 – Method preference.
Method: Ranked 1st / Ranked 2nd / Ranked 3rd / N
Calculator: 13 / 13 / 8 / 34
Mersey Burns: 10 / 16 / 8 / 34
uBurn: 11 / 6 / 17 / 34

3.5. Qualitative analysis

Qualitative analysis identified several common themes, with the number of responses given in parentheses and representative comments quoted where appropriate. Perceived advantages of the uBurn© app over the Mersey Burns© app were that: uBurn© was quicker and easier to use (4); patient weight could be entered in 1 kg rather than 5 kg increments (2); pre-hospital fluids could be taken into account (3); and the whole calculation was displayed on a single page at once ("more like mathematical thinking"), without having to navigate back and forth between different pages (3). uBurn© emphasised the rate of fluid administration (ml/h) ("which is what is actually prescribed") rather than the total volume of fluid to be administered (ml) (1), and data could be entered more quickly using a numeric keypad than a slider or wheel (2). Perceived advantages of the Mersey Burns© app over the uBurn© app were that the Mersey Burns© interface was more intuitive and easier to use overall (11), with specific comments that the slider interface in uBurn© made data entry "fiddly" and slow (5), and that the options for multiple units of measurement (kg or lb, min or h) offered by uBurn© added additional complexity and potential for error (4). The option to estimate TBSA% by drawing on the touch screen was also seen as an advantage of the Mersey Burns© app (1). A summary of the relative strengths and weaknesses of the apps can be seen in Tables 7 and 8.

4. Discussion

A plethora of healthcare-related smartphone apps is readily available via the internet [25]. Amongst the applications designed to assist in the management of burn patients, uBurn© and Mersey Burns© are two apps which use the Parkland formula and are available at nominal cost ($1.99 and free respectively, from the Apple iTunes store). In common with many other medical apps, at the time of writing uBurn© is unlicensed for clinical use. The approval of the Mersey Burns© app as a medical device by the UK Medicines and Healthcare products Regulatory Agency (MHRA) [21] is a significant step, as it signals acceptance of smartphone apps in the management of medical conditions and may act as a stimulus for a new generation of software developers to design apps which can be used as medical devices. Given the huge number of medical apps available and the growing trend in their acquisition by medical staff (up to 50% of junior doctors owned one to five medical apps in one survey [20]), there is a growing and valid concern about regulation [26,27]. One recent article has highlighted the "need to ensure apps are safe, useful, and effective" [28].

Table 7 – Summary of the strengths and weaknesses of the uBurn© app.

Strengths:
- Allows patient weight to be entered in 1 kg increments
- Pre-hospital fluid is taken into account
- The entire calculation is shown on one page, so there is no need to navigate back and forth
- Emphasises the rate of fluid administration rather than the total volume
- Data can be entered more quickly with a numeric keypad than with a slider/wheel

Weaknesses:
- Episode of data loss when a tab was accidentally pressed
- Does not emphasise the importance of excluding erythema in assessment
- Does not allow for variations of the original Parkland formula, e.g. 3 ml/kg/%TBSA
- Slider interface made data entry slow and "fiddly"
- Options for multiple units of measurement (kg or lb, minutes or hours) increased complexity and potential for error
- Does not emphasise that the app and formula are only guidelines

Table 8 – Summary of the strengths and weaknesses of the Mersey Burns© app.

Strengths:
- Interface was more intuitive and easier overall
- Option to estimate TBSA by drawing on the touch screen

Weaknesses:
- No option to account for pre-hospital fluids
- Navigating between pages was required during a calculation
- Weight increments of 5 kg could affect accuracy
- Appeared to erroneously display the formula as 2 ml/kg instead of 2 ml/kg/%TBSA


This is the first randomised, blinded study to scientifically validate the use of apps as adjuncts to the management of fluid resuscitation in burns patients, and to compare their performance (accuracy, speed and learning effect) with that of a general purpose electronic calculator. In the management of burns, accuracy is much more important than speed; however, response time is also an important consideration, and any measure which expedites management without compromising accuracy is to be welcomed. In our study the calculator was the slowest method for fluid estimation; although this difference is statistically significant, it is unlikely to be of clinical significance in practice. We demonstrated no significant difference between the three methods in the propensity for error (p = 0.065), which is reassuring, as the possibility of errors due to apps has been cited as a caveat to their widespread adoption [28].

The population of volunteers in this study were all experienced in the use of the Parkland formula, in order to obtain an "expert opinion" on the relative merits and disadvantages of the three methods. In reality, however, these apps are designed for, and will predominantly be used by, clinicians who are likely to be inexperienced in these calculations (for example, trainees or clinical staff who do not work on a specialist burns unit) and who may therefore have different requirements and preferences. We therefore plan to repeat this study with a new sample of volunteers who have no prior experience of calculating the Parkland formula. As all participants were volunteers, there was also a potential for selection bias during recruitment.

Although the apps demonstrated faster response times, there was no significant difference in accuracy; the apps were not preferred over the calculator method, and no significant difference in preference was found between the methods. We believe this may reflect hesitancy in adopting, and an ongoing acclimatisation to, smartphone apps as adjuncts in the management of burn patients. Overall, both apps performed equally well, and qualitative analysis showed that both were well received but had specific differences in user interface and functionality which polarised user preference. This may reflect individual differences in the way information is processed and expressed.

The incidence of unrecognised keystroke errors when entering data into any numeric keypad (e.g. a calculator or smartphone) has been estimated at 4% of key presses [29]. Therefore, although these apps may be of use in aiding the calculation of burns fluid requirements, particularly for users who are unfamiliar with the Parkland formula, we recommend that results obtained by electronic means should always be cross-checked by an alternative method, e.g. manual calculation or a graphical method [22].

The most common source of error when calculating resuscitation fluids is estimation of the burn surface area (BSA). We therefore plan to undertake further studies to investigate whether differences in the graphical interfaces of the two apps affect accuracy in the estimation of BSA. Other potential sources of error which are relevant in real-life scenarios and can influence the predicted fluid requirement were not included in our study. These include regional variations in the interpretation of the Parkland formula (3 ml/kg/%TBSA or 4 ml/kg/%TBSA), inaccuracy in measuring patient weight, and inaccuracy in estimating the time of burn. However, for the purposes of this study we provided participants with values for these initial variables, assuming that they had all been measured to a clinically appropriate degree of accuracy, in order to better evaluate the ease of use, accuracy and speed of the three methods.

Our study was limited to fluid estimation in the adult population and did not validate the use of apps for fluid estimation in paediatric patients, who require additional maintenance fluids. Whilst the Parkland formula provides useful guidance for fluid resuscitation in the first 24 h following a burn, fluid management should ultimately be guided by the individual clinical response (e.g. urine output and mean arterial blood pressure).

5. Conclusion

Both the uBurn© and Mersey Burns© apps were faster than the general purpose calculator, and all three methods demonstrated similar rates and magnitudes of error and similar evidence of a learning effect. We conclude that both apps are appropriate methods to aid the estimation of fluid requirements in adult burns.

Conflict of interest

We have no conflict of interest or affiliation with any of the developers of the smartphone apps used in this study.

References

[1] Dulhunty JM, Boots RJ, Rudd MJ. Increased fluid resuscitation can lead to adverse outcomes in major-burn injured patients, but low mortality is achievable. Burns 2008;34(8):1090–7.
[2] Muller MJ, Herndon DN. Challenge of burns. Lancet 1994;343(8891):216–20.
[3] Baxter CR, Shires T. Physiological response to crystalloid resuscitation of severe burns. Ann N Y Acad Sci 1968;150:874–94.
[4] Jenkinson LR. Fluid replacement in burns. Ann R Coll Surg Engl 1982;64(5):336–8.
[5] Milner SM, Hodgetts TJ, Rylah LT. The burns calculator: a simple proposed guide for fluid resuscitation. Lancet 1993;342(8879):1089–91.
[6] Milner SM, Rylah LT, Bennett JD. The burn wheel: a practical guide to fluid resuscitation. Burns 1995;21(4):288–90.
[7] Malic CC, Karoo RO, Austin O, Phipps A. Resuscitation burn card – a useful tool for burn injury assessment. Burns 2007;33(2):195–9.
[8] Kahn SA, Schoemann M, Lentz CW. Burn resuscitation index: a simple method for calculating fluid resuscitation in the burn patient. J Burn Care Res 2010;31(4):616–23.
[9] Dingley J, Williams D. A hand-held electronic device to calculate fluid requirements for burns. Eur J Anaesthesiol 2010;27(47):192–3.
[10] Javed M, Shokrollahi K. Fluid resuscitation in burns: a modern "APProach". Ann Plast Surg 2012;69(2):121–2.
[11] Hammond JS, Gillon Ward C. Transfers from emergency room to burn center: errors in burn size estimation. J Trauma 1987;27(10):1161–5.


[12] Irwin LR, Reid CA, McLean NR. Burns in children: do casualty officers get it right? Injury 1993;24(3):187–8.
[13] Laing JH, Morgan BDG, Sanders R. Assessment of burn injury in the accident and emergency department: a review of 100 referrals to a regional burns unit. Ann R Coll Surg Engl 1991;73:329–31.
[14] Berkebile BL, Goldfarb W, Slater H. Comparison of burn size estimates between prehospital reports and burn center evaluations. J Burn Care Rehab 1986;7(5):411–2.
[15] Perry RJ, Moore CA, Morgan BDG, Plummer DL. Determining the approximate area of a burn: an inconsistency investigated and re-evaluated. BMJ 1996;312:1338.
[16] Collis N, Smith G, Fenton OM. Accuracy of burn size estimation and subsequent fluid resuscitation prior to arrival at the Yorkshire Regional Burns Unit. A three year retrospective study. Burns 1999;25:345–51.
[17] Lindford AJ, Lim P, Klass B, Mackey S, Dheansa BS, Gilbert PM. Resuscitation tables: a useful tool in calculating pre-unit fluid requirements. Emerg Med J 2009;26:245–9.
[18] Theron A, Bodger O, Williams D. Comparison of three techniques using the Parkland formula to aid resuscitation in adult burns. Emerg Med J 2013 [in press; Epub 22.06.13].
[19] The smartphone in medicine: a review of current and potential use among physicians and students. J Med Internet Res 2012;14(5):e128.
[20] Smartphone and medical related app use among medical students and junior doctors in the United Kingdom (UK): a regional survey. BMC Med Inform Decis Mak 2012;12:121.
[21] http://www.bapras.org.uk/news.asp?id=972 [accessed 08.05.13].
[22] Bodger O, Theron A, Williams D. Comparison of three techniques for calculation of the Parkland formula to aid resuscitation of paediatric burns. Eur J Anaesthesiol 2013;30:483–91.
[23] Langtangen H, editor. A primer on scientific programming with Python. London, UK: Springer; 2009.
[24] Ryan G, Bernard H. Techniques to identify themes. Field Methods 2003;15:85–109.
[25] Warnock GL. The use of apps in surgery. Can J Surg 2012;55(2):77.
[26] http://www.d4.org.uk/research/regulation-of-health-apps-a-practical-guide-January-2012.pdf [accessed 08.05.13].
[27] Visvanathan A, Hamilton A, Brady RR. Smartphone apps in microbiology – is better regulation required? Clin Microbiol Infect 2012;18(7):E218–20.
[28] McCartney M. How do we know whether medical apps work? BMJ 2013;346:f1811.
[29] Oladimeji P, Thimbleby H, Cox A. Number entry interfaces and their effects on error detection. In: INTERACT 2011. 2011. p. 178–85.
