Original Research

Diagnostic image quality in gynaecological ultrasound: Who should measure it, what should we measure and how?

Peter Cantin1 and Karen Knapp2

1 Department of Diagnostic Ultrasound, Derriford Hospital, Plymouth, UK
2 Medical Imaging, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK

Corresponding author: Peter Cantin. Email: [email protected]

Abstract

Assessment of diagnostic image quality in gynaecological ultrasound is an important aspect of imaging department quality assurance. This may be addressed through audit, but who should undertake the audit, what should be measured and how, remains contentious. The aim of this study was to identify whether peer audit is a suitable method of assessing the diagnostic quality of gynaecological ultrasound images. Nineteen gynaecological ultrasound studies were independently assessed by six sonographers utilising a pilot version of an audit tool. Outcome measures were levels of inter-rater agreement using different data collection methods (binary scores, Likert scale, continuous scale), the effect of ultrasound study difficulty on study score, and whether systematic differences were present between reviewers of different clinical grades and lengths of experience. Inter-rater agreement ranged from moderate to good depending on the data collection method. A continuous scale gave the highest level of inter-rater agreement, with an intra-class correlation coefficient of 0.73. A strong correlation (r = 0.89) between study difficulty and study score was found. Length of clinical experience had no effect on the audit scores, but individuals of a higher clinical grade gave significantly lower scores than those of a lower grade (p = 0.04). Peer audit is a promising tool in the assessment of ultrasound image quality. Continuous scales seem to be the best method of data collection, implying a strong element of heuristically driven decision making by reviewing ultrasound practitioners.

Keywords: Ultrasound, audit, heuristics, gynaecology, image quality

Ultrasound 2014; 22: 44–51. DOI: 10.1177/1742271X13511242

Introduction

Assessment of the technical and diagnostic quality of medical images forms a fundamental part of the quality assurance programme of an imaging department. In the current political context, with more providers of diagnostic services,1 it is essential that imaging (including ultrasound) departments have measures in place to monitor the diagnostic quality of the imaging that they provide. This is a prerequisite for successful external accreditation.2 Other areas of diagnostic imaging, most notably x-ray mammography, have well-established procedures in place to monitor diagnostic image quality,3 and it is therefore tempting to look at this practice and apply it to ultrasound. However, there are fundamental differences between ultrasound and other imaging modalities, which need to be recognised and understood.

Ultrasound imaging displays a real-time, two-dimensional (2D) section of tissue, while other imaging modalities display volumes of tissue. Even in other cross-sectional imaging modalities (such as CT), standard sections are obtained and displayed in sequential order, enabling the anatomy of a 3D volume of tissue to be inferred from several 2D images. Unless ultrasound images are stored in real time (and this is not current standard practice), static ultrasound images can only be representative of the ultrasound study rather than a complete record of the tissue volume. Medical ultrasound is perhaps unique in that static images will often not be recorded against standardised criteria. While explicit guidance on standard imaging is possible, it has to be recognised that one of the strengths of ultrasound is the operator's ability to assess internal anatomy (and pathology) in real time and in a variety of image planes. The production of standard imaging is therefore not as desirable for diagnostic ultrasound as for other imaging modalities. This creates unique challenges when attempting to audit the diagnostic quality of ultrasound images.

Well researched and validated scoring systems to assess ultrasound image quality do exist.4–7 However, these tend to assess ultrasound images obtained for screening (rather than diagnostic) purposes. There are fundamental differences between ultrasound in screening programmes (e.g. in the prenatal detection of Down's syndrome or the early detection of an abdominal aortic aneurysm) and diagnostic ultrasound in symptomatic patients, where the test is performed to answer a clinical question or help solve a diagnostic problem. Screening ultrasound studies have a limited number of ultrasound images, taken against specific criteria. Standardised criteria make objective audit of image quality relatively straightforward. In contrast, diagnostic ultrasound studies may have a large number of images, many of which are taken in non-standard planes and angles, to answer a clinical question or best demonstrate areas of pathology. Creating specific and objective criteria to assess images under these conditions is problematic. Even if possible to achieve, the time taken to apply a complex scoring system to numerous images within an ultrasound investigation for quality assurance purposes would be impractical and cost-prohibitive.

While recognising the inherent difficulties of image quality assessment in diagnostic ultrasound, this remains an important factor in service quality.2 A strategy is required to overcome these difficulties, but currently the fundamental questions of what should be assessed, how and by whom remain unclear. Peer audit is a technique which could be used to address some of these issues. It is a concept widely recognised among medical practitioners and involves auditing of work undertaken by those of a similar grade or status. It is recognised as a valid method of assessing the performance of individuals and departments.8–11 It allows audit to be shared among a greater number of people and therefore reduces the burden on individual practitioners. There are also advantages in allowing practitioners to directly compare the quality of their work with their peers and develop a reference point, which can be used for self-appraisal.12 There is little literature on the use of peer audit among ultrasound practitioners. However, this concept has the potential to involve ultrasound practitioners more deeply within the audit process, and to better utilise the skills and knowledge of practitioners in areas other than simply performing and reporting ultrasound studies.

The aim of this study was to develop and evaluate an audit tool for assessment of image quality in gynaecological ultrasound for peer assessment among a group of qualified sonographers.

Method

An audit tool was developed and refined for assessment of gynaecological ultrasound image quality (Figure 1). The study was restricted to gynaecological ultrasound as it was felt that opening the study to all forms of diagnostic ultrasound would introduce too many confounding factors, making interpretation of results difficult. The audit tool encompassed factors including completeness of ultrasound examinations, image quality of individual anatomical structures, use of equipment and the technical difficulty of the ultrasound study. Reviewers were also asked to assign an overall score to the quality of each ultrasound study included. Data collection was by a variety of methods including binary, 5-point Likert and continuous scales. Sonographers were not given any instruction on how to use the tool, although existing department protocols were well known and freely available to all participants. In particular, no instruction or algorithm was offered to assist in assigning an overall score of study quality.

The tool was used by a group of seven qualified sonographers (mean time since qualification 9 years; range = 1–23 years) from the same institution, who volunteered to take part in this study. Inclusion criteria were possession of either the Diploma in Medical Ultrasound (DMU) or a Consortium for the Accreditation of Sonographic Education (CASE) accredited qualification in gynaecological ultrasound, a regular gynaecological ultrasound commitment and willingness to undertake the necessary review of images. Six sonographers were at Agenda for Change (AfC) Band 7 and one sonographer was at AfC Band 8. Each sonographer independently reviewed the same series of 20 gynaecological ultrasound studies using the audit tool. These studies had been randomly selected from a list of all gynaecological ultrasound studies performed within our department between January and April 2012. Images were reviewed as digitised static ultrasound images displayed on a PC. All studies were performed according to department protocols. The images expected are set out within section 3 of the audit tool (Figure 1).

Outcome measures were (1) inter-rater agreement in overall study score, (2) comparison of levels of inter-rater agreement for different scoring methods (binary scores, 5-point Likert scale and continuous scale), (3) level of correlation between perceived study difficulty and study score and (4) assessment of systematic differences between reviewing sonographers of differing length of experience and grade. Agreement between reviewers was judged utilising intra-class correlation. Interpretation of the level of agreement was made according to Altman:13 0–0.2, poor; 0.21–0.4, fair; 0.41–0.6, moderate; 0.61–0.8, good; 0.81–1.0, very good. The relation between perceived study difficulty and overall assigned study score was assessed by correlation coefficient (r). Systematic differences between reviewers were assessed by use of a paired t-test. All statistical calculations were performed with dedicated statistical computer software (SPSS v19, IBM, New York, USA).
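For illustration only, the sketch below shows how the main agreement statistic could be computed outside SPSS. It assumes the overall study scores are arranged as a studies-by-reviewers matrix and uses a two-way random-effects, absolute-agreement intra-class correlation (Shrout and Fleiss ICC(2,1)); the exact ICC model used in the study is not stated, so the choice of model and all data below are assumptions.

```python
# Minimal sketch: ICC for a studies x reviewers score matrix, plus Altman's
# interpretation bands. The ICC model (two-way random effects, absolute
# agreement, single rater) and the toy data are assumptions for illustration;
# the study itself used SPSS v19.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Shrout & Fleiss ICC(2,1) for an n-studies x k-reviewers matrix."""
    n, k = scores.shape
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between studies
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between reviewers
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

def altman_band(icc: float) -> str:
    """Interpretation of agreement after Altman (1999)."""
    bands = [(0.20, "poor"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "good"), (1.00, "very good")]
    return next(label for upper, label in bands if icc <= upper)

# Hypothetical data: 19 studies scored out of 10 by 6 reviewers.
rng = np.random.default_rng(0)
true_quality = rng.uniform(4, 9, size=(19, 1))
scores = np.clip(true_quality + rng.normal(0, 1, size=(19, 6)), 0, 10)
icc = icc_2_1(scores)
print(f"ICC = {icc:.2f} ({altman_band(icc)} agreement)")
```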

Results

One reviewer was unable to complete this study due to time constraints and one ultrasound examination was not available for review for technical reasons. Thus, six reviewers independently reviewed the same 19 gynaecological ultrasound examinations and assessed diagnostic image quality using the pilot audit tool, giving a total of 114 individual study reviews. The mean overall study score (out of 10) was 7.0 (range = 1.3–8.8).

Inter-rater agreement and data collection method

Table 1 outlines the level of inter-rater agreement for each method of data collection (binary, 5-point Likert scale or continuous scale) relating to questions 8 and 9 on the audit form (Figure 1). The binary score was extrapolated from question 8. Reviews indicating that the examination was 'acceptable', 'good' or 'textbook' were classified as acceptable; reviews indicating that the examination was 'undiagnostic' or 'poor' were classified as not acceptable.


[Figure 1. Image assessment audit form (two pages, reproduced as an image in the original). The form records the hospital number and asks reviewers to mark: (1) their professional background (sonographer, radiologist, gynaecologist, nurse or other); (2) years performing gynaecological ultrasound; (3) scope of imaging, i.e. whether the expected transabdominal views (L/S and T/S midline, external iliac vessels left and right) and transvaginal views (L/S and T/S uterus, endometrial measurement, two views of each ovary at 90° to each other, cervix, pouch of Douglas) are present; (4) equipment usage (size of images/depth/zoom, focal zones, frequency selection, Doppler and other imaging parameters such as dynamic range, harmonics and post-processing), rated poor/OK/good; (5) image quality for individual structures (uterus, endometrial measurement, ovaries/adnexa, cervix, pouch of Douglas), rated undiagnostic/poor/acceptable/good/textbook; (6) how well any pathology has been demonstrated (if applicable); (7) overall diagnostic quality of the study leaving aside issues of scan difficulty; (8) the degree of scan difficulty (from easy to 'diagnostic images not possible'), with reasons, and the overall diagnostic quality taking study difficulty into account; and (9) overall scan quality marked on a continuous scale from 0 (worst) to 10 (best).]
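To make the structure of one completed review concrete, a possible in-memory representation is sketched below. All field names and example values are invented for illustration and are not part of the published audit tool, which was completed on paper.

```python
# Hypothetical representation of one completed audit form (Figure 1).
# Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class AuditReview:
    hospital_number: str
    reviewer_background: str          # Q1: sonographer, radiologist, ...
    years_experience: float           # Q2
    images_present: dict[str, bool]   # Q3: expected view -> present?
    equipment_usage: dict[str, str]   # Q4: item -> poor / OK / good
    image_quality: dict[str, str]     # Q5: structure -> 5-point rating
    pathology_demonstrated: str       # Q6, or "not applicable"
    quality_ignoring_difficulty: str  # Q7: 5-point rating
    scan_difficulty: int              # 1 (most difficult) to 5 (easy)
    quality_given_difficulty: str     # Q8: 5-point rating used for the binary score
    overall_score: float              # Q9: continuous 0-10 scale

review = AuditReview(
    hospital_number="...",
    reviewer_background="sonographer",
    years_experience=9,
    images_present={"TV L/S uterus": True, "TV T/S uterus": True},
    equipment_usage={"depth/zoom": "good", "focal zones": "OK"},
    image_quality={"uterus": "good", "left ovary": "acceptable"},
    pathology_demonstrated="not applicable",
    quality_ignoring_difficulty="good",
    scan_difficulty=4,
    quality_given_difficulty="good",
    overall_score=7.5,
)
```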


Intra-class correlation coefficients ranged from 0.51 to 0.73 depending on the data collection method used. The binary score showed moderate agreement between reviewing sonographers, while the Likert and continuous scales showed good agreement. The continuous scale gave the highest level of inter-rater agreement, with an intra-class correlation coefficient of 0.73.

Correlation between perceived study difficulty and study score

Study difficulty was assessed by means of a 5-point Likert scale ranging from most difficult (1) to easy (5). The mean difficulty score for all studies was 3.53 (range = 1.3–4.5). The studies perceived as most difficult involved either a contraindication to transvaginal scanning or patient obesity. Comparison of study difficulty and overall study score demonstrates a high level of correlation (Figure 2), with a correlation coefficient (r) of 0.89 (95% CI: 0.75–0.96). Studies which are more difficult to perform are generally given a lower score for overall study quality (p < 0.0001).
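A minimal sketch of these two analysis steps follows: recoding the question 8 ratings into the binary acceptable/not-acceptable measure, and correlating per-study mean difficulty with per-study mean score. The numeric arrays are invented for illustration and do not reproduce the study data.

```python
# Recoding the 5-point ratings into the binary measure, and correlating
# per-study difficulty with per-study score. Data are illustrative only.
import numpy as np
from scipy.stats import pearsonr

LIKERT_TO_BINARY = {
    "undiagnostic": 0, "poor": 0,                # not acceptable
    "acceptable": 1, "good": 1, "textbook": 1,   # acceptable
}
ratings = ["textbook", "good", "poor", "acceptable", "undiagnostic"]
print([LIKERT_TO_BINARY[r] for r in ratings])    # [1, 1, 0, 1, 0]

# Per-study means across reviewers (hypothetical values):
# difficulty on the 1 (most difficult) to 5 (easy) scale, score out of 10.
difficulty = np.array([1.3, 2.0, 2.8, 3.2, 3.5, 3.8, 4.0, 4.2, 4.5])
score = np.array([1.5, 4.0, 5.5, 6.5, 7.0, 7.5, 7.8, 8.2, 8.8])
r, p = pearsonr(difficulty, score)
print(f"r = {r:.2f}, p = {p:.4g}")               # easier studies score higher
```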

Systematic differences between reviewers of different grades and length of experience

Figure 3 shows the mean score (with standard deviation) given by each sonographer after review of the 19 ultrasound studies. Five of the reviewing sonographers were at AfC Band 7 and one sonographer was at Band 8. The mean study score for the Band 7 sonographers was 7.04 and for the Band 8 sonographer was 5.89. A paired t-test demonstrates a significant difference, with the Band 8 sonographer scoring studies significantly lower than the Band 7 sonographers (p = 0.04). Two Band 7 sonographers had been qualified for less than 2 years and three Band 7 sonographers for more than 2 years. There was no significant difference in study scores between those of greater (7.05) and lesser (7.03) clinical experience.
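As a sketch of how the grade comparison could be run, the snippet below pairs the Band 8 reviewer's score for each study with the mean Band 7 score for the same study and applies a paired t-test. The pairing scheme and the generated data are assumptions; the paper states only that a paired t-test was used.

```python
# Paired t-test comparing the Band 8 reviewer with the Band 7 group,
# paired by study. Data and pairing scheme are assumptions for illustration.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
band7 = rng.uniform(5, 9, size=(19, 5))              # 19 studies x 5 Band 7 reviewers
band7_mean = band7.mean(axis=1)                      # mean Band 7 score per study
band8 = band7_mean - rng.uniform(0.5, 1.5, size=19)  # a systematically lower scorer

t_stat, p_value = ttest_rel(band8, band7_mean)
print(f"Band 7 mean = {band7_mean.mean():.2f}, Band 8 mean = {band8.mean():.2f}, "
      f"p = {p_value:.3f}")
```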

Table 1. Level of agreement for different data collection methods

Data collection method   Intra-class correlation coefficient   95% confidence interval   Inter-rater agreement
Binary scale             0.51                                  0.32–0.72                 Moderate
5-point Likert scale     0.61                                  0.43–0.79                 Good
Continuous scale         0.73                                  0.57–0.87                 Good

Figure 2. Scatter graph showing correlation between study score and perceived level of study difficulty.

Figure 3. Chart demonstrating mean score (with standard deviation) given by each sonographer after review of 19 gynaecological ultrasound studies. Sonographer C is the Band 8 sonographer.


Discussion

Assessment of image quality in medical ultrasound is fraught with difficulty, uncertainty and bias, but there is an increasing drive in radiology to produce absolute, objective measures of quality.2,14–17 Assessment of ultrasound diagnostic image quality remains difficult in any standardised format due to the very nature of the imaging specialty. This leaves ultrasound providers with a serious difficulty. Measures of quality are important, both in terms of patient safety and ultrasound department viability in an increasingly competitive world. It is very tempting to try to reduce the amount of variability in the way ultrasound examinations are performed so that demonstration of competence can be achieved. Other imaging modalities, in particular mammography, are striving hard to reduce the degree of subjectivity in image quality audit.17,18 Attempting to apply a similar ethos in diagnostic ultrasound would be difficult and potentially inappropriate. The ultimate intention of audit in this area is to ensure that ultrasound examinations are performed in a way which maximises the amount of diagnostic information available to the patient and clinician. This is unlikely to be achieved by dogmatic adherence to very prescriptive and formulaic rules for performing and recording ultrasound examinations. Of course, adherence to guidelines is important, but a drive to introduce increasingly objective 'rules' for performing ultrasound examinations threatens to undermine ultrasound's greatest strength: the ability to examine anatomical structures in real time, in a variety of planes and from a variety of angles, to maximise the diagnostic yield of an examination.

This study shows that peer audit of ultrasound image quality is a promising tool in maintaining and improving the quality of an ultrasound service. Even without detailed guidance in the use of this audit tool, ultrasound practitioners are in broad agreement about what constitutes a 'good' ultrasound examination. However, this study has yielded interesting results. When the results are analysed in binary terms (acceptable/not acceptable) there is only moderate agreement between sonographers. When the options are widened, inter-rater agreement improves. A 5-point Likert scale improves inter-rater agreement, but the best agreement is found using a continuous scale of assessment, even without putting in place set rules as to how to assign an overall score.

The ultrasound practitioners performing this audit were operating under conditions of uncertainty. They did not have access to the patient's clinical history or presentation, or to the final ultrasound report. No objective 'rules' were given about how the image review should be conducted, other than provision of the audit form. The conditions under which the audit was performed were therefore simply not geared to producing simple yes/no answers. The reviewers were forced to rely on their prior knowledge and expertise, clinical experience, rules-of-thumb and 'gut feeling' to assign a score to an ultrasound study. The use of continuous scales to rate these studies appears useful, as the degree of latitude allowed in assigning a score seems to match well the inherent uncertainties in performing the audit.

Many practitioners may baulk at the prospect of performing peer audit based on factors as undefinable as rules-of-thumb, gut instinct and prior experience, but perhaps a change of mind-set is required. The concept of decision making without prescriptive rules and in the presence of uncertainty is known as heuristics. Heuristics involves 'reducing the complex tasks of assessing probabilities and predicting values to simpler judgemental operations'.19 It is a well-recognised concept of decision making where uncertainty or inadequate prior information exists.20 It is well recognised that simple, logical, rule-based algorithms (such as in computer-aided diagnosis) are currently unable to replicate the integration of perception, knowledge and visual analysis performed by an expert human observer.21 Ironically, while we strive for more and more objectivity in radiology, researchers in artificial intelligence and computer-aided diagnosis are increasingly recognising the value of heuristics in the design of 'intelligent' computer systems.21,22 This is mirrored by an increasing awareness of the importance of heterogeneity and heuristics in healthcare.23 While many of our research and audit practices attempt to reduce the effects of heterogeneity in a population, this does not always reflect reality 'on the ground', where situations arise that do not accurately reflect the homogenised populations of prior research or practice. In these cases, alternative pragmatic strategies are required for decision making, of which heuristics are the most common.24 Heuristics are prone to numerous biases,25 yet properly applied and understood they have the potential to allow valid decision making and audit where uncertainty exists.26 With prior testing of the audit tool among reviewing practitioners, some of the potential systematic biases can be measured (such as the bias of reviewer grade and the effect of study difficulty). These can easily be factored into the audit process and accounted for in the results.

This study has produced interesting results, but there are some limitations in study methodology which need to be taken into account. Obtaining a random sample of ultrasound examinations for review was felt to be important to reduce the possibility of bias. However, this has led to some unintended confounding factors. Images selected for review were obtained from any one of five ultrasound machines (rather than a single machine). However, all machines were of the same make and model (Aplio, Toshiba), and all were under 5 years old, minimising the impact of this. Some of the examinations selected for review would have been performed by the sonographers undertaking the audit, and it is probable that some sonographers would recognise images that they had taken, which may be a source of bias. Intra-observer variation has also not been measured within the study. Many of the questions within the audit form (questions 1–6) were not utilised in the data analysis. These questions were included in case of serious disagreements between reviewers, where further data analysis would have been necessary. In reality, the sheer volume of data produced by this pilot audit form makes it cumbersome and time-consuming to use for regular clinical audit purposes. Given the good level of agreement between reviewers, a simplified version of the audit tool should be possible.

Perhaps the greatest drawback of this study (and this method of audit) is the issue of department culture and knowing what is truly being measured. With no true and absolute arbiter of image quality, the danger is always that the audit will simply reveal the collective view of what is acceptable, even in areas of poor performance.


However, this potential bias should not preclude practitioners from undertaking peer audit. Any audit performed internally without reference to external reference points is at risk of this bias, regardless of whether the audit is performed within a peer group or undertaken by individuals of a higher clinical grade. Film reading in mammography has addressed similar concerns with the development of an 'image bank', which is used to assess individual and department image reading skills. The PERFORMS system (Personal Performance in Mammographic Screening) utilises a standardised database of images which allows individuals and departments to rate their performance against groups of participants elsewhere, or against a panel of expert reviewers.27,28 Not only does this allow individuals to rate themselves against a national mean, but it also allows a department to rate itself against similar departments elsewhere.

There is no reason why similar ultrasound departments cannot join forces in auditing diagnostic image quality in ultrasound. By utilising a similar audit method to the one described in this study, and using an agreed set of images, departments can identify biases in their ways of working and of assessing diagnostic image quality, compared with other, similar departments. Not only would this enable individuals to rate their own performance against others, but it would also widen the concept of peer audit from an individual to a department level. Given the inherent difficulties and biases associated with diagnostic image quality assessment in ultrasound, the use of cross-department audit utilising a standard audit tool, validated against an agreed image set, may be the best way forward. However, the development of such an image set would involve extensive discussion within the ultrasound community.

Conclusion

This study has identified a novel and potentially useful method of assessment of image quality in gynaecological ultrasound. Although limited in scope, it is hoped that this study can be extended to other areas of diagnostic ultrasound and also to the reporting of ultrasound scans. This method of audit is novel in that it utilises the concept of heuristics for image quality assessment. To overcome some of the biases of this approach, it is recommended that individuals and departments calibrate assessment of their own performance, using a standardised audit tool (such as the one described), against a standardised image set. Comparison of such data between individuals and departments has the potential to provide powerful benchmark data, which will be of value to all participants. This process is not currently used in diagnostic ultrasound to the authors' knowledge, yet it is extensively used in other areas of radiology, most notably mammography. Similar processes may be useful in sonography.

ACKNOWLEDGEMENTS

Grateful thanks to the sonographers of Derriford Hospital, Plymouth, who gave freely of their time, patience and expertise in undertaking this pilot study.

DECLARATIONS

Competing interests: None
Funding: None
Ethical approval: Ethics approval not required. This study has been registered with Clinical Audit, Derriford Hospital.
Guarantor: KMK
Contributorship: PC devised the audit tool, undertook the audit and analysed the results. Write-up of results was performed by PC and KN. Both PC and KMK approved the final draft.

REFERENCES

1. Department of Health. Equity and Excellence: Liberating the NHS. London: HMSO, 2010
2. United Kingdom Accreditation Service, 2010 (online). See http://www.isas-uk.org
3. Health and Consumer Directorate General. European Guidelines for Quality Assurance in Breast Cancer Screening and Diagnosis, 4th edn. Luxembourg: Office for Official Publications of the European Communities, 2006
4. McLennan A, Schluter P, Pincham V, et al. First-trimester fetal nasal bone audit: evaluation of a novel method of image assessment. Ultrasound Obstet Gynaecol 2009;34:623–8
5. Herman A, Maymon R, Dreazen E, et al. Nuchal translucency audit: a novel image-scoring method. Ultrasound Obstet Gynecol 1998;12:398–403
6. The Fetal Medicine Foundation. 11–13 week scan. Nuchal translucency. London: FMF. See http://www.fetalmedicine.com/fmf/trainingcertification/certificates-of-competence/11-13-week-scan/nuchal/#topofpage (last checked 21 December 2011)
7. Earnshaw J. Ultrasound imaging in the National Health Service Abdominal Aortic Aneurysm Screening Programme. Ultrasound 2010;18:167–9
8. Armstrong D, Hollingworth R, Macintosh D, et al. Point-of-care peer comparator colonoscopy practice audit. Can J Gastroenterol 2011;25:13–20
9. Asao K, Mansi I, Banks D. Improving quality in an internal residency program through a peer medical record audit. Acad Med 2009;84:1796–802
10. Lalloo D, Ghafur I, Macdonald E. Peer review audit of occupational health reports: process and outcomes. Occup Med 2012;62:54–6
11. Royal College of Radiologists. Standards for Self-Assessment of Performance. London: RCR, 2007
12. Lockyer J, Amson H, Chesluk B, et al. Feedback data sources that inform physician self assessment. Med Teach 2011;33:113–20
13. Altman D. Practical Statistics for Medical Research. London: Chapman and Hall, 1999
14. International Atomic Energy Agency. Comprehensive Clinical Audits of Diagnostic Radiology Practices: A Tool for Quality Improvement. Vienna: IAEA, 2010
15. European Commission. European Guidelines on Quality Criteria for Radiographic Diagnostic Images. Luxembourg: Office for Official Publications of the European Communities, 1996
16. NHS Cancer Screening Programmes. Quality Assurance Guidelines for Mammography Including Radiographic Quality Control. Sheffield: NHSBSP, 2006
17. Arnau O, Freixenet J, Bosch A, et al. Automatic classification of breast tissue. Lect Notes Comput Sci 2005;3523:171–5
18. Bentley K, Poulos A, Rickard M. Mammography image quality: analysis of evaluation criteria using pectoral muscle presentation. Radiography 2008;14:189–94
19. Tversky A, Kahneman D. Judgement under uncertainty: heuristics and biases. Cambridge: Cambridge University Press, 2001
20. Dolan B, Tillack A. Pixels, patterns and problems of vision: the adaption of computer-aided diagnosis for mammography in radiological practice in the U.S. Hist Sci 2010;xlviii:228–49


21. Luger G. Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Harlow: Pearson Education Ltd, 2005
22. Patel V, Shortliffe E, Stefanelli M, et al. The coming of age of artificial intelligence in medicine. Artif Intell Med 2009;46:5–17
23. Davidoff F. Heterogeneity: we can't live with it, and we can't live without it. BMJ Qual Saf 2011;20:i11–12
24. Pauker S, Wong J. How (should) physicians think? A journey from behavioural economics to the bedside. JAMA 2010;304:1233–5
25. Gunderman R. Biases in radiologic reasoning. AJR 2009;192:561–4
26. Fauster M. How do 'simple' rules 'fit to reality' in a complex world? Minds Mach 1999;9:543–64
27. Scott H, Gale A. Breast screening: PERFORMS identifies key mammographic training needs. BJR 2006;79:S127–33
28. Dong L, Chen Y, Gale A. Early identification of substandard breast screening performers. Breast Cancer Res 2011;13(S1):13
