

External quality assessment: best practice

David James,1 Darren Ames,2 Berenice Lopez,3 Rachel Still,4 William Simpson,5 Patrick Twomey6

1Southwest Pathology Services, Taunton, UK
2Department of Pathology, St Helens and Knowsley NHS Teaching Hospitals Trust, Prescot, UK
3Department of Chemical Pathology, Harrogate and District NHS Foundation Trust, Harrogate, UK
4Department of Laboratory Medicine, Abertawe Bro Morgannwg University NHS Health Board, Swansea, UK
5Department of Clinical Biochemistry, Aberdeen Royal Infirmary, Aberdeen, UK
6Department of Clinical Biochemistry, The Ipswich Hospital, Ipswich, UK

Correspondence to Dr David James, Southwest Pathology Services, Lisieux Way, Taunton TA1 2JX, UK; [email protected]

Received 27 August 2013; Revised 11 February 2014; Accepted 18 February 2014

To cite: James D, Ames D, Lopez B, et al. J Clin Pathol Published Online First: 12 March 2014. doi:10.1136/jclinpath-2013-201621

ABSTRACT
There is a requirement for accredited laboratories to participate in external quality assessment (EQA) schemes, but there is wide variation in understanding as to what is required of laboratories and scheme providers in fulfilling this. This is not helped by the diversity of language used in connection with EQA: proficiency testing (PT), EQA schemes and EQA programmes each have different meanings and offerings in the context of improving laboratory quality. We examine these differences, and identify which factors are important in supporting quality within a clinical laboratory and what should influence the choice of EQA programme. Equally important is how EQA samples are handled within the laboratory, and how the information provided by the EQA programme is used. EQA programmes are a key element of a laboratory's quality assurance framework, but laboratories should understand what their EQA programmes are capable of demonstrating, how they should be used within the laboratory, and how they support quality. EQA providers should be clear as to what type of programme they provide: PT, EQA scheme or EQA programme.

Within the UK, the recent Department of Health Review into pathology quality assurance1 has focussed attention on quality matters, with emphasis on exploring 'the overarching quality and risk management systems in NHS pathology services'. Clinical governance should be embedded in all pathology services, and internal quality control (IQC) and external quality assessment (EQA) play an important role in the quality management and improvement processes of clinical laboratory services to ensure high standards of patient care. Focussing on EQA, the purpose of this article is to summarise and define best practice for participants in, and providers of, EQA, and to describe how EQA performance can influence the quality of service provided by laboratories. It represents the consensus view of the National Quality Assessment Advisory Panel for Chemical Pathology. The article largely draws upon examples from clinical chemistry, being the area of expertise of the contributors, but many of the general principles will be applicable across most clinical laboratory disciplines. While concentrating on laboratory-based analyses, we recognise that the issues of maintaining and evidencing quality in point-of-care, or near-patient, testing are much more problematic, and regrettably often ignored, even with clear guidance from the Medicines and Healthcare products Regulatory Agency (MHRA)2 and the ready availability of suitable EQA schemes.


EQA OR PROFICIENCY TESTING?
In many parts of the world, the terms EQA and proficiency testing (PT) are used almost interchangeably. EQA can mean either 'external quality assessment' or 'external quality assurance'. More precisely, EQA schemes provide an assessment which laboratories use for assurance of quality. By contrast, a widely accepted definition of PT is: 'A program in which multiple samples are periodically sent to members of a group of laboratories for analysis and/or identification; whereby each laboratory's results are compared with those of other laboratories in the group and/or with an assigned value, and reported to the participating laboratories and others.'3 PT is the term commonly used in North America, and also serves a function in terms of regulatory requirements and in licensing and/or accreditation. Additionally, in the USA, laboratories must also meet standards defined in the Clinical Laboratory Improvement Amendments (CLIA).4 Such linkage to regulation, with broad acceptance limits rather than quality improvement programmes based on biological goals or clinical decision points, may inhibit improvement and encourage laboratories to adopt 'special' practices in dealing with PT samples.5 A succinct summary of the differences between PT and EQA provided by the International Federation of Clinical Chemistry (IFCC) is given in table 1.6 Essentially, the primary intention of an EQA programme (EQAP) in pathology is to support quality improvements for the benefit of patients. Clinical laboratories in the UK have a long history of quality assurance through participation in EQA,7 and although often referred to as EQA, the participation of laboratories fits more closely with the definition of an EQAP. Although the terms may be used interchangeably, what is key is that laboratory directors should focus on how their PT/EQA provider supports their quality management and improvement processes, and on where they require additional support or processes. Throughout the rest of this article we use EQA to include PT, and consider how EQA can impact on laboratory quality and patient safety.

DOES PARTICIPATION IN EQA IMPROVE PATIENT CARE AND SAFETY?
There is not a wealth of evidence that can be drawn upon to support this at present. One issue is that EQA schemes cannot improve analytical quality per se; they can only, at best, identify problems. As such, only changes to laboratory procedures, processes or methods may improve the quality of the laboratory and its services.




Table 1  Summary of differences between proficiency testing (PT) and external quality assessment (EQA)

PT: laboratory performance evaluation for regulatory purposes.

External Quality Assessment Schemes (EQAS): laboratory performance and method evaluation; educational.

External Quality Assessment Programmes (EQAP): interlaboratory comparisons designed and operated to assure one or more of: participant performance (analytical, interpretive, clinical advice); method performance evaluation; in vitro diagnostic device vigilance; education; training and help.

From IFCC guidelines.6 IFCC, International Federation of Clinical Chemistry.

The Centers for Disease Control and Prevention (CDC) in the USA commissioned a report, Review of Proficiency Testing Services for Clinical Laboratories in the United States (2008),8 and found that, following the implementation of CLIA regulations, PT performance in regulated laboratories improved. Perhaps the key element here was defining a standard that service providers are expected to achieve, which implies the benefit of defining a 'target' rather than looking purely at peer comparisons. It is equally important that the target is appropriate, and that the system does not induce special practices to achieve 'compliance' rather than using EQA as an aid to improving quality. The report quotes examples of studies which indicate that PT performance improves with participation in PT.8 Although this could be attributable to laboratories learning 'to play the game', the report identified six other factors which contributed to reduced failure rates:
1. elimination of chronic poor performers from the pool of participating laboratories/correction of chronic problems by laboratories remaining in the pool
2. improved PT materials and report forms
3. familiarity with the programme by participants
4. identification of problems with methods and their correction
5. adoption of more accurate and reproducible methods
6. generally improved technical education and technical performance.
From this, it is evident that items 4–6 reflect a laboratory's response to potentially poor performance in PT, by improving quality processes. Item 2 reinforces the responsibility of providers to provide programmes that enable users to make quality improvement a reality. Although PT/EQA has the potential to influence analytical performance, defining appropriate criteria for 'poor performance' is essential. An important benefit of EQA is that it allows those commissioning laboratory services, or even end users including patients, to gain assurance that analytical performance is as specified or expected, and clinically relevant. Openness regarding EQA performance should be encouraged between laboratories and their users. EQA forms part of that assurance process, being complementary to an appropriate standard of accreditation. Again, there is a potential benefit to patients and commissioners. In 2004, the National Institute of Standards and Technology released a report on the impact of calibration error on clinical decision making and healthcare costs; this impact was estimated to be up to US$199m per year in the USA.9

If PT relates to demonstration of the analytical performance of laboratories, the key elements which set EQA apart from PT are:
▸ Education and support. EQA providers should be able to provide support for laboratories in resolving poor performance, being knowledgeable in aspects of method application and problems.
▸ Method performance. Identification of poor performance issues related to methods rather than to individual laboratories; EQA providers may be among the early identifiers of potential quality problems.
▸ Assessing a laboratory's ability to report appropriately where samples may pose a challenge, for example, in the presence of an interferent or antigen excess.
EQA is only part of the equation in quality management and improvement. Assessment of some elements of the pathway may not be feasible, or may be subject to factors outwith the control of the laboratory, for example, the competency of the requestor in selecting appropriate tests, although quality laboratories will provide tools to aid and educate requestors. The interpretation placed on the laboratory output may again depend on external factors, although accredited laboratories will have staff able to comment on results and provide support as required. Although not yet mandatory in the UK, a number of interpretive EQA schemes are available. One element which has, as yet, received little attention in the EQA process is EQA of the 'pre-analytical phase'. According to International Laboratory Accreditation Cooperation (ILAC) guidance for the implementation of a medical laboratory accreditation system (clause 5.6.4),10 EQA programmes should, as far as possible, provide clinically relevant challenges that mimic patient samples and have the effect of checking the entire examination process, including pre-examination and postexamination procedures.

EQA CYCLE
An EQA cycle involves a number of participants, namely: the laboratory 'general', the laboratory 'analytical', the laboratory 'quality improvement' and the EQA provider. Outside of this cycle are any oversight bodies, for example, within the UK, the National Quality Assessment Advisory Panels (NQAAPs) and the Joint Working Group on Quality Assessment of the Royal College of Pathologists (JWG). It should not be forgotten that the purpose of EQA schemes for clinical laboratories is to form part of a process ensuring patient safety. The roles, responsibilities and expectations of each of these elements are discussed below.

GENERAL LABORATORY PRACTICE
Choice of EQA programme
Within the UK there is currently no requirement to participate in EQA, as neither accreditation with CPA (Clinical Pathology Accreditation) nor accreditation to ISO 15189 is mandatory. Laboratories that are accredited may choose from a number of potential EQA providers, the only requirement of the accrediting body being that the laboratory must participate in EQA. The choice of EQA provider lies with the laboratory, and a number of factors may influence it (box 1), including a preference for accredited EQA schemes11 which have an appropriate, independent medical and scientific committee, participate in postmarketing vigilance of in vitro diagnostics,12 and can demonstrate the use of commutable materials.13 Laboratories should be clear that their chosen EQA provider has programmes which meet their requirements.



Box 1 Factors influencing choice of External Quality Assessment (EQA) scheme
1. Accreditation status of provider. Preference should be given to schemes accredited to ISO/IEC 17043 or equivalent (eg, those still Clinical Pathology Accreditation (CPA) accredited within the UK). If a non-accredited provider is chosen, the reason(s) should be clearly documented. Under International Laboratory Accreditation Cooperation (ILAC) policy,11 accreditation bodies should support the use of appropriate proficiency testing programmes which meet the essential requirements of ISO/IEC 17043, where applicable.
2. Appropriateness of distribution frequency. Distributions should be at a frequency sufficient to identify performance issues in a timely manner. For core tests, this probably equates to at least monthly distributions.
3. Range and number of EQA samples. Samples within the distribution cycle should cover an appropriate range of values for each analyte to verify performance across clinically relevant concentrations. Each cycle should supply sufficient samples to provide evidence of reproducibility; 3–4 samples in each distribution would probably fulfil this requirement. Samples should be 'blinded' to participants in relation to expected results.
4. Scheme management and development. The scheme should be designed and overseen by appropriately competent professionals (clinical, technical and statistical). The scheme should also have an independent medical and scientific committee.12
5. Poor performance issues. Mechanisms should be in place for reporting of poor performance to the appropriate regulatory/oversight body.
6. Variety of samples provided. 'Challenging' samples should be included in selected distributions.
7. Education. Educational input should be provided.
8. Manufacturers. Participation of the EQA provider in postmarketing vigilance of in vitro diagnostics.12
9. Materials. EQA providers should demonstrate use of commutable materials.13

Within box 1, points 3, 6 and 9 are essential if a scheme is to provide clinically relevant challenges that mimic a patient sample. The use of appropriate samples is an important aspect of an EQAP, as a laboratory can be falsely reassured that its performance is good if non-commutable materials are used. Given the fundamental role that participation in EQA plays as part of a laboratory's overall quality programme, laboratories should be encouraged to be proactive in their choice of EQA providers, documenting the reasons for the selection of specific schemes.
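As an illustration of such documentation only (not from the paper), the sketch below records a scheme choice against a few of the box 1 factors. The class and field names are invented for illustration and do not correspond to any standard, accreditation requirement or scheme interface.

```python
# Illustrative sketch only: a hypothetical record of the rationale for choosing
# an EQA scheme against some of the box 1 factors. All names are invented.
# Requires Python 3.9+ for the list[str] annotation.
from dataclasses import dataclass, field

@dataclass
class EQASchemeSelection:
    scheme_name: str
    iso_17043_accredited: bool            # box 1, point 1
    distributions_per_year: int           # box 1, point 2
    samples_per_distribution: int         # box 1, point 3
    commutable_materials: bool            # box 1, point 9
    rationale: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """Produce a short, human-readable entry for the laboratory's quality records."""
        lines = [
            f"Scheme: {self.scheme_name}",
            f"ISO/IEC 17043 accredited: {self.iso_17043_accredited}",
            f"Distributions per year: {self.distributions_per_year}",
            f"Samples per distribution: {self.samples_per_distribution}",
            f"Commutable materials: {self.commutable_materials}",
        ]
        lines += [f"Rationale: {r}" for r in self.rationale]
        return "\n".join(lines)

# Example with hypothetical values:
print(EQASchemeSelection("Example serum chemistry scheme", True, 12, 4, True,
                         rationale=["Monthly distributions cover core analytes",
                                    "Independent medical and scientific committee"]).summary())
```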

THE LABORATORY: ANALYTICAL
Sample handling
If the EQA scheme is providing samples that mimic a patient's sample as closely as possible, and EQA is used to assess preanalytical and postanalytical aspects as well as analytical performance, then it is imperative that EQA samples are treated in the laboratory as if they were patient samples. This should lead to the inclusion of EQA samples as soon as possible into the normal flow of patient samples within the laboratory, subject to no special handling.

For example, holding the results of an EQA sample until 'good' IQC is confirmed should be considered unacceptable practice. If EQA samples are treated as patient samples, it also follows that they should be reported to the EQA scheme in the same format as patient samples (eg, units, decimal places, 'trace', 'positive', etc). For example, in the UK, all clinical laboratories are expected to report paracetamol in mg/L. Laboratories should not be measuring and reporting in SI units and then converting to mg/L for EQA purposes, as this does not reflect the normal laboratory process.
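As a minimal illustration only (not from the paper), a laboratory could routinely check that an EQA return matches the format used on patient reports. The format table, analyte names and function below are hypothetical, not part of any EQA scheme's actual submission interface.

```python
# Illustrative sketch only: check that an EQA return uses the same reporting
# format (units, decimal places) as the laboratory's patient reports.
# The format table and analyte names are hypothetical.
PATIENT_REPORT_FORMAT = {
    # analyte: (units, decimal places) as used on patient reports
    "paracetamol": ("mg/L", 0),
    "creatinine": ("umol/L", 0),
}

def check_eqa_return(analyte: str, units: str, decimal_places: int) -> list:
    """Return a list of discrepancies between an EQA return and the patient-report format."""
    expected_units, expected_dp = PATIENT_REPORT_FORMAT[analyte]
    problems = []
    if units != expected_units:
        problems.append(f"{analyte}: EQA return uses {units}, patient reports use {expected_units}")
    if decimal_places != expected_dp:
        problems.append(f"{analyte}: EQA return uses {decimal_places} dp, patient reports use {expected_dp} dp")
    return problems

# A result converted to different units purely for the EQA return would be flagged:
print(check_eqa_return("paracetamol", "umol/L", 0))
```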

Multiple analysers
Perhaps the area which causes most discussion relates to what should happen with EQA in a laboratory with multiple identical analysers. If we start from the viewpoint that the purpose of EQA is to enable a laboratory to maintain and improve the quality of its service, it follows that each analyser should participate in an EQA scheme in order to provide assurance of its performance. This is likely to require a degree of manipulation of process to ensure that EQA samples are processed by each analyser, and will mean that EQA samples are not treated in exactly the same way as a patient sample, although this will be only in the context of how the samples are presented to the analyser. Consider the following. A laboratory has three analysers, A, B and C, on a track system, all available to measure serum creatinine, and the laboratory wishes to ensure that performance is within the Minimum Analytical Performance Standards (MAPS) criteria.14 The probability of an EQA sample being analysed by analyser A is 1 in 3 in the first month. In the subsequent month, the probability of the EQA sample being analysed by analyser B is again 1 in 3, and in month 3, the probability of analysis by analyser C is again 1 in 3. Over a 3-month period, the probability of each analyser being used once is only 22%; it is more likely that only two of the three will be used for EQA (67%) (a short calculation reproducing these figures is given at the end of this section). If the laboratory lies towards one side of the distribution of results, how will it be able to evidence that, overall, its performance is within MAPS, and be aware of potential performance issues with one of its analysers? Perhaps more concerning is that poor performance might never come to light, with the laboratory potentially issuing results on one-third of samples from an analyser with performance issues. Within the UK, it is the consistency of performance which is assessed through the reporting by EQA schemes to the NQAAPs, rather than a simple pass/fail on a single distribution. Therefore, each analyser should participate in EQA separately. This then leads on to the need for a definition of an 'analyser', which requires pragmatism but should fulfil the intention of EQA outlined above. A simple definition would be an analytical instrument (whether or not connected to others by a track or similar) which, under normal circumstances, can be operated in a stand-alone mode and be loaded with samples directly. For example, a track may have two immunoassay analysers performing cortisol; the performance of both analysers should be assessed separately in EQA. There has recently been the suggestion that, because a blood-gas analyser can compare QC values across all networked instruments via software, only one analyser in the network need participate in EQA. That is an example of practice which would not fulfil the intention of EQA. As the QC samples may not enter the 'system' in the same manner as EQA samples, they would not fulfil the purpose of EQA which, as stated above in accordance with ISO 15189, is to provide clinically relevant challenges that mimic patient samples and have the effect of checking the entire examination process, including pre-examination and postexamination procedures.
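The 22% and 67% figures quoted above can be reproduced by enumerating the equally likely ways in which three monthly EQA samples can be routed to three analysers. The sketch below is a minimal illustration, assuming each analyser is equally likely to receive the sample each month; it is not part of any EQA scheme's methodology.

```python
# Illustrative sketch: enumerate the 27 equally likely ways in which three
# consecutive monthly EQA samples can be routed to three identical analysers
# (A, B, C), assuming each analyser is equally likely to receive the sample.
from itertools import product
from fractions import Fraction

outcomes = list(product("ABC", repeat=3))                  # all 3-month routing sequences
n = len(outcomes)                                          # 27

all_three = sum(1 for o in outcomes if len(set(o)) == 3)   # every analyser challenged once
only_two = sum(1 for o in outcomes if len(set(o)) == 2)    # exactly two analysers challenged

print(Fraction(all_three, n), float(Fraction(all_three, n)))  # 2/9  ~ 0.22
print(Fraction(only_two, n), float(Fraction(only_two, n)))    # 2/3  ~ 0.67
```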



LABORATORY: QUALITY IMPROVEMENT
Review of performance
EQA requires proper staff engagement to be effective, with appropriate review of reports. EQA returns should be reviewed by designated individuals with appropriate competencies; it would seem illogical for this to be left to random individuals who may not be familiar with issues arising from different techniques. It should be expected that the Head of the Department is included in this process, and that the Head of the Department is known to the EQA provider so that they can be contacted in the event of performance issues. Within the UK, the expectation of the NQAAP for Chemical Pathology is that the EQA scheme organiser communicates directly with the departmental head where issues of poor performance arise, and EQA providers, with the aid of the participating laboratories, should keep an up-to-date record of contact details. As EQA should form part of the overall quality assurance programme within a laboratory, laboratory staff in general should be aware of the laboratory's performance and of any issues arising. Mechanisms to communicate EQA performance and issues within laboratories should be in place, and this can be particularly important for departments operating across several sites, especially if there is staff rotation between sites.

Engagement with oversight bodies
Within the UK, where there are issues of persistent poor performance, the NQAAP or, if necessary, the Joint Working Group of the College communicates directly with the Head of the Department, who holds overall responsibility for performance and for the resolution of performance issues. Additionally, within the UK, where poor performance issues are identified, the EQA scheme organiser is expected to contact the laboratory to ascertain the reason for poor performance and to offer support in resolving issues. Laboratories should have in place a process enabling a prompt response to contact from scheme organisers, the NQAAP or the JWG.

EQA PROVIDER
It is evident from the above that the relationship between the EQA provider and the laboratory is key to enhancing laboratory quality, and that places some responsibilities on EQA providers. The initial choice of EQA scheme (see box 1) requires information from EQA providers which should be readily available.

Definition of poor performance
If EQA is to be used to improve quality, common definitions of poor performance need to be in place. In the UK, EQA schemes which report to the relevant NQAAP have to agree poor performance criteria with the respective NQAAPs. Where there is more than one provider for the same analyte, it is possible that definitions of poor performance may differ between EQA providers, particularly as, according to the ISO/IEC 17043 standard, it is up to the EQA provider to define performance criteria. This creates the potential for EQA schemes not to be equivalent in their detection of poor performance.15 EQA providers should take account of local regulatory requirements where they exist, and where they do not, work with local recognised professional groups in defining performance criteria that are acceptable and appropriate, and where oversight bodies exist, work with them to assure that poor performers are adequately identified. Within the UK, initial steps have been taken to define and pilot a definition of poor performance to be applied across EQA providers through Minimum Analytical Performance Standards (MAPS) for serum/plasma total cholesterol, HDL-cholesterol, glucose and creatinine, and blood HbA1c, to avoid difficulties in defining poor performance and to make an assessment of performance which is clinically relevant rather than a pure 'statistical' ranking.
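To illustrate the difference between a clinically anchored limit and a purely statistical ranking, the sketch below (not from the paper) compares a returned result against both a target-based percentage limit and a peer-group z-score. The 12% allowable deviation and all of the numbers are hypothetical placeholders, not the published MAPS figures.

```python
# Illustrative sketch: contrast a clinically anchored limit (a MAPS-style
# maximum allowable deviation from the target value) with a purely statistical
# peer ranking (a z-score against the peer-group mean and SD). The 12% limit
# and all numbers below are hypothetical placeholders.
def assess(result: float, target: float, peer_mean: float, peer_sd: float,
           allowable_deviation_pct: float = 12.0) -> dict:
    deviation_pct = 100.0 * (result - target) / target
    z_score = (result - peer_mean) / peer_sd
    return {
        "deviation_%": round(deviation_pct, 1),
        "within_clinical_limit": abs(deviation_pct) <= allowable_deviation_pct,
        "z_score_vs_peers": round(z_score, 2),
    }

# A result can sit close to a (biased) peer group, and so rank well
# statistically, while still breaching a clinically based limit on deviation
# from the target value:
print(assess(result=115.0, target=100.0, peer_mean=114.0, peer_sd=3.0))
```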

Organisation of distributions
For a true EQA programme, the 'samples' included in the programme should be sufficient to challenge performance at clinical decision points and across an appropriate range of values. The frequency of distributions and the number of samples distributed vary widely across different EQA schemes. The European Committee for Quality Assurance Programmes in Laboratory Medicine (EQALM) reported a survey in 2009 with responses from 22 organisations representing 407 schemes.16 Not surprisingly, there was wide variation in the frequency of distributions and the number of samples, although most schemes provided between one and three samples per round. The survey showed a significant degree of agreement on a number of key questions, with strong positive agreement that a laboratory need participate in only one scheme where analyte and matrix are the same (eg, glucose in blood and serum), but in a different scheme where the matrix differs (eg, glucose in blood and CSF). The survey also revealed strong disagreement that 12 samples in one annual distribution are equivalent to 12 distributions of one sample (an illustrative sketch of one reason why appears at the end of this section). Where EQA schemes are used to assess in vitro diagnostics (IVDs), BS EN 14136:2004 requires a minimum of six distributions per year.12 In the absence of any local regulations or guidance, EQA providers will organise their schemes according to their own beliefs and evidence and, as in 'Choice of EQA programme' above, laboratories should ensure that the frequency and number of samples in a distribution are adequate for their particular needs. With regard to EQA of molecular biology tests, the survey showed an even split as to whether individual 'tests' require EQA or whether it is the technique which needs to be assessed.
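One way to see why respondents did not regard 12 samples in one annual distribution as equivalent to 12 distributions of one sample is to consider how long an analytical problem can remain unchallenged. The sketch below is an illustrative reasoning aid only, under the simple assumption that a fault can arise at a uniformly random time of year and is first challenged at the next distribution; it is not an analysis from the paper or the EQALM survey.

```python
# Illustrative reasoning aid only (assumption: a fault begins at a uniformly
# random time during the year and is first challenged at the next EQA
# distribution). The expected delay before the fault meets an EQA sample
# depends on how the 12 annual samples are spread, even though the total
# sample count is the same.
def mean_delay_to_next_distribution_months(distributions_per_year: int) -> float:
    """Expected wait from fault onset to the next distribution, in months."""
    interval_months = 12 / distributions_per_year
    return interval_months / 2   # uniform onset within an interval -> half the interval on average

print(mean_delay_to_next_distribution_months(12))  # 0.5 months with 12 monthly distributions
print(mean_delay_to_next_distribution_months(1))   # 6.0 months with one annual distribution of 12 samples
```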

Postmarketing vigilance of in vitro diagnostics
Providers of EQA programmes should be active in this area, as the aggregated view of performance issues that they see may provide an early indication of problems with a particular method which may affect patient safety or management.


OVERSIGHT BODIES

As has been mentioned previously, oversight of EQA performance in the UK is provided by the Joint Working Group for Quality Assessment of the Royal College of Pathologists and its various NQAAPs, further information on which is available online.17 For oversight bodies to function appropriately, definitions of poor performance must be capable of being applied across all EQA providers operating within the region or country that the oversight body serves. That is not to say that all EQA schemes must adopt the same practices but, rather, that detection of defined poor performance is independent of the EQA scheme. In some jurisdictions, that may be the result of legislation or professional agreement. Where this does not exist, responsibility for assessing performance must lie with the laboratory director to ensure that the quality of results and advice being produced is of an appropriate standard. Given appropriate support from EQA providers, and a proper quality assurance programme within the laboratory, referral to oversight bodies should be a rare occurrence and, in our experience, often has more to do with a laboratory not engaging with the EQA provider when performance issues arise.



Box 2 Key elements of External Quality Assessment (EQA) 'best practice'

Laboratory
▸ Proactive, documented choice of EQA scheme, incorporating the factors listed in box 1.
▸ Samples are handled as far as possible in the same manner as patient samples, and are not subject to any special measures, for example, being held until internal quality control (IQC) is 'good'.
▸ Results should be reported to the EQA scheme in the same format as patient samples (units, decimal places, trace, positive, etc).
▸ Each individual 'analyser' should participate separately in EQA for all tests performed on it.
▸ As part of the engagement between laboratory and EQA provider, EQA providers should have up-to-date contact details of the head of department for when poor performance issues arise.
▸ EQA forms part of a wider quality process engaging all laboratory staff.

EQA provider
▸ EQA schemes should readily be able to provide information allowing laboratories to make an informed choice according to the factors listed in box 1. Some providers may wish to limit their scheme to proficiency testing, but information should be available to allow laboratories to ascertain whether the scheme meets their requirements.
▸ EQA schemes should report persistent poor performance, as defined by oversight bodies, in accordance with the defined mechanism.
▸ Postmarketing vigilance is seen as an integral function of EQA.

CONCLUSIONS
EQA programmes are a key element of a laboratory's quality assurance framework, and are important for maintaining and, perhaps equally important these days, evidencing the quality of a laboratory's output. An EQA programme, if engaged with properly, comprises more than slavish adherence to PT programme participation, and positively benefits laboratory quality and patient care. A framework of EQA programmes encouraged by accreditation will lead to laboratories, and the professionals involved in them, challenging their overall performance and continuously striving to improve it. Focus on PT aspects alone may assure nothing more than the maintenance of, at best, average performance and, at worst, performance which may not be clinically relevant. Elements discussed in this article which may be expected to be seen in laboratories exhibiting 'best practice', or, perhaps better put, genuine engagement in EQA programmes, are summarised in box 2. We do not suggest that this list is exhaustive, but these elements could be described as 'essential' in laboratories that have quality as a focus. An EQA programme should not be seen purely as a 'tick box' for accreditation, but as a proactive process, from the choice of EQA scheme to the involvement of laboratory staff in reviewing EQA performance.

Contributors The content of the article reflects discussions held at NQAAP meetings, at which all coauthors contributed. DJ wrote the article, which was then reviewed by all coauthors and further edited. Final approval was sought and given by all coauthors.

Competing interests None.

Provenance and peer review Commissioned; externally peer reviewed.

REFERENCES
1 https://www.wp.dh.gov.uk/publications/files/2013/01/Bruce-Keogh-letter-Pathology-Quality-Assurance-Review.pdf (accessed Feb 2014).
2 Device Bulletin: management and use of IVD point of care test devices. MHRA DB2010(2), February 2010.
3 Clinical and Laboratory Standards Institute (CLSI). Using proficiency testing to improve the clinical laboratory. Approved guideline, 2nd edn. CLSI document GP27-A2. Wayne, PA: Clinical and Laboratory Standards Institute, 2007.
4 Clinical Laboratory Improvement Amendments (CLIA) of 1988. US Public Law 100–575.
5 Cembrowski GS, Vanderlinde RE. Survey of special practices associated with College of American Pathologists proficiency testing in the Commonwealth of Pennsylvania. Arch Pathol Med 1998;112:374–7.
6 Maziotta D, Harel D, Schumann G, et al. Guidelines for the requirement of competence of EQAP organizers in medical laboratories. IFCC/EMD/C-AQ, 2003.
7 http://www.england.nhs.uk/wp-content/uploads/2014/01/path-qa-review.pdf (accessed Feb 2014).
8 Peterson JC, Hill RH, Black RS, et al. Review of the proficiency testing services for clinical laboratories in the United States. Final report of a Technical Working Group. Division of Laboratory Systems, CDC, April 2008.
9 Gallaher MP, Mobley LR, Klee GG, et al. The impact of calibration error in medical decision making (planning report 04-1). National Institute of Standards and Technology, 2004.
10 Guidance for the implementation of a medical laboratory accreditation system. ILAC-G26:07/2012.
11 Policy for participation in proficiency testing activities. ILAC-P9:11/2010.
12 Use of external quality assessment schemes in the assessment of the performance of in vitro diagnostic examination procedures. BS EN 14136:2004.
13 Kristensen GBB, Christensen NG, Thue G, et al. Between-lot variation in external quality assessment of glucose: clinical importance and effect on participant performance evaluation. Clin Chem 2005;51:1632–6.
14 Minimum Analytical Performance Standards (MAPS), July 2010. http://www.rcpath.org/Resources/RCPath/Migrated%20Resources/Documents/N/NQAAP_ChemicalPath_MAPS%20pilot2010_22.05.13.pdf (accessed Feb 2014).
15 Carobene A, Franzini C, Ceriotti F. Comparison of the results from two different external quality assurance schemes supports the utility of robust quality specifications. Clin Chem Lab Med 2011;49:1143–9.
16 Thomas A. External quality assessment in laboratory medicine: is there a rationale to determine frequency of surveys? Accred Qual Assur 2009;14:439–44.
17 http://www.rcpath.org/committees/intercollegiate-and-joint-committees/joint-working-group-for-quality-assessment-in-pathology/joint-working-group-for-quality-assessment-in-pathology.htm (accessed Feb 2014).
