EDITORIALS

Systems Agencies to use existing data to the maximum extent practicable. At the present level of funding, HSAs do not have the resources to collect supplementary data independently, even if expressly permitted to do so. To solve this problem, Congress should give serious consideration to the appropriation of supplementary funds for data collection and analysis. This can be justified by comparing the amounts of money spent on health services delivery and on health systems planning: in 1977 we spent $737 per capita on the delivery of health care,4 and only 50 cents per capita, with minor additions from matching funds, on planning the delivery system.5 The average HSA covers a population of about 1 million. If 10 cents per capita were added to the HSA budget nationally, the average HSA would have $100,000 to use for collecting additional data for planning. In allocating these monies, a suitable formula should be used to accommodate the special needs of small HSAs, since the need for financial assistance is not strictly proportional to the population size of the area. This amount could be used to conduct approximately 4,000 telephone interviews, assuming that $40,000 were spent on personnel and the maintenance of an office, with the rest spent on interviews performed in-house at $15 per interview using reduced-cost telephone service. In many parts of the country, university faculty may well be interested in contributing to the preparation of the survey design, and to data collection and analysis, because of their own interests in education and research.

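The interview arithmetic implied here can be made explicit; the following is simply a restatement of the figures given above ($100,000 budget, $40,000 for personnel and office, $15 per interview), not additional data:

\[
\frac{\$100{,}000 - \$40{,}000}{\$15\ \text{per interview}} = 4{,}000\ \text{interviews}
\]
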
Telephone interviews have several disadvantages as well as advantages.6 In the United States we do not generally have the advantage of up-to-date and complete lists of households, as is required by law in Canada; a varying proportion of households either do not have telephones or have unlisted numbers; and telephone directories often do not cover the same geographic area that is to be studied. The validity of the responses given over the telephone may, as reported by Siemiatycki,1 be limited, but other investigators have found that there is little or no difference between telephone and face-to-face interviews. In a comparative study of face-to-face and telephone surveys involving 200 different measures, only a few statistically significant differences were found between the two approaches.7 The advantages of telephone over face-to-face surveys are the lower cost and, especially in urban areas, the high level of access to households. On balance, telephone surveys should be seriously considered as a way of reducing the existing gaps in our knowledge of important variables necessary for intelligent health planning. With relatively little increase in expenditure, community data used by HSAs for planning purposes could be improved and the amount of guesswork significantly reduced.

HARRY T. PHILLIPS, MD, DPH
ANGELL G. BEZA, AB

Address reprint requests to Harry T. Phillips, MD, DPH, Professor, Department of Health Administration, University of North Carolina, School of Public Health, Chapel Hill, NC 27514. Angell G. Beza is Associate Director for Research Design, Institute for Research in Social Science, UNC Chapel Hill.

REFERENCES
1. Siemiatycki J: A comparison of mail, telephone and home interview strategies for household health surveys. Am J Public Health 69:238-245, 1979.
2. U.S. Bureau of the Census: Census Use Study: Data Uses in Health Planning. Report #8, 1970. (See also other health reports in the Census Use Study series.)
3. Wennberg J and Gittelsohn A: Small area variations in health care delivery. Science 182:1102-1108, 1973.
4. Gibson RM and Fisher CR: National health expenditures, fiscal year 1977. Soc Sec Bull 41:3-20, 1978.
5. National Health Planning and Resources Development Act of 1974 (PL 93-641), Section 1516(b).
6. Dillman DA: Mail and Telephone Surveys: The Total Design Method. New York: John Wiley and Sons, 1978.
7. Groves RM: Comparing telephone and personal interview surveys. In: Economic Outlook USA, Summer 1978, pp 49-51.

Editor's Report: Peer Review

As is our wont, we publish this month the names of those who have functioned namelessly as referees of the manuscripts submitted to us during the past calendar year. In 1978 we received 639 papers, somewhat fewer than 20 per cent of which have been or will be published; these figures seem to have stabilized over the past two years. The review process followed by this Journal falls midway between those journals which send all articles out for review and those which send out only a selected few.1 For about one out of three papers we receive, a decision (usually not to publish) is made by the Editor, either alone or with the help of the Editorial Board. The remaining papers are sent out to two or more of the referees listed elsewhere in the Journal this month. Their advice is sought, and usually followed; their criticisms and comments are almost always helpful to both the author(s) of the paper and to the Editor on whom the burden of decision rests.

The peer review institution has been both criticized and defended, but rarely studied in objective and scientifically valid fashion. One aspect of peer review frequently cited in arguments against it is the lack of concordance between referees. Given three options (Accept, Accept if Revised, Reject), our figures are 57 per cent for complete agreement and 9 per cent for complete disagreement (accept vs. reject). These are rather similar to figures reported from at least one other biomedical journal2 and seem well within the range of reliability of clinical judgments.3 The low level of agreement is far from reassuring, but the bald figures may be deceptive, perhaps even misleading. It is obvious, to me at least from my reading of referee comments, that checking one of the three "option" boxes does not mean the same thing to everyone, especially in the case of the middle option. In my experience, true discordance is far less than 9 per cent and true concordance is higher than 57 per cent. The problem of applying significance tests to such measurements is a complicated one, and has been reviewed by Koran.3 Insufficient data are available to apply the Kappa statistic to this particular exercise.
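For readers unfamiliar with it, the Kappa statistic corrects raw agreement for the agreement expected by chance alone. The formula below is the standard definition (Cohen's kappa), offered for illustration only; it is not a calculation from the referee data above:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the observed proportion of referee pairs in complete agreement and \(p_e\) is the proportion that would be expected if each referee checked the three option boxes independently at his own marginal rates.
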
This type of reasoning can easily pass for rationalization. There is every reason to consider the peer review system on a priori grounds as the institutionalization of a freeze-out of newcomers and innovation. Indeed, the "Matthew effect"4 (unto everyone who hath shall be given) has been attacked with both vigor and conviction, albeit with incomplete and inaccurate citation of available evidence.5,6 The oft-quoted example of Thomas Henry Huxley,* the most recent example of a Nobel Prizewinner,** and the Lancet rejection letter which another Nobel Prizewinner framed,*** lend anecdotal color to the accusation. Evidence, of a relatively limited and incomplete nature, that a high proportion of manuscripts rejected by one journal eventually find their publication outlet in another journal, adds some weight to the accusation.9,10 And, of course, it is just as easy to interpret concordance between referees as one more indication of the intransigence of the "old-boy network" as to assert that referee discordance proves the worthlessness of the system.

Considering the heat which these arguments can generate, it is surprising that the scientific mind has so rarely addressed them. The only study of the peer review process of which I am aware that can stand on its own merit† was reported by Harriet Zuckerman and Robert Merton in 1971.11 They examined a sample of contributors to the Physical Review and concluded that although more manuscripts were assigned proportionately to referees of higher status than the authors of a paper, no evidence could be found that status differences affected referee recommendations. However, as Ingelfinger has pointed out,2 the Physical Review is a unique publication: it publishes 80 per cent of the single-authored manuscripts which Zuckerman and Merton sampled, and decisions are made with the help of referees in only 23 per cent of the manuscripts received. Physics is a hard science with well established standards. The findings may not be applicable to biomedical and social science journals with high rejection rates.

A number of ways to enhance reviewer agreement and neutralize presumed bias have been suggested, some of them pretty wild. The Journal has opted for one that most others say is not possible: trying to ensure anonymity of author as well as of reviewer. Removing the identity of the author is not always possible, but it seems to work often enough for us to warrant continuation. Furthermore, most of our referees like it, and it has given us the opportunity to study outcomes in relation to referee knowledge of author identity. This is not the place to present or discuss the data collected. However, I can say that after three years of editing this Journal I doubt that any system or method of processing can eliminate human weakness, and I can think of no better solution to the problems of peer review, if there are any, than to trust that all referees and editors will follow the advice of Alexander Pope:

"But you who seek to give and merit fame,
And justly bear a critic's noble name,
Be sure yourself and your own reach to know,
How far your genius, taste and learning go;
Launch not beyond your depth, but be discreet,
And mark that point where sense and dullness meet."
from An Essay on Criticism, 1711

*"I know that the paper I have just sent in (to the Royal Society) is very original and of some importance, and I am equally sure that if it is referred to the judgement of 'my particular friend' . . . it will not be published . . ."7
**Nobel Award address by Dr. Rosalyn Yalow as cited by reference 6.
***Cited by Sir Theodore Fox, former Lancet editor.8
†A number of other studies have been reported dealing with social science journals, but their research designs are far weaker than the Zuckerman/Merton effort.

ALFRED YANKAUER, MD, MPH

REFERENCES
1. Douglas-Wilson I: Editorial review: peerless pronouncements. N Engl J Med 296:877, 1977.
2. Ingelfinger FJ: Peer review in biomedical publications. Am J Med 56:686-692, 1974.
3. Koran LM: The reliability of clinical methods, data and judgments. N Engl J Med 293:642-646, 695-701, 1975.
4. Merton RK: The Matthew effect in science. Science 159:56-63, 1968.
5. Gordon M: Evaluating the evaluators. New Sci 73:342-343, 1977.
6. Commoner B: Peering at peer review. Hosp Pract 35:25, 1978 (Nov.)
7. Huxley L: Life and Letters of Thomas Henry Huxley. Macmillan, London, 1900, as cited by reference 11 below.
8. Fox T: Crisis in Communication. Oxford Univ. Press, New York, 1965.
9. Relman AS: Are we a filter or a sponge? N Engl J Med 299:197, 1978.
10. Wilson JD: Peer review and publication. J Clin Invest 61:1697-1701, 1978.
11. Zuckerman H and Merton RK: Patterns of evaluation in science: institutionalization, structure and functions of the peer review system. Minerva 9:66-100, 1971.
