Int. J. Nurs. Stud., Vol. 28, No. 1, pp. 71-81, 1991.
Printed in Great Britain.
0020-7489/91 $3.00+0.00
© 1991 Pergamon Press plc

The experiment: Is it worthwhile?

JENIFER WILSON-BARNETT, Ph.D., S.R.N., F.R.C.N.
Professor and Head of Department, Department of Nursing Studies, King's College London, U.K.

Abstract-Parallel developments in social and educational research as well as more explicit humanistic philosophies of nursing have influenced views on what types of investigation are acceptable or useful to this field. Shifts in opinion towards more participative and qualitative studies reflect the move away from the predominant traditions of medical research and the experiment. This paper attempts to explore the reasons for such a change and examine the related criticisms of the positivist school and in particular deductive experimental approaches in order to assess whether such approaches should continue to have a role in building nursing knowledge.

Introduction

Few authors disagree that nursing should be a holistic art and science that recognises the multiplicity of factors within the social and physical environment which affect human beings during their lifespan. Such dynamic systems, like the family or the health service, are constantly changing, and one part of such a system will inevitably be influenced by events in another. Accounting for the interactive complexity in health care and in research presents enormous challenges. Over the last 20 years some methodologists in the social and educational sciences have devised approaches which reject the classical scientific assumptions of experimentation as irrelevant to the study of human beings or systems. While attempting to represent social reality and understand influences and interpretations of events, they hold that humans should not be seen as objects of enquiry. They also hold that there is more to a system than can be defined and, lastly, that science is merely a product of the human mind (Susman and Evered, 1978).

Development and expansion of nursing research studies employing this increased variety of methods reflects the diversity of topics examined. When asking many different questions about practice, researchers need a growing range of methodological approaches. Such methods should accord with the nature of the question or problem to be addressed (Treece and Treece, 1977).

Respect for personal beliefs is one of the central principles of nursing practice. Exploration of another's belief system frequently leads to much greater appreciation and agreement. It therefore follows that those with firmly held methodological allegiances


should attempt to understand apparently contrary views. However, early attempts to criticise psychological nursing experiments have rejected their value totally on the grounds of incompatible beliefs about human inquiry (Greenwood, 1984).

Experimental studies in nursing have aimed to assess the effects of specific interventions and compare these with other established modes of treatment. Critics often see this form of research as purist and artificial, applying laboratory techniques to the real clinical world. However, methodologists such as Schwartz and Lellouch (1967) review differences in experimental approach, exploring the role of pragmatic trials, which disrupt real life to a minimal extent, in contrast to more controlled explanatory experiments. While the latter may lead to more understanding, pragmatic studies help to guide decisions on preferred treatments.

Reflections on these and other considerations have encouraged this author to discuss how the experiment can be modified and interpreted to provide useful evidence. They have been helpful in demonstrating that more careful exploration may reveal common ground or, if not, more understanding of what a particular approach may offer. If we accept that there are grounds for compromise or for combining approaches, advantages could be gained.

The Experimental Approach

In order to evaluate the experimental approach, the definitions, claims and traditions of experimentalists must be understood. Experimental methods were devised in order to answer the question "what if", provoked by John Stuart Mill (1873) and his followers, who were not content solely to observe and count phenomena, as practised by the Francis Bacon school. They felt the need to change things, to put nature to the test and evaluate the results. This approach was painstakingly elaborated into a methodological treatise (John Stuart Mill's A System of Logic, 1873) in which rules are based on the methods of agreement and of difference. By comparing situations where a phenomenon occurs which are either similar in property and results, or similar in property but not in results, influential factors may be isolated. In other words, rules of agreement can be applied in drawing causal inferences from the data. Repeated trials are needed to confirm the causal agent. His canon of concomitant variation produced the logic for the basic statistical model: whenever two phenomena vary together in a consistent or persistent manner, either the variations represent a direct causal connection between the two phenomena or both are being affected by some other causal factor.

Essential components of the experimental design are therefore controlled comparison and evaluation of manipulated change. Groups of similar subjects must be selected in an unbiased way and treated the same except for the one element introduced to influence the outcome. Treece and Treece (1977) represent this in the most simple way (Fig. 1).

                        Before      After
    Control group
    Experimental group

Fig. 1. Comparison and evaluation of manipulated change.


Data from all four cells are compared to ensure groups were not dissimilar before the introduction of the independent variable (or change) to the experimental group and to assess the extent of any difference afterwards. They also say that: “classical experimental design in which all four cells are used is an extremely valuable approach. Its worth is probably greater in the physical sciences than in the social sciences, however in both fields it is the most powerful approach of any known research technique for calculating data, testing hypotheses and defining cause-effect relationship.” (p. 154).
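As a hedged illustration (the figures are invented and the example is not from the paper), the four-cell comparison can be sketched as follows: the before-measures check that the groups were not dissimilar, and the difference in change between the groups estimates the effect of the independent variable.

```python
# Illustrative sketch only -- these figures are invented, not from the paper.
# The four cells of the classical design: before/after measures for a
# control group and an experimental group.
from statistics import mean

control      = {"before": [50, 52, 49, 51], "after": [51, 50, 50, 52]}
experimental = {"before": [50, 51, 49, 52], "after": [58, 60, 57, 59]}

# Cells 1 and 3: confirm the groups were not dissimilar at baseline
baseline_gap = mean(experimental["before"]) - mean(control["before"])

# Cells 2 and 4: the difference-in-change estimates the effect of the
# independent variable introduced to the experimental group
effect = (mean(experimental["after"]) - mean(experimental["before"])) - (
    mean(control["after"]) - mean(control["before"])
)

print(baseline_gap, effect)  # -> 0.0 7.75
```

Here the zero baseline gap supports the comparison, and the change in the control group is subtracted out so that 'naturally' occurring influences are not attributed to the intervention.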

Experimentalists thus state that the classic experimental design provides evidence on causal relationships. As Waltz and Barker Bausell (1981) state, after reviewing these claims of causal relationships: "One generalisation does stand out, however, and that is that everything being equal the quality of knowledge accruing from experimental studies is higher than from any other type of research if quality be defined on a causal continuum ..." and "other types of research do not result in this type of knowledge." (p. 201).

Social scientists and methodologists have also credited the experimental approach with great powers of explanation and prediction. For instance Riley (1967), writing on "sociological research: a case approach", said of the experiment: "a powerful design for testing hypotheses of causal relationships among variables. Ideally, in the experimental design the investigator throws into sharp relief the explanatory variables in which he is interested, controlling or manipulating the independent variable observing its effect on the dependent variable . . . and minimizing the effects of the extraneous variables which might confound his results" (p. 612).

In summary, the positive advantages offered by the experimental approach include the testing of hypotheses and the capacity to compare effects of interventions and to generate confidence intervals for estimated values, uncovering influences and patterns of interactions.

Experimentalists admit to the disadvantages and limitations of experimental research with human beings. As Seaman and Verhonick (1982) say, the experimental approach "may not work well with the study of human subjects, a complex and complicated process" and "there are few if any valid criterion measures, or measures of the dependent variable, available to indicate the effects of independent variables upon human subjects" (p. 162). The plurality of causation in any social context may be seen as insuperable (limited by the relative indeterminacy of human nature, culture and choice), or may require multivariate non-linear analysis and repeated studies to isolate variables within necessarily very large samples. Even if this were possible, ascribing causation is rarely possible and outcomes cannot always be reduced to valid criterion measures. Not only are human beings seen to be particularly unpredictable and difficult to observe, but their critical behaviour and responses often seem to defy definition and measurement. Despite this, many groups (Ross and Smith, 1971) have built substantial knowledge through experiments and still continue to work towards what they see as the ultimate stage, where they have sufficient understanding of the many variables and influences within a situation to test the effect of modifying or adding some other factor and observing the outcome.
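The capacity to generate confidence intervals mentioned above can be sketched briefly. For illustration only (the outcome data are invented, and a simple normal approximation is assumed rather than any method from the paper):

```python
# For illustration only: the outcome data are invented and the interval
# uses a simple normal approximation (z = 1.96), not the paper's method.
from math import sqrt
from statistics import mean, stdev

experimental = [58, 60, 57, 59, 61, 56, 58, 60]
control      = [51, 50, 52, 49, 51, 50, 52, 51]

# Estimated effect: difference in mean outcome between the groups
diff = mean(experimental) - mean(control)

# Standard error of the difference between two independent means
se = sqrt(stdev(experimental) ** 2 / len(experimental)
          + stdev(control) ** 2 / len(control))

low, high = diff - 1.96 * se, diff + 1.96 * se
print(round(diff, 2), round(low, 2), round(high, 2))
```

An interval lying wholly above zero would support the claim that the intervention, rather than chance variation, produced the difference, subject to the validity of the criterion measure itself.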
In health-related fields, significant advances in understanding provided through experimental evidence include the role of psychological and physical rehabilitative interventions in aiding adaptation and recovery after major illness (Wilson-Barnett, 1988), and the effects of psychiatric nurses' care compared with "usual treatment" for patients in the community (Paykel and Griffith, 1983; Marks, 1985). Reviews of interventions for various physical and psychological problems also demonstrate the extent to which practice can be guided by experimental findings (see Wilson-Barnett and Batehup, 1988).


Objectivity in research

Most fundamental criticisms of experiments involving human beings are based on a rejection of the assumptions and requirements involved in this approach. Axiomatic to this whole debate is the understanding of human activity and endeavour. Comte asserted proudly that knowledge should be built on detached observations within a system of neutral values, independent of the individual experimenter, whilst Harré (1981) claims that it is nonsensical to suggest that any human activity, research or otherwise, can be conducted in this neutral, completely depersonalised way. Experience, preference and perception determine the individual choices and observations of researchers and participants. Whereas experimentalists seek to achieve consistency in observations, reliability in measures across subjects and reproducible findings, other researchers would conclude that this imposes a particular view of the world: it constrains interpretation by restricting investigation to those aspects which can be measured or reduced within the experimentalist's so-called realm of 'objectivity', but which may not be important or meaningful.

Objectivity or value-free research is a human impossibility according to the new paradigm or interpretive schools (Carr and Kemmis, 1989). Choice of type, method and measurement, as well as interpretation of results, is dependent on the individuality of the researcher. Qualitative researchers are interested in the experience of participants, whereas experimentalists may not gain this information. It is thus deceptive to claim superiority in the search for knowledge by claiming inhuman powers. It also inhibits the search for understanding to reject all that is particular and subjective in the world of human science. Other relevant information may be ignored by experimentalists if it cannot be accepted or used in the process of testing the hypothesis.
Exclusion of other data may bias the picture of social reality, just as reflection of other theories and of the culture within the research may influence pictures or constructions in non-experimental work. All good researchers in the human sciences aim to be impartial in their representation of reality. It is perhaps the belief that experimentalists remain detached and objective in their work which has led to the possibility of consistent bias. Once values and priorities are explicitly addressed, they can be explored and compared. Their approach may not change, but others can openly evaluate the effects on research studies and results. However, it is necessary to realise that partiality or bias is a product both of an individual's preferences and point of view and of their cultural and scientific framework. Experimentalists in psychology and nursing would probably accept that much of the human world cannot be reduced to experimental investigation. They knowingly choose to do studies when they or others have established some understanding in the area.

Objectivity in measurement

Measures utilised as dependent variables in experiments obviously need to be valid and representative of the concept being assessed. Ideally the meaning of such measures should be standardized or universal, so that individual interpretation or subjectivity is reduced. Critiques of this goal suggest that it is reductionist and dehumanising. Many of the most important dimensions in life defy measurement and quantification. Such concepts as 'wellbeing' or 'health' are complex, multi-dimensional and by nature individually defined and experienced. Challenges of measurement exist which cannot be refused; for instance, 'quality of life' may be seen as ephemeral and complex, yet various approaches to measurement strive to encompass dimensions that highlight individual differences, while attempting to assess this central factor which can be affected by so many changes.


At present, it seems there can be advantage in adopting quantifying and experimental approaches to research. Policy makers and research funders may encourage this type of work in order to monitor change from established baselines. Evidence is tangible, and measurement of dependent variables on several occasions can demonstrate 'naturally' occurring influences as well as those which are more intentionally introduced and evaluated. This does, however, rest on being able to construct measures which are meaningful and do not impose new interpretations of concepts upon respondents. Sharing interpretation of words or a situation is seen to be essential by those who reject the idea of 'objective' measurement. Exchange of views and perceptions of an event can lead to insights which might be missed if respondents are only asked closed questions (Webb, 1986). Adopting a more open 'feminist methodology' may help to militate against research which does not reflect key influences in a situation. However, it need not be incompatible with experimental evaluation, nor need it be used exclusively. Data which are quantifiable can also be gathered using an interactive research approach.

Detaching the research process from responses

Experimentalists claim that research should not alter natural events but should demonstrate the causal relationships within such events. However, it is debatable whether observations can be made without affecting subjects' behaviour. The Hawthorne effect (see Easthope, 1974) shows that control or comparison groups change their behaviour in response to the process of data collection. However, once the Hawthorne effect was recognised, improved experimental designs enabled researchers to account for changes evoked by the process of their observation or measurement. Placebo or comparison interventions are difficult to design and carry out in order to assess the effects of the experiment itself rather than of the specific intervention agent.
However, this has been done, and experimentalists need to consider whether such procedures are superior to comparisons made with other interventions. For instance, would it be sufficient to estimate the magnitude of an effect or influence by comparing two similar interventions? Is it essential to compare an active intervention with one which is considered inert, or which does not aim to change the present situation, as the classicist approach would maintain? Indeed, Schwartz and Lellouch (1967) use such examples to illustrate pragmatic approaches to clinical experiments. These experiments have demonstrated that comparative interventions, such as psychological versus physical treatments, have different effects, which can provide guidelines for practice, without necessarily requiring a demonstration of what would occur without intervention (Mayou et al., 1981).

Specificity or partiality in research

While the new paradigm researchers aim to explore a holistic picture, accounting for a complex mix of events and human reactions, the experimentalist looks to select one aspect of that context and explore the relationship of two or at most three variables in depth. The former approach cannot have the power of generalizability claimed by experimentalists, since each context may change and alter in response to new events. Indeed, the special characteristics of each system preclude suggestions that the same events will result in similar effects. New paradigm researchers look to others for evidence of similarity in events and processes of change.


Although experimentalists are criticised for taking a partial perspective, not accounting for all relevant variables, they would claim that other factors must be acknowledged and that the design aims to minimize the effect of such differences on outcome. Although experimentalists try to hold confounding variables constant, there is a real danger that the interaction between these variables and the dependent variable differs between the centre in which the experiment took place and any other centre to which results may be applied. Generalizability, while desirable, is always somewhat limited. They would therefore say that every researcher must constantly be aware that another study may reveal other influences, but that it is possible through repeated studies to achieve some confidence from consistency in findings. Although they study a small part of the universe and describe the context, they use an approach that can be repeated by others and has the potential for generalizability.

All experimenters begin with the understanding that "the probability that one dependent variable has multiple causes is greater than the probability that it is caused by a single independent variable" (Blalock, 1971, p. 390). Both groups accept the complexity of the situations in which humans exist, but experimentalists attempt to examine relationships one by one (or two by two), systematically describing a series of relationships within a pluralistic system. Although new paradigm researchers may study the same phenomena, seeking to examine their relationships, they refuse to accept that isolation of such variables is reasonable or possible. In reality, there may not be such a contrast in these views as there seems to be in the underlying rationale for the approaches. Just as the new paradigm attempts to look at a complex of factors, so do modern scientists using experimental design. For instance, when discussing causality, Sellitz et al. (1965) acknowledge that experimentalists emphasize determining conditions rather than a single event; that is, they try to discover the necessary or sufficient conditions for an event. In other words, experiments are designed to explore which contributory, contingent or alternative conditions operate to make an event probable but not certain. This type of explanation demonstrates much less of a dichotomy between the approaches. Yet without the experiment, researchers may end up with a set of propositions that are merely plausibly interrelated.

Concepts of control and manipulation

It is not surprising that critiques of the classic experimental method reject the concepts of controlling and, more certainly, of manipulating human beings. Most nurses would not be proud of using such strategies in the process of their work. Control, when applied to experimental research, implies leaving everything unchanged except the independent variable and its effect. It also implies that the researcher determines the approach, the measurement strategy and the expected change. Respondents or subjects receive these conditions without influencing the researchers' intentions or deciding which condition they receive. 'Manipulation' of subjects into groups with or without exposure to experimental changes is also required within this framework. Given the current philosophy of encouraging participation in care, fully informed consent and freedom of choice (Brearley, 1990), 'controlled experiments' may seem unacceptable and unethical.

Experimentalists in nursing have, however, recognised the problems of conducting research in this way and attempted to modify their strategy. Although this process may have infringed the rights of research subjects in the past, there really is no excuse for uninformed individuals being controlled or manipulated against their will. Full explanation of the experimental procedure rarely negates or interferes with


the purpose of the experiment. If subjects fully understand that they may be allocated to different groups, each condition being discussed, they then need to agree to be assigned to one of these groups. Most statisticians accept randomization as the best strategy to prevent unequal distribution of certain variables between groups. Thus individual choice may not be possible, and researchers and subjects need to accept this, otherwise the internal and external validity of the experiment may be threatened.

However, in many studies some degree of choice may be considered. For instance, Ridgeway and Mathews (1983) conducted an experiment evaluating the effect of different types of explanatory pre-operative booklet on recovery from hysterectomy. Subjects were approached and asked if they would like one of these booklets. As a result, 10 refused the booklet but agreed to become subjects for data collection. Likewise, Corner's (1990) experiment only included nurses who wished for further education on nursing cancer patients. The small group who were working in unrelated clinical specialities chose not to become part of the two experimental groups but were happy to be followed up as comparison subjects.

Use of placebo interview sessions as control interventions has been recognized as unacceptable (Webb, 1986). Studies which have compared information giving with a discussion of general issues were found to be difficult to conduct and somewhat uncomfortable for researchers. Either control groups should be provided with another potentially useful intervention (as in Corner's research), or they can be approached for data collection only and provided with full explanations, just as would occur in other descriptive studies. However, the data collection process involving the control group needs to be acceptable and preferably useful and interesting to these subjects. Repeated measures may present difficulties, being boring and meaningless to subjects.
Yet, without these data, comparison over time with experimental subjects would be lost.

Complete control of the environment has always defeated humankind, so there is no way that natural field experimenters can hope to achieve this. Attempts to prevent others from introducing major changes during data collection have also proved impossible. Intentional or forced changes in practice or treatment modes occur despite the best-laid plans and explanations. Attempting to reduce disclosure between experimental and control subjects is a further source of difficulty which must be minimized to maintain a reliable experiment. However, the terms 'control' and 'manipulation' emphasize the material and philosophical roots of experimental science and serve as reminders that human subjects may be vulnerable. Debriefing of subjects is also often not possible, and they may remain uninformed of research results, as nurses and other researchers rarely send reports to subjects after their study.

Interpretation of experimental data

Experimentalists have been labelled as mechanistic, unimaginative and narrow in their perspective (Greenwood, 1984). This obviously implies that the preconceived structure of this approach prevents flexibility and a holistic appreciation of the factors which affect human beings. It also reflects the view that different interpretations and perspectives in the research area are not possible. By definition, the experiment must be guided by objectives of estimating differences, through hypotheses and ideas already formulated into operational plans, with clearly documented expectations. In contrast, the new paradigm researchers would use open plans and participative roles, and rely on many interpretations of what is observed. Experimental scientists have been seen to value facts only when quantified and calculated using formulae to establish the magnitude of change. Clearly there is not seen to be much room for imagination when collecting or analysing data within this model of


research. Peirce (1955) asserts this, saying that one of the earliest criticisms of the deductive mode is that it offers no new knowledge; it only works out the consequences of what one already accepts. This is fiercely contested by Huxley (1958): "It is a favourite popular delusion that the scientific enquirer is under a sort of moral obligation to abstain from going beyond that generalisation of observed facts which is absurdly called Baconian induction! . . . anyone who refuses to go beyond fact, rarely gets as far as fact". Clearly, Huxley felt that imaginative interpretation was needed when designing and evaluating experiments, saying "every great stride has been made by the anticipation of nature" (p. 372). Likewise, Kuhn (1970) and Popper (1962) believed that knowledge can only be created when the researcher goes beyond the data and performs a conceptual leap of imagination, considering analogies, metaphors, models and myths to explain results. Current criticisms are therefore more appropriately directed at researchers than at the experimental research approach.

Imagination is not seen to be so necessary when collecting data for experimental studies as when working with the more flexible phenomenological or new paradigm approaches, which attempt to capture the product of participants' imagination and interpretations. Whereas the latter may interpret data as they present, making this part of the intrinsic process, possibly combining it with insights from existing theory, the experimenter relies on previous research evidence and theory only when drawing up a protocol and when exploring the results. Failure to be open-minded, to use lateral thinking and to exploit unexpected results would be the mark of an inept researcher in whichever camp he was placed. Interpretation of experimental data need not rely on explorations of statistical relationships alone.
When studying human phenomena, it is possible to gain further insights into what is perceived to occur by gaining direct explanations from respondents. By augmenting experimental data with interviews or observations, further participation is possible, which adds to the meaning and interpretation of the experimental and other influences. If triangulation of methods and data occurs (as in Corner's 1990 study), this provides more sources, may serve as a check on validity and can be used to enrich the research reporting. Without this simultaneous and complementary data collection, the subjects' own views are lost and the study rests on the researcher's ability and imagination, testing hypotheses but not necessarily providing a description of subjects' interpretations or of the social context for the research process.

Choosing the experiment

Different research questions and contexts require different approaches or research designs (Macleod Clark and Hockey, 1989). The more complex or varied the pattern of interaction, the less likely it is that an experiment can be designed. Indeed, the action research and new paradigm approaches were devised to cope with this dynamic environment in the face of change. More qualitative approaches are appropriate when asking broad questions seeking subjects' interpretations of a theme and their opinions. When seeking to change what occurs through group action and participation, monitoring, and sometimes self-monitoring, is more appropriate and more likely to succeed than an imposed plan of change and assessment. Data from such approaches are clearly less structured, and analysis aims to represent the total picture as interpreted by the players and the researcher. Less varied, more specific and shorter-term changes may be more amenable to experimentation, especially when cultural factors are less important or varied and easier to identify and generalize. For some enquiries it may be possible to combine approaches and maximise their


advantages. From the previous discussion, it would seem that many of the criticisms of the classic experiment can be countered by involving subjects more, by giving full explanations, and through more consideration and generosity to subjects generally, but particularly those in the control group. Thus questions of humanity must be recognised, although meeting them may affect validity. Experimental researchers need to make every effort to meet the conditions necessary to maintain internal validity within this approach. Randomization of subject allocation to groups is one of the more powerful mechanisms known to ensure equivalence of endogenous and extraneous variables. Researchers must therefore decide whether explicit choice of group assignment for subjects is so preferable as to justify jeopardizing this element of the research design. At least one might expect subjects to understand and agree to randomization within the experiment.

The level of knowledge of the research topic or subject (Sellitz et al., 1965) should also affect design. Previous work in the area might have identified certain relevant influences and provided a picture or theory. Although this may be substantially conjectural, hypotheses may be formulated and further research questions may be quite obvious. Judgements over whether variables are sufficiently described and understood will determine whether an experiment is worthwhile. It is this final level of research which can provide new evidence on causal interaction. Where problems are not amenable to an experiment, other approaches are being refined. The new paradigm approaches of action research are increasingly respected for studying organic social changes and addressing managerial questions (Susman and Evered, 1982).

It is unfortunate that some researchers appear to prefer experimentation above all other approaches, and some fields of enquiry seem to be peppered by studies which appear unnecessarily repetitive, while related issues are relatively unexplored.
For instance, nursing work evaluating patient teaching, which benefited in the past from careful experiments, has been succeeded by numerous trials confirming the same hypothesis: increased information for patients increases subjects' scores on tests of their knowledge (or recall of that information) (see Wilson-Barnett, 1988). Descriptive and exploratory work is both a necessary forerunner and a complementary method which supports experimental approaches.

In the final analysis, evidence from experimental studies must be reviewed to assess whether or how it has added to nursing knowledge. Whereas insights into many nursing issues have arisen through descriptive work, experimentation has enabled comparisons between approaches to treatment (Treece and Treece, 1977, p. 202) and in many cases provided evidence which forms the basis for guidelines to practice (Wilson-Barnett and Batehup, 1988). For the study of alternative practical treatments to care for people suffering from a particular problem, it seems that Treece and Treece (1977) are correct in their assertion that the field experiment is still a viable option for testing new techniques and procedures for patient care (p. 202). Replication in several areas of nursing care has provided confidence in making such recommendations, although many of the original suggestions and ideas were created through other types of research.

To many practitioners the experiment may have a special appeal. While so many descriptive studies, and those assessing the effects of change, provide useful insight, reports often highlight poor practice as part of the justification for changes and emphasize the complex yet particular context of the research setting. The experiment may seem more positive in providing clearer and valid indicators for the future, claiming more generalizability than other designs and offering causal explanations. The worth of experiments may
therefore be seen to come from both the nature of evidence and the apparent clarity with which they can be reported and applied in practice.

Conclusions

Suggestions for improving the experimental approach with human subjects, so as to meet the criticisms discussed, might include the following:

- Attention to how faithfully the situation being tested represents or reflects reality. Researchers should share their ideas to ensure they are not irrelevant or unimportant to participants.
- Informed choice can be given to subjects; this need not threaten validity, and may enhance understanding by tailoring interventions to suit subjects' preferences.
- Control groups can be seen as participants in a data collection exercise, if this is valuable in itself, or be provided with a meaningful intervention.
- Qualitative data on the meaning of the situation to subjects, and on their experiences, can and should augment an experimental study.
- Subjects and those who help in such studies (in fact, all studies) should receive debriefing in a relevant and planned way.
- Presentation of experimental research can be made more relevant to practice when clinical significance is described.
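The final suggestion, reporting clinical significance alongside statistical results, can be illustrated with a simple effect-size calculation. The sketch below is an added illustration, not drawn from the paper; the test scores are invented. A standardized effect size such as Cohen's d helps a reader judge whether a statistically significant difference between groups is large enough to matter in practice.

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized difference between two group means (Cohen's d),
    computed with the pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Invented knowledge-test scores: a one-point mean difference may reach
# statistical significance in a large trial, yet the standardized effect
# (here 0.1 of a standard deviation) is small in clinical terms.
d = cohens_d(mean_a=72.0, sd_a=10.0, n_a=400,
             mean_b=71.0, sd_b=10.0, n_b=400)
print(round(d, 2))
```

Reporting the effect size alongside the p-value lets practitioners weigh whether the observed benefit justifies changing care.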

From this consideration of the experiment, undertaken with an eclectic and unbiased (although, of course, not objective) outlook, it appears that sensitive use of the experimental approach need not contradict the values and purposes of nursing and nursing research. This approach can produce useful and powerful knowledge while respecting the rights of human subjects, who can also share their perspective with the researcher in a way which enriches the data and provides some benefit to those who make such research possible.

References

Blalock, H. M. (1971). Theory building and causal inferences. In Methodology in Social Research. H. M. Blalock and A. B. Blalock (Eds), pp. 155-196. McGraw-Hill, London.
Brearley, S. (1990). Participation in Care: A Review of the Literature. Royal College of Nursing London Research Series. Scutari Press, London.
Carr, W. and Kemmis, S. (1989). Becoming Critical: Education, Knowledge and Action Research, pp. 83-101. The Falmer Press, London.
Corner, J. (1990). The newly registered nurses and the cancer patient. Unpublished Ph.D. thesis in Nursing Studies. King's College, London.
Easthope, G. (1974). Social Research: History of Social Research Methods. Longman, London.
Frieman, J. A., Chalmers, T. C., Smith, H. and Kuebler, R. R. (1978). The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial: survey of 71 "negative" trials. New Engl. J. Med. 299, 690-694.
Greenwood, J. (1984). Nursing research: a position paper. J. Adv. Nurs. 9, 77-82.
Harré, R. (1981). The positivist-empiricist approach and its alternative. In Human Inquiry: A Sourcebook of New Paradigm Research. P. Reason and J. Rowan (Eds), pp. 3-18. John Wiley, Chichester.
Huxley, A. (1958). "Science in the reign of Queen Victoria". As cited in J. Taylor (Ed.), Selected Writings of Hughlings Jackson, Vol. 2, p. 372. Staples Press, London.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions, 2nd Edn. University of Chicago Press, Chicago.
Macleod Clark, J. and Hockey, L. (1989). Further Research for Nursing. Scutari Press, London.
Mayou, R., Macmahon, D., Sleight, P. and Florencio, M. T. (1981). Early rehabilitation after myocardial infarction. Lancet ii, 1399-1401.
Marks, I. (1985). Psychiatric Nurse Therapists in Primary Care. RCN Research Series. Royal College of Nursing, London.
Paykel, E. S. and Griffith, J. H. (1983). Community Psychiatric Nursing for Neurotic Patients. RCN Research Series. Royal College of Nursing, London.
Peirce, C. S. (1955). Philosophical Writings of Peirce. J. Buchler (Ed.). Dover, New York.
Popper, K. (1962). Conjectures and Refutations. Harper, New York.


Ridgeway, V. and Mathews, A. (1983). Psychological preparation for surgery: a comparison of methods. Br. J. Clin. Psychol. 21, 271-280.
Riley, M. W. (1967). Sociological Research: A Case Approach. Harcourt Brace and Jovanovich, New York.
Ross, J. and Smith, P. (1971). Orthodox experimental designs. In Methodology in Social Research. H. M. Blalock and A. B. Blalock (Eds). McGraw-Hill, London.
Schwartz, D. and Lellouch, J. (1967). Explanatory and pragmatic attitudes in therapeutic trials. J. Chronic Diseases 20, 637-648.

Seaman, C. H. and Verhonick, P. J. (1982). Research Methods for Undergraduate Students in Nursing. Appleton Century Crofts, New York.
Sellitz, C., Jahoda, M., Deutsch, M. and Cook, W. S. (1965). Research Methods in Social Relations. Methuen, New York.
Stuart Mill, J. (1873). A System of Logic. Harper & Bros, New York.
Susman, G. I. and Evered, R. D. (1978). An assessment of the scientific merits of action research. Admin. Sci. Q. 23, 582-603.

Treece, E. W. and Treece, J. W. (1977). Elements of Research in Nursing. C. V. Mosby Co., London.
Waltz, C. and Barker Bausell, R. (1981). Nursing Research: Design, Statistics and Computer Analysis. F. A. Davis Co., Philadelphia.
Webb, C. (1986). Professional and lay social support for hysterectomy patients. J. Adv. Nurs. 11, 167-177.
Wilson-Barnett, J. (1978). Patients' emotional responses to barium X-rays. J. Adv. Nurs. 3, 37-46.
Wilson-Barnett, J. (1988). Patient teaching or patient counselling? J. Adv. Nurs. 13, 215-222.
Wilson-Barnett, J. and Batehup, L. (1988). Patient Problems: A Research Base for Nursing Care. Scutari Press, London.
