Department of Neurosurgery, University Hospital, Sigmund-Freud-Str. 25, D-5300 Bonn, FRG

ABSTRACT. In this paper, the problem of correct ascriptions of consciousness to patients in neurological intensive care medicine is explored as a special case of the general philosophical 'other minds problem'. It is argued that although clinical ascriptions of consciousness and coma are mostly based on behavioral evidence, a behaviorist epistemology of other minds is not likely to succeed. To illustrate this, the so-called 'total locked-in syndrome', in which preserved consciousness is combined with a total loss of motor abilities due to a lower ventral brain stem lesion, is presented as a touchstone for behaviorism. It is argued that this example of consciousness without behavioral expression does not disprove behaviorism specifically, but rather illustrates the need for a non-verificationist theory of other minds. It is further argued that a folk version of such a theory already underlies our factual ascriptions of consciousness in clinical contexts. Finally, a non-behaviorist theory of other minds for patients with total locked-in syndrome is outlined.

Key words: behaviorism, coma, consciousness, locked-in syndrome, other minds problem

Theoretical Medicine 12: 69-79, 1991. © 1991 Kluwer Academic Publishers. Printed in the Netherlands.

1. INTRODUCTION

How can one know that other people have minds? How can one know that they are conscious beings like oneself? These questions paraphrase the notorious philosophical other minds problem (henceforth OMP), which may also arise when we are confronted with the fact that no matter how obvious the occurrence of consciousness in other people may be, a verification of statements concerning such consciousness seems impossible (see section 4). Now this primarily philosophical problem also deserves the neurologists' (or the physicians' in general) interest, especially when they try to establish an empirical - and, if possible, scientific - method of ascribing consciousness to patients with lesions of the central nervous system. In this paper, we try to explore the epistemological status of ascriptions of consciousness in clinical practice, chiefly in neurological intensive care medicine. In section 2, we present some clinical concepts of consciousness and coma that seem to be rather behavioristically minded at first sight. In the third section, we introduce the most striking example of consciousness without behavioral expression: the so-called total locked-in syndrome (TLIS), in which preserved consciousness is combined with a total loss of motor abilities. We then argue that the TLIS at least discloses the practical limits of a behavioristic account of consciousness. In section 4, the famous Wittgensteinian account of the OMP is reviewed, and it is argued that no behavioristic solution of the OMP can be derived from the mere fact that other-ascriptions of consciousness are performed on the basis of 'outward criteria'. In the last section (5), we maintain that clinical ascriptions of consciousness are based on a folk-psychological rather than a behavioristic account of consciousness, and that the TLIS demonstrates the limits of such a folk theory of consciousness. Finally, the prospect of a neuroscientific theory of other minds will be considered, and it will be conjectured that such a theory would not be conclusive unless it could provide a complete 'neuroscientific image of man'.

2. CLINICAL CONCEPTS OF CONSCIOUSNESS AND COMA: IMPLICIT BEHAVIORISM?

Most attempts at defining 'consciousness' for neurological contexts can be classified as either mentalistic or behavioristic. But the mentalistic definitions play no major role in clinical practice: rather, they give metaphorical and mostly circular explanations. 1 Contrary to this, the behaviorist approach seeks to judge a patient's level of consciousness on the basis of purely behavioral evidence. 'Consciousness' could then be defined as the ability to respond adequately to certain applied stimuli or to emit certain patterns of behavior without actual stimuli. If 'coma' is just a 'pathological loss of consciousness', we also obtain definite and easily checkable features of coma, summed up as an 'absence of conscious behavior' (where 'conscious' is just an abbreviation for a list of behavioral patterns). Our current clinical scales, which should make it possible to discriminate not only 'intact consciousness' and 'coma' but also some intermediate stages of impaired consciousness, typically take responses to verbal and painful stimuli to be signs of consciousness or coma. 2 Now such a determination of a patient's level of consciousness is not to be confused with the additional evaluation of his/her neurological status. Our clinical ascriptions of consciousness are not dependent on neurological criteria or on further diagnostic data (from computerized tomography, etc.). It is just that the fact that we have learned a lot about neurological correlates of impaired consciousness may lead to the mistaken view that we also assess levels of consciousness primarily on the basis of such neurological (or radiological) evidence. This view is mistaken in that it confuses the causal explanation of a state of consciousness with the ascertainment of that state. Although information from both behavioral and neurological data is to be considered for the evaluation of a patient's level of consciousness, we can say that while the former has a primarily diagnostic function with regard to the state of consciousness itself, the latter has a primarily explanatory function in this regard but a diagnostic function with respect to the underlying pathological mechanisms. Thus one might conclude that there is an implicit behaviorism in our currently applied concepts of intact and impaired (and even lost) consciousness: the idea that the level of consciousness is first of all to be determined on the basis of the patient's spontaneous or elicited behavior. The obvious anti-behaviorist reaction to this claim would be to maintain that this sort of behaviorism must fail if there are cases of consciousness without behavioral expression (see [8]). And though the behaviorist may doubt whether there really are such cases (see [9] for an argument in this spirit), this doubt seems to be disproved not least by the occurrence of the TLIS, to which we now turn.

3. THE TOTAL LOCKED-IN SYNDROME AS A CHALLENGE FOR BEHAVIORISM

The total locked-in syndrome [10, 11] is characterized by a total loss of voluntary motor functions, combined with preserved consciousness. The main cause of this syndrome (and of the related 'classical' locked-in syndrome, in which voluntary vertical eye movements can be performed; see [1]) is an extensive ventral infarction of the pons and/or midbrain due to a basilar artery occlusion [12]. Such an infarct may involve all the relevant motor pathways and thereby leave the patient without any possibility of voluntary movement. Thus the patient is completely unresponsive to sensory stimuli and shows no spontaneous movements either. Since it is then impossible to demonstrate preserved consciousness on the basis of overt-behavioral criteria, we must turn to electrophysiological criteria (e.g. a preserved reactive EEG alpha rhythm [13]) and consider further anatomical data (e.g. from magnetic resonance imaging, for brainstem lesions usually produce coma only if the tegmentum is affected bilaterally [10]). Thus we are inclined to say that the proper treatment of TLIS-consciousness requires a sort of neurophysiologism (rather than behaviorism) concerning other minds. But unfortunately, these new neurophysiological criteria of consciousness are - at least at the moment - neither definite nor reliable, as several studies have shown [10, 14, 15]. So, if neither behavioral nor clinical or neurophysiological signs of consciousness are valid, there is currently no way at all to determine a TLIS patient's state of consciousness. Looking into the future, we might then ask whether (hypothetical) perfectly reliable neurophysiological evidence could qualify as a criterion of
consciousness. But before this question is discussed, we should mention a possible neo-behaviorist reply to the argument that behaviorism fails because in a TLIS case there is no behavioral evidence to account for. A liberal behaviorist might simply define the electrophysiological events that are looked upon as signs of consciousness as elements of the patient's behavior. This may seem strange, but it is not absurd, since the boundary between behavioral and physiological events is in any case to be drawn in accordance with the given stage of scientific knowledge (see [16]). And since electrophysiological correlates of consciousness are at least in principle observable events, they could count as elements of behavior even if we accepted the postulate that all behavior must be observable. But this move provokes another question: if all brain processes could become a sort of 'behavior', what is left as the point of behaviorism (as contrasted to, e.g., neurophysiologism)? Perhaps the former rivalries between the doctrines would just boil down to differences in scientific goals and points of interest: while behaviorists would still mainly be concerned with the prediction and control of behavior, neurophysiologists (and others) rather seek to explain its internal microgenesis [16]. Anyway, for our present purpose it is sufficient to see that we should hesitate to insist on the impossibility of any behaviorist account of TLIS-consciousness. Behaviorism and neurophysiologism concerning other minds are not as fundamentally different as they seem to be at first sight. And the fact that we have some reason to hope that we can find more reliable neurophysiological signs of consciousness (but not overt-behavioral ones) does not imply that neurophysiologism surpasses behaviorism in an epistemological sense.
The real epistemological crux in ascribing consciousness to TLIS patients is not linked to differences between behavioral and electrophysiological signs of consciousness, but rather to the relations between extrinsic (behavioral and physiological) and intrinsic (phenomenal and 'subjective') aspects of consciousness. This is illustrated by the fact that even if we were provided with perfectly reliable electrophysiological signs of consciousness, we would still feel uncomfortable if our determination of a TLIS patient's level of consciousness were based on nothing but this evidence. For in a more commonsensical frame of mind we might suspect that since only the TLIS patient himself really knows that he is conscious, and since he cannot tell us about it, we can never be really certain about his state of consciousness. But, on the other hand, if electrophysiological signs are 'perfectly reliable', does that not mean that they are not to be dismissed in favor of the patient's veridical reports? Indeed, if the commonsensical argument that only the patient himself can be absolutely certain about his state of consciousness were sound, verbal reports (in normal subjects) and EEG arousals (in TLIS patients) would have the same epistemological status: they are both
mere indicators of consciousness. But how then can we ever be sure about the occurrence of consciousness in other people? This is exactly the philosophical OMP, a solution (or dissolution) of which thus seems to be required before a theory of TLIS-consciousness can be worked out.

4. KNOWLEDGE OF OTHER MINDS

If we take the OMP as the question of how to verify statements concerning other minds, we soon reach an impasse. Feigl [17] argued that such statements cannot be verified, since we could only try to infer mental states in others as 'concomitant' with certain patterns of their behavior if we start from our acquaintance with such a concomitance in our own case. But we simply cannot convince ourselves of the correctness of this inference, because we cannot ascertain the occurrence of mental states in others in the way we can ascertain the occurrence of some patterns of behavior. Thus, the conclusion of this analogical argument concerning other minds cannot have the desired 'degree of certainty' because there is in principle only indirect (behavioral) evidence available. It seems that only in 'first-person experience' can there be direct evidence for a correspondence between mental and behavioral states. Thus, if verificationism is correct, statements concerning other minds are factually meaningless. If we refuse to accept this implausible result, we must abandon verificationism and argue that true beliefs concerning other people's mental states can be subject to a different procedure of epistemic justification. This is largely Wittgenstein's strategy in §§ 243-351 of his Philosophical Investigations [18]. Since space is limited, we cannot enter into the details of his argument here. Very briefly: Wittgenstein would endorse Feigl's claim that the analogical argument is insufficient (though for different reasons; see [18], §§ 302, 350). But he holds that the need for this argument can be traced back to the mistaken idea of a private language, the terms of which designate the 'private' sensations of its user ([18], § 243).
This is because the notion of first-person authority with regard to mental states - the notion that calls for a 'starting from our own case' and, by this, for an 'analogical argument' in the first place - derives from the conception of a private language. Consequently, Wittgenstein tries to do without these background assumptions by arguing that all there is to the meanings of our mentalistic terms (and, hence, the meanings of our statements concerning other minds) is what is expressed in the factual usage of these terms in our ordinary communication ([18], § 43), which is essentially 'something public', both in first-person and in third-person statements. And if we just look at our factual usage of other-ascriptions of mental states, we see that 'knowledge of other minds' can be understood as a
high degree of social familiarity on the basis of a general attitude towards human beings: If I can be sure that he feels a pain, this is not because I can 'verify a hypothesis' about correlations between the behavior and the sensations of a certain organism, but simply because I appreciate him as a human being and apprehend his pain behavior as embedded in an adequate 'pain situation'. Knowledge of other minds is part of a complex social practice and not an epistemic 'relation' to some 'entities' (mental states) 'in' somebody (the other person or his/her mind). By emphasizing the basically public origin of our mentalistic discourse, Wittgenstein seems to leave us with solely behavioral criteria for the determination of other people's mental states (to talk meaningfully about other people's pain sensations, outward criteria are required: pain behavior, an adequate pain situation etc.; see [18] § 580). But this should not lead to the mistaken view that Wittgenstein's argument somehow 'leads back to behaviorism concerning other minds'. Wittgenstein just maintains that our attitude simply is an 'attitude towards other minds'; this is how we act, and statements concerning other minds are meaningful in virtue of their functional role in this social practice. His demand for 'outward criteria' is in no way behavioristic, for while the behaviorist still tries to determine truth conditions for the occurrence of mental states in others, Wittgenstein has completely abandoned the verificationist paradigm and analyzes our other-ascriptions of mental states solely in terms of their social role (see [19]). And our ordinary language games are, according to Wittgenstein, "ur-phenomena" ([18], § 654) that cannot be explained but merely described ([18], § 109). They need no further epistemic justification, for they are elements of our given (and unquestionable) "Lebensform" (mode of life). 
With Wittgenstein, we would then assume that we unproblematically know about other people's mental states by virtue of our attitude towards them (instead of showing this attitude by virtue of that knowledge). Would this be a useful conception for a theory of TLIS-consciousness?


5.1. Clinical Concepts of Consciousness: Folk Psychology, Not Behaviorism

Wittgenstein has convincingly shown that although 'knowledge' of other minds somehow requires (among other things) behavioral evidence, there is no reason to revive the notion of a 'behaviorist epistemology of other minds'. He has also argued - though controversially - that the OMP cannot be solved but rather
dissolved by interpreting 'knowledge' (in this context) as a sort of attitude-based social familiarity, which cannot be subject to further scientific scrutiny or philosophical analysis. In tracing back the meaning of 'knowledge' (and, of course, also of 'mind' etc.) to its ordinary usage, he puts the OMP back into that commonsensical framework of our everyday explanations and predictions of behavior that has come to be called 'folk psychology' in recent years. This may draw our attention to the fact that in our ordinary social communication we already have an unproblematic, though proto-theoretical and non-scientific, concept of 'knowledge of other minds', which is at least practicable (this we can accept even without consenting to Wittgenstein's claim that the OMP must be dissolved by ordinary language analysis). As for ascriptions of consciousness in patients with brain lesions, we must say that since neurophysiologism concerning other minds is epistemologically on a par with behaviorism (see section 3), both these conceptions are unacceptable as 'solutions' of the OMP. Thus, if our widely accepted 'scales' for the determination of levels of consciousness (except in the TLIS; see below) establish a well-functioning clinical practice - and, by this, an unproblematic knowledge of the patients' minds - then this practice must be interpreted as a part of the folk-psychological framework. When we evaluate a head-injured patient's level of consciousness, we apply the same outward (that is: behavioral) criteria as in our everyday communication. Here, our 'clinical methods' are but refined and 'finer-grained' versions of folk-psychological strategies, tailored to clinical requirements. With this interpretation, we can avoid the complications of the OMP in clinical practice - but we still do not know how to account for TLIS-consciousness.

5.2. The TLIS and the Neuroscientific 'Image of Man'

When we focus on the TLIS, the insufficiency of the folk-psychological concept of other minds becomes evident. Furthermore, we recognize a disadvantage of Wittgenstein's 'Lebensform' approach. Remember Wittgenstein's demand that we stop searching for further epistemological explanations once we reach the level of our factual language games, that is, the level of the 'Lebensform'. This procedure becomes problematic as soon as essentially new situations arise - situations we have not yet dealt with in the context of our established mode of life. And one of these situations is the confrontation with a TLIS patient, whose state of consciousness cannot be judged in terms of outward criteria and hence cannot be evaluated within the folk-psychological framework. In such extreme and/or new situations we cannot take our social habits for granted, since coping with these situations requires that they be modified - and that these modifications be justified. We would not be helped if we were told to merely describe these modifications, for as in the TLIS case we
would have to provide outward criteria - or, more generally, we would have to create new language games - before we can apply and then describe them. It thus seems that in order to account for TLIS-consciousness we have to look for a completely new conceptual framework. In ordinary language, we unproblematically talk about other people's mental states (and we sometimes say that we know about them). But the functioning of these ascriptions is not due to their precision or verifiability - in fact, they are neither precise nor verifiable - but rather depends on their being embedded in that 'complete language' which is rich enough to cover our whole 'Lebensform'. Ordinary language provides a complete 'image of man' (this term is borrowed from Sellars [20], to whom some of the following thoughts trace back) in that it is sufficient for all purposes of our usual social life. And it is only within such a complete image of man that attitudes toward others as human beings (and hence as mindful or conscious beings) can emerge. This also holds for the TLIS patient, who cannot find his place in a folk-psychological framework. In order to do justice to the TLIS patient as a conscious being, we must relocate him in a completely different framework, presumably a neuroscientific one. It would not do just to 'introduce' neurophysiological criteria of consciousness in these cases, as we have tried to do so far. For even if these criteria were to become more precise and reliable than they are now, they would still be alien elements in a basically folk-psychological clinical discourse as long as a complete 'neuro-language' that might include a 'neuro-language game' concerning other minds is not available.
As Sellars [21-23] showed, the transition from one complete language (or conceptual framework) to another would not allow a simple replacement of, say, behavioral criteria of consciousness by neurophysiological ones, for the elements of the 'first stratum' [24] of an account of other minds - the instances of 'conscious behavior' - would determine the features which appear as the explananda of the succeeding (neurophysiological) stratum of explanation. What constitutes 'knowledge of other minds' according to ordinary behavioral criteria must be recategorized in a neuroscientific framework, in which a TLIS patient could then find a proper place. Consequently, to explain a TLIS patient's preserved consciousness adequately in neurophysiological terms (and hence to know about it) is not merely to give a list of separate signs or indicators of consciousness, but rather amounts to a complete description of the TLIS patient (as a human being) in a neuroscientific vocabulary. In Sellars' [20] words, this explanation would require the realization of a complete "scientific image of man". And this is something very utopian, for at present all we have available are those dispersed neurological indicators of consciousness that are at best sufficient for establishing a somehow functioning clinical procedure. In order not to fall back into the complications of the OMP, our utopian 'neuroscientific image of man' would have to provide a successor concept not only of 'consciousness' itself, but
also of components of our mode of life like, for example, the 'attitude' that was identified as the ordinary basis of knowledge of other minds. The TLIS patient turns out to be the epistemological borderline case where a folk-psychological explanation no longer applies while a (neuro)scientific explanation is not yet available. At present, we cannot find out whether or not a presumed TLIS patient is conscious, not because there are no 'necessary and sufficient conditions', but because a fundamental uncertainty about what it means to be a human being in all respects arises as soon as the familiar outward criteria of our ordinary discourse are no longer applicable. 3 Confronted with the TLIS patient, we are uncertain of his state of consciousness and suspect that 'new' neurophysiological signs of consciousness might be unreliable or even deceptive. And this is not because the behavioral criteria we applied before (and in non-TLIS cases) were better or more reliable, but rather because these 'ordinary' criteria were (and are) embedded in that complete (though surely not completely correct) conception of man which is part of the 'Lebensform' we actually live.

6. CONCLUSIONS

To summarize: we did not find that a behaviorist epistemology of other minds is refuted by the TLIS example (see section 3). But we came to see that, like any other 'scientific method' of achieving knowledge of other minds, behaviorism is inconclusive. This conclusion was a result of our acceptance of a Wittgensteinian dissolution of the OMP, which seemed to be the only way to explain our factual knowledge of other minds without getting entangled in the philosophical paradoxes that arise from the 'epistemic authority' of first-person experience and from the analogical argument. Furthermore, a classical behaviorist account that allows for overt-behavioral data only is inapplicable in a TLIS case. But our inability to 'determine the TLIS patient's consciousness' is not based on this insufficiency of classical behaviorism; rather, it is due to the incompatibility of the TLIS phenomenon with our whole folk-psychological framework of explanation. If knowledge of other minds presupposes an attitude towards human beings (as persons, as conscious beings) in general and thus a rich and homogeneous image of man, then knowledge of a TLIS patient's consciousness will only emerge within a (utopian) complete scientific image of man, which some of us expect from our dramatically evolving neuroscience. But since this image is a dream of the future, we are left with familiar uncertainty and doubt - and with the old pragmatic advice to treat potential TLIS patients as if they were conscious.



* Dedicated to Prof. Dr. Dr. Rolf Wüllenweber on the occasion of his 65th birthday.

1 Consider, for example, "consciousness is the state of awareness of the self and the environment" ([1], p. 1) or "that function of the nervous system which is concerned with the perceptual experience of information" ([2], p. 62): both authors use mentalistic concepts ('awareness', 'experience') the definitions of which would themselves require some concept of consciousness.

2 Thus the well-established "Glasgow Coma Scale" [3, 4] considers three aspects of the patient's behavior ("eye opening", "best motor response" and "verbal response") and distinguishes between various degrees of impairment, each of which is given a numerical value as a component of the final 'score'. All patients below a certain score are then classified as "comatose". Other scales [5-7] emphasize different aspects of the patient's behavior, most of them dispensing with the verbal responses (which are problematic in aphasics) and eye opening.

3 This is illustrated even more impressively by another discussion, in which the question of consciousness is sometimes confused with the question of personhood and in which the above-mentioned 'fundamental uncertainty' calls forth some disquieting results: the current controversy about brain-oriented definitions of death. Some authors reject the well-established concept of brain death as the "irreversible cessation of all functions of the entire brain" ([25], p. 2); they hold a concept of "neocortical death", arguing that since 'death' is equivalent to "permanent loss of personhood" ([26], p. 7), the irreversible cessation of cortical functions as correlates of personal identity is sufficient to declare a patient dead. This claim is usually supported by equating 'personal life' with 'conscious life' ([27], p. 83) and thus 'personhood' with 'consciousness' ([28], p. 182).
According to this line of thought, the question of consciousness literally is the question of personhood. But, of course, to say that someone has suffered personal death because he has become irreversibly comatose is to invert the correct order of ideas. For far from providing evidence for classifying an individual as a person, ascriptions of consciousness to other people are themselves made on the basis of an attitude towards those people as persons. Thus the current disagreement concerning the determination of 'personal death' is at least partly brought on by the unsettled OMP. Our difficulties in defining personal death and in determining a TLIS patient's state of consciousness are both consequences of that fundamental uncertainty about what it means to be a human being that arises in those extreme situations which cannot be mastered within our folk-psychological image of man (see also [29] and [30] for further discussion).
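The additive scoring procedure described in note 2 can be sketched in code. The following is a minimal illustrative sketch, not a clinical tool: the component ranges follow the standard Glasgow Coma Scale (eye opening 1-4, verbal response 1-5, best motor response 1-6, total 3-15), while the coma cutoff of 8 is one commonly used convention assumed here for illustration, not a value taken from this paper.

```python
# Minimal sketch of a Glasgow-Coma-Scale-style additive score (cf. note 2).
# The coma cutoff below is an illustrative assumption, not part of this paper.

def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three behavioral component scores into a total of 3-15."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("component score out of range")
    return eye + verbal + motor

def classify(total: int, coma_cutoff: int = 8) -> str:
    """Classify a total score; totals at or below the cutoff count as comatose."""
    return "comatose" if total <= coma_cutoff else "not comatose"

print(gcs_total(4, 5, 6))            # 15 (fully responsive patient)
print(classify(gcs_total(1, 1, 2)))  # comatose
```

Such a sketch makes the behaviorist point of section 2 concrete: the scale maps observed behavior to a number. It also makes the limitation of section 3 visible, since a TLIS patient would score at the very bottom of any such behavioral scale despite preserved consciousness.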

REFERENCES

1. Plum F, Posner JB. The Diagnosis of Stupor and Coma. 3rd ed. Philadelphia: FA Davis Co, 1980.
2. Jouvet M. Coma and other disorders of consciousness. In: Vinken PJ, Bruyn GW, eds. Disorders of Higher Nervous Activity. Amsterdam: North Holland Publishing Co, 1969: 59-78.
3. Teasdale GM, Jennett B. Assessment of coma and impaired consciousness: a practical scale. Lancet 1974;2:81-4.
4. Jennett B, Teasdale GM. Aspects of coma after severe head injury. Lancet 1977;1:878-81.
5. Moskopp D, Ries F, Durwen HF, Linke DB. Zur Einteilung der Bewußtseinslagen. In: Poeck K, Hacke W, Schneider R, eds. Verhandlungen der Deutschen Gesellschaft für Neurologie 4. Berlin: Springer, 1987: 531-2.
6. Subczynski JA. State of consciousness scoring system. J Neurosurg 1975;43:251.
7. Stanczak DE, White III JG, Gouview WD, et al. Assessment of level of consciousness following severe neurological insult. J Neurosurg 1984;60:955-60.
8. Foss J. Radical behaviorism is a dead end. Behavioral and Brain Sciences 1985;8:59.
9. Rachlin H. Ghostbusting. Behavioral and Brain Sciences 1985;8:73-80.
10. Bauer G, Gerstenbrand F, Rumpl E. Varieties of the locked-in syndrome. J Neurol 1979;221:77-91.
11. Meienberg O, Mumenthaler M, Karbowski K. Quadriparesis and nuclear oculomotor palsy with total bilateral ptosis mimicking coma. Arch Neurol 1979;36:708-10.
12. Nordgren RE, Markesbery WR, Fukuda K, et al. Seven cases of cerebromedullary disconnexion: the "locked-in" syndrome. Neurology 1971;21:1140-8.
13. Markand ON. Electroencephalogram in "locked-in" syndrome. Electroencephalogr Clin Neurophysiol 1976;40:529-34.
14. Chase TN, Moretti L, Prensky AL. Clinical and electroencephalographic manifestations of vascular lesions of the pons. Neurology 1968;18:357-68.
15. Carroll WM, Mastaglia FL. 'Locked-in coma' in postinfective polyneuropathy. Arch Neurol 1979;36:46-7.
16. Zuriff GE. Précis of Behaviorism: a conceptual reconstruction. Behavioral and Brain Sciences 1986;9:687-99.
17. Feigl H. Other minds and the egocentric predicament. The Journal of Philosophy 1958;55:978-87.
18. Wittgenstein L. Philosophische Untersuchungen / Philosophical Investigations. Oxford: Basil Blackwell, 1953.
19. Kripke SA. Wittgenstein on Rules and Private Language. Oxford: Basil Blackwell, 1982.
20. Sellars W. Philosophy and the scientific image of man. In: Colodny R, ed. Frontiers of Science and Philosophy. Pittsburgh: University of Pittsburgh Press, 1962: 35-78.
21. Sellars W. Empiricism and the philosophy of mind. In: Sellars W. Science, Perception, and Reality. London: Routledge and Kegan Paul, 1963: 127-96.
22. Sellars W. Science, sense impressions, and sensa: a reply to Cornman. Review of Metaphysics 1971;24:391-447.
23. Sellars W. Foundations for a metaphysics of pure process: the Carus lectures of W. Sellars. The Monist 1981;64:3-90.
24. Sellars W. Naturalism and Ontology. Reseda: Ridgeview, 1979.
25. President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. Defining Death. Washington, DC: Government Printing Office, 1981.
26. Zaner RM. Introduction. In: Zaner RM, ed. Death: Beyond Whole-Brain Criteria. Dordrecht: Kluwer Academic Publishers, 1988: 1-14.
27. Puccetti R. Does anyone survive neocortical death? In: Zaner RM, ed. Death: Beyond Whole-Brain Criteria. Dordrecht: Kluwer Academic Publishers, 1988: 75-90.
28. Veatch RM. Whole-brain, neocortical, and higher-brain related concepts. In: Zaner RM, ed. Death: Beyond Whole-Brain Criteria. Dordrecht: Kluwer Academic Publishers, 1988: 171-86.
29. Kurthen M, Linke DB, Moskopp D. Teilhirntod und Ethik. Ethik in der Medizin 1989;1:134-42.
30. Kurthen M, Linke DB. Androiden-Behaviorismus. In: Becker B, ed. Zur Terminologie in der Kognitionsforschung. St Augustin: GMD, 1989: 256-72.

The locked-in syndrome and the behaviorist epistemology of other minds.
