Topics in Cognitive Science 6 (2014) 47–52
Copyright © 2013 Cognitive Science Society, Inc. All rights reserved.
ISSN: 1756-8757 print / 1756-8765 online
DOI: 10.1111/tops.12072

Comments on Quantum Probability Theory

Steven Sloman

Cognitive, Linguistic, & Psychological Sciences, Brown University

Received 26 February 2013; accepted 1 March 2013

Abstract

Quantum probability theory (QP) is the best formal representation available of the most common form of judgment, which involves attribute comparison (inside judgment). People are capable, however, of judgments that involve proportions over sets of instances (outside judgment). Here, the theory does not do so well. I discuss the theory in terms of both descriptive adequacy and normative appropriateness.

Keywords: Cognition; Quantum probability; Nested sets; Probability judgment

1. Comments on Quantum Probability Theory

I believe that quantum probability theory (QP; see Wang, Busemeyer, Atmanspacher, & Pothos, 2013) is the best formal representation available of one aspect of judgment of uncertainty. However, there is another aspect that the theory does not capture so well. Nevertheless, the aspect that QP does capture is the most central to natural, everyday human judgment, and certainly to unschooled judgment.

But before getting into that, I want to address the normative question. Is QP a better normative theory of uncertainty assessment than, say, Bayesian conditioning in the context of classical probability (CP)? If we are talking about the behavior of subatomic particles or about light traveling through the cosmos, then, based on my limited understanding of such things, yes. Apparently, QP provides more accurate predictions in those domains. However, if we are talking about blackjack in Las Vegas, I would put my money (literally) on CP. Different domains require different normative analyses (Shafer & Tversky, 1985). Arguing about whether QP or CP is the right normative theory is like arguing about whether we should wear shorts or a snowsuit. It depends on prevailing conditions, and we should consider other choices, too.

Correspondence should be sent to Steven Sloman, CLPS, Box 1821, Brown University, Providence, RI 02912. E-mail: [email protected]

The truth is that I think probability is overrated. Zadeh (2006) has made the point that people deal with many forms of imprecision, and uncertainty is just one of them. There are also ambiguity, vagueness, risk, and ignorance. Each may demand a different formal structure. Zadeh has used the predicate "middle-aged" to illustrate this point. If Steven is 50 years old, then there is no probability that he is middle-aged; rather, he is in fact middle-aged. He is also middle-aged if he is 49 or 51 or, according to Zadeh, anywhere between 45 and 55. The concept is fuzzy around its borders (what if he is 42 or 58?), not because there exists some probability that he is middle-aged, but because the concept is vague and its applicability is up to the speaker in its context of use. So not everything is captured by probability.

What I like about QP is that the format it uses for encoding representations has the potential to capture more than uncertainty; it can also represent some of these other forms of imprecision. Perhaps it should not be considered a theory of probability, but instead a theory of imprecision (Blutner, Pothos, & Bruza, 2013).

In my endorsement of Busemeyer and Bruza's (2012) book on quantum models of cognition, I said, "Mathematical models of cognition so often seem like mere formal exercises. Quantum theory is a rare exception. Without sacrificing formal rigor, it captures deep insights about the workings of the mind with elegant simplicity." Lee and Vanpaemel (2013) argue that I had it "exactly the wrong way around." They claim that quantum models offer "formal exercises that might produce impressive fits to data but, by their founding assumptions, cannot offer some of the most basic insights into the causes, effects, and relevant factors that underlie the workings of human cognition."

The "founding assumptions" that Lee and Vanpaemel (2013) refer to actually come down to just one: "…quantum theory assumes deterministic causes do not exist, and that only incomplete probabilistic expressions of knowledge are possible." It is not entirely clear what Lee and Vanpaemel mean by "deterministic causes." They refer to the necessity of some causes, but the argument that they review from Jaynes (2003) is an argument about what Bayesians call "explaining away," and it depends on a notion of causal sufficiency, not necessity. It would be odd if they meant that Prob(effect | cause) = 1, because that is a problematic relation for Bayesians: extreme probabilities never permit beliefs to be changed. Presumably something could be learned, at least in principle, to modify one's belief in the strength of any causal relation (e.g., wind causes leaves to rustle, but not necessarily if they are wet). So, if their concern is that QP rules out the case of causal powers that are perfectly strong (causes leading to effects without the possibility of exception), it is a rather odd accusation coming from proponents of CP, as most Bayesians make exactly the assumption they are worried about.
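To see why extreme probabilities forbid belief change, here is a worked illustration under standard Bayesian conditioning (my example, not one from the commentators). Bayes' rule gives Prob(H | D) = Prob(D | H) Prob(H) / Prob(D). If Prob(H) = 1, then Prob(D) = Prob(D | H), so Prob(H | D) = 1 whatever the data D turn out to be; a belief held with certainty can never be revised. A deterministic causal claim is dogmatic in the same way: if one is certain that Prob(effect | cause) = 1, then a single observation of the cause without the effect has probability 0 under that claim, and conditioning on it drives the claim's probability to 0 with no way back.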
Jaynes' (2003) argument, as related by Lee and Vanpaemel, is that "If there is some effect E that does not occur unless some condition C is present, it seems natural to infer that the condition is a necessary causative agent for the effect. If, however, the condition does not always lead to the effect, it seems natural to infer that there must be some other causative factor F, which has not yet been understood." Indeed, a substantial amount of evidence shows that people are very comfortable with this inference schema (e.g., Morris & Larrick, 1995; Sloman, 2005). However, it does not imply either that causes are sufficient or that Prob(effect | cause) = 1.
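The schema can be made concrete with a minimal noisy-OR model in which neither cause is sufficient. The following sketch is mine, with arbitrary numbers; it is not a model from any of the papers discussed here:

    # Noisy-OR model: two independent causes C and F of an effect E, each
    # with causal strength below 1, so neither cause is sufficient.
    p_c, p_f = 0.3, 0.3   # prior probabilities of the two causes (arbitrary)
    w_c, w_f = 0.8, 0.8   # causal strengths (arbitrary, both below 1)

    def p_e(c, f):
        # The effect occurs unless every active cause independently fails.
        return 1.0 - (1.0 - w_c * c) * (1.0 - w_f * f)

    def joint(c, f, e):
        # Joint probability of a full assignment; C and F are independent.
        pc = p_c if c else 1.0 - p_c
        pf = p_f if f else 1.0 - p_f
        pe = p_e(c, f) if e else 1.0 - p_e(c, f)
        return pc * pf * pe

    # P(C = 1 | E = 1): observing the effect raises belief in the cause.
    num = sum(joint(1, f, 1) for f in (0, 1))
    den = sum(joint(c, f, 1) for c in (0, 1) for f in (0, 1))
    print(num / den)   # about 0.60, up from the prior of 0.30

    # P(C = 1 | E = 1, F = 1): learning that the alternative cause was
    # present explains C away, discounting it back toward its prior.
    num = joint(1, 1, 1)
    den = sum(joint(c, 1, 1) for c in (0, 1))
    print(num / den)   # about 0.34

The discounting occurs even though both causal strengths are well below 1, which is the point: explaining away requires only that causes raise the probability of their effects, not that they guarantee them.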

Causal reasoning does not involve only causes and alternative causes but also preventers, disablers, and enablers. The strength of a cause can often (and perhaps always) be attributed to the absence of disablers and the presence of enablers. No matter how likely an effect is to arise from a cause, that effect could in principle be disabled. The rotation of the Earth may cause night and day, but this can be disabled (temporarily) by an eclipse. Moreover, night and day are enabled by the sun. So explaining away is a perfectly good inference schema, but it makes hidden assumptions about causal strength that could be violated. Specifically, it assumes that causes can be treated as sufficient, though in fact they never are; they require disablers to be absent and enablers to be present. The possibility of either condition being violated means we must treat causes as potentially ineffective. So both QP and CP have this right.

More important, there are basic insights offered by quantum models that Lee and Vanpaemel missed, insights that Pothos and Busemeyer (2013) do a good job of reviewing. Here are some elaborations of the points made by Pothos and Busemeyer. Superposition can be associated not only with uncertainty but also with vagueness and ambiguity (Blutner, Pothos, & Bruza, 2013). The meaning of utterances in a discourse depends on the question under discussion (Roberts, 1996); how sentences are understood depends on the frame in which they are uttered. If this is true of sentences, it would be odd if it were not also true of mental representations. What we do know is that concepts are just as variable as word meanings, in the sense that category membership judgments vary from week to week (McCloskey & Glucksberg, 1978), similarity judgments depend on which dimensions of the objects are made salient (Medin, Goldstone, & Gentner, 1993; Tversky, 1977), and frames of reference determine how we value things (Kahneman & Tversky, 1984). So the idea that meaning requires the projection of some sort of complex representation onto a frame of reference is very well motivated by psychological facts. With respect to the incompatibility of questions, psychology is full of order dependence among questions (a fact that confounds a huge amount of survey research; cf. Sirken et al., 1999). So QP theory is consistent with some of the most central phenomena in psychology.

I agree with some detractors that some of these explanations are post hoc. However, CP suffers from this complaint in spades (Sloman & Fernbach, 2008). At this point, QP theory is just a theoretical framework and not a fully fledged theory of judgment. This is true of all theoretical programs in the field of judgment that attempt to do more than fit data from a small number of experiments.

I think the value of the QP framework derives from the fact that it is defined in a similarity space. In contrast, CP models are usually defined in a hypothesis space. At some abstract level, these are interchangeable, but they tend to elicit different assumptions when modeling specific cases. QP asks the theorist to think about the attributes of the objects of judgment and the attributes of the questions asked, and to represent their similarity to one another. CP models ask the theorist to think about all possible conclusions that one might draw from the evidence and to assign a probability to each one, along with an assessment of how those probabilities would change in the face of new evidence.
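To make the contrast concrete, here is a minimal sketch of the core QP machinery: a belief state is a unit vector, a yes/no question is a projection, and incompatible questions correspond to non-commuting projections, which produces question-order effects. The vectors and angles are arbitrary choices of mine for illustration, not a fitted model:

    import numpy as np

    def ask_yes(state, axis):
        # Project the belief state onto the question's "yes" axis. The squared
        # length of the projection is the probability of answering yes; the
        # normalized projection is the revised state after answering yes.
        proj = axis * (axis @ state)
        p = float(proj @ proj)
        return p, proj / np.sqrt(p)

    # Two incompatible questions as unit vectors at different angles,
    # plus an initial belief state (all angles arbitrary).
    a = np.array([1.0, 0.0])                                # question A
    b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])    # question B
    psi = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])  # belief state

    p_a, psi_a = ask_yes(psi, a)
    p_b, psi_b = ask_yes(psi, b)
    print(p_a * ask_yes(psi_a, b)[0])   # P(yes to A, then yes to B) = 0.125
    print(p_b * ask_yes(psi_b, a)[0])   # P(yes to B, then yes to A) is about 0.467

The two orders disagree because answering the first question changes the state against which the second is evaluated; in CP, where events are sets and conjunction is intersection, the two orders would necessarily agree.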

In most cases, I think the former, similarity-based analysis corresponds more closely to the analysis that people engage in when making a judgment. Lagnado and Sloman (2004) made this argument. We argued for a psychological analog, in the domain of probability judgment, of the intensional/extensional dichotomy of semantics. On one hand, people make judgments from the inside, in which representations of two objects' attributes are compared, as in similarity assessment. Most everyday judgments are made this way, hallmark examples being judgments made using the representativeness heuristic. As pointed out by Pothos and Busemeyer (2013), QP represents these cases nicely: "the QP explanation for the conjunction fallacy can be seen as a formalization of the representativeness heuristic (Tversky & Kahneman, 1983)" (p. 43). I worry about the concerns brought up by Hampton (2013) and Tentori and Crupi (2013), but the approach is by far the most advanced we have for formally modeling these kinds of judgments.

On the other hand, with sufficient cuing, people can make judgments from the outside. For instance, when they make sense, frequency formats can reduce base-rate neglect (Cosmides & Tooby, 1996) and conjunction fallacies (Tversky & Kahneman, 1983; see Pleskac, Kvam, & Yu, 2013). More generally, making set relations transparent can reduce error in probability judgment (Barbey & Sloman, 2007). Even though people tend to treat property inferences from a superordinate category to a subordinate one as probabilistic, when the question is framed as a logical syllogism, people apply the logic of set inclusion (Sloman, 1998). Presenting an argument in logical form is a cue to think about the problem in terms of sets. Obviously people have the capacity to think in terms of sets; otherwise nobody would understand Venn diagrams. That is why models that have us thinking in set-theoretic terms can describe performance very well on certain kinds of tasks, namely those tasks that are naturally framed in terms of sets of instances. This occurs when we reason about syllogisms (All As are Bs, All Bs are Cs, are All Cs As?), and that is why mental model theory, which represents the objects of reasoning in terms of sets of possibilities, does such a good job on such problems and their variants (e.g., Bucciarelli & Johnson-Laird, 1999). Similarly, cuing people to think in terms of nested sets, by presenting problems in terms of frequencies or chances and using proportions in which the sizes of the target and reference classes are spelled out explicitly, aids probability judgment (a worked example appears below).

I would expect CP to do a better job than QP theory at describing judgments made from the outside. After all, the outside view has us thinking in terms of the proportion of items in a set, just as CP does. CP is just a form of coherent reasoning about measures of set size and overlap. CP may of course be limited in its ability to represent humans' outside judgments (Fox & Levav, 2004; Johnson-Laird, Legrenzi, Girotto, Legrenzi, & Caverni, 1999). Nevertheless, it certainly provides a better normative theory of such judgments. To the degree that the task at hand asks people for a measure over sets, CP must be a better normative theory of task performance than QP.
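A standard illustration of how transparent set relations help, using the numbers from a classic diagnosis problem in this literature (they serve here only as illustration): told that a disease has a 1% base rate, that the test detects it 80% of the time, and that it falsely signals it 9.6% of the time, people commonly judge Prob(disease | positive test) to be near 80%. Reframed as frequencies over nested sets, the answer becomes transparent: of 1,000 people, 10 have the disease, and 8 of them test positive; of the 990 who do not, about 95 test positive; so the judged probability should be 8 out of 103, or roughly 8%.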
In sum, there are different kinds of judgments that lend themselves to distinct psychological analyses. One analysis describes computations in a similarity space and involves parallel multi-attribute comparisons; the other involves rules that count and combine sets. This distinction maps cleanly onto at least one dual-system theory (Sloman, 1996).

References

Barbey, A. K., & Sloman, S. A. (2007). Base-rate respect: From ecological rationality to dual processes. Behavioral and Brain Sciences, 30, 241–254.
Blutner, R., Pothos, E. M., & Bruza, P. D. (2013). A quantum probability perspective on borderline vagueness. Topics in Cognitive Science, 5(4), 711–736.
Bucciarelli, M., & Johnson-Laird, P. N. (1999). Strategies in syllogistic reasoning. Cognitive Science, 23(3), 247–303.
Busemeyer, J., & Bruza, P. D. (2012). Quantum models of cognition and decision. Cambridge, England: Cambridge University Press.
Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58(1), 1–73.
Fox, C. R., & Levav, J. (2004). Partition-edit-count: Naive extensional reasoning in judgment of conditional probability. Journal of Experimental Psychology: General, 133(4), 626.
Hampton, J. A. (2013). Quantum probability and conceptual combination in conjunctions. Behavioral and Brain Sciences, 36(3), 290–291.
Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge, England: Cambridge University Press.
Johnson-Laird, P. N., Legrenzi, P., Girotto, V., Legrenzi, M. S., & Caverni, J. P. (1999). Naive probability: A mental model theory of extensional reasoning. Psychological Review, 106(1), 62.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341.
Lagnado, D. A., & Sloman, S. A. (2004). Inside and outside probability judgment. In Blackwell handbook of judgment and decision making (pp. 157–176). Oxford, England: Blackwell.
Lee, M. D., & Vanpaemel, W. (2013). Quantum models of cognition as Orwellian newspeak. Behavioral and Brain Sciences, 36(3), 295–296.
McCloskey, M. E., & Glucksberg, S. (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6(4), 462–472.
Medin, D. L., Goldstone, R. L., & Gentner, D. (1993). Respects for similarity. Psychological Review, 100(2), 254.
Morris, M. W., & Larrick, R. P. (1995). When one cause casts doubt on another: A normative analysis of discounting in causal attribution. Psychological Review, 102(2), 331–355.
Pleskac, T. J., Kvam, P. D., & Yu, S. (2013). What's the predicted outcome? Explanatory and predictive properties of the QP framework. Behavioral and Brain Sciences, 36(3), 303–304.
Pothos, E. M., & Busemeyer, J. R. (2013). Can quantum probability provide a new direction for cognitive modeling? Behavioral and Brain Sciences, 36(3), 255–274.
Roberts, C. (1996). Information structure: Towards an integrated theory of formal pragmatics. In J.-H. Yoon & A. Kathol (Eds.), OSU working papers in linguistics, Vol. 49: Papers in semantics (pp. 91–136). Columbus, OH: The Ohio State University Department of Linguistics.
Shafer, G., & Tversky, A. (1985). Languages and designs for probability judgment. Cognitive Science, 9, 309–339.
Sirken, M. G., Herrmann, D. J., Tourangeau, R., Tanur, J. M., Schwarz, N., & Schechter, S. (1999). Cognition and survey research. New York: Wiley.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3–22.
Sloman, S. A. (1998). Categorical inference is not a tree: The myth of inheritance hierarchies. Cognitive Psychology, 35, 1–33.
Sloman, S. A. (2005). Causal models: How we think about the world and its alternatives. New York: Oxford University Press.
Sloman, S. A., & Fernbach, P. M. (2008). The value of rational analysis: An assessment of causal reasoning and learning. In N. Chater & M. Oaksford (Eds.), The probabilistic mind: Prospects for Bayesian cognitive science. Oxford, England: Oxford University Press.
Tentori, K., & Crupi, V. (2013). Why quantum probability does not explain the conjunction fallacy. Behavioral and Brain Sciences, 36(3), 308–310.
Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293.
Wang, Z., Busemeyer, J. R., Atmanspacher, H., & Pothos, E. M. (2013). The potential to use quantum theory to build models of cognition. Topics in Cognitive Science, 5(4), 672–688.
Zadeh, L. A. (2006). Generalized theory of uncertainty (GTU)—principal concepts and ideas. Computational Statistics & Data Analysis, 51, 15–46.
