Journal of Psycholinguistic Research, Vol. 2, No. 3, 1973

Abstracts of Presentations at the Psycholinguistics Circle of New York

Doris Aaronson1

The Psycholinguistics Circle of New York is open to researchers concerned with language and its psychological aspects. Monthly meetings, held at New York University, include speakers, symposia, and debates on topics of interest to psycholinguists. Participants are invited to submit research and theoretical papers to the Journal of Psycholinguistic Research. Speakers for 1972-1973 include: Samuel Anderson, New York State Psychiatric Institute, "The Syllable: A Mental Fiction or a Unit of the Speech Code?"; Terry Winograd, Massachusetts Institute of Technology, "A Model for Language Understanding"; Tom K. Landauer, Bell Research Labs, "Word Frequency and Word Memory"; Herbert Rubenstein, Lehigh University, "The Storage of Inflectional and Derivational Forms in the Internal Lexicon"; Michael Studdert-Kennedy, Queens College, "Cracking the Phonetic Code"; David Meyer, Bell Research Labs, "Activation of Lexical Memory: Contextual Effects on Word Recognition"; Lois Bloom, Columbia University, "Spontaneous Imitation in Children's Speech." Researchers interested in speaking at or becoming members of the Circle should write to Doris Aaronson, Psychology Department, 4 Washington Pl., Rm. 858, N.Y.U., New York, N.Y. 10003. Below are abstracts of some of the 1971-1972 programs. The abstracts from Art Reber and Tom Bever are from a debate on the topic "The Click Phenomenon: What Is Its Relationship to the Attentional and/or Linguistic Processing of Sentences?"

1Department of Psychology, New York University, New York, New York. Dr. Aaronson is the Coordinator and Abstract Editor of the Psycholinguistics Circle of New York.

© 1973 Plenum Publishing Corporation, 227 West 17th Street, New York, N.Y. 10011.


Exploration of the Effect of Information Density and the Specificity of Instructional Objectives on Learning from Text

Ernst Z. Rothkopf2

2Bell Laboratories, Murray Hill, New Jersey.

Intentional and incidental learning were studied as a function of (1) the density in the text of sentences relevant to instructional objectives and (2) the specificity with which instructional objectives were described. The major findings were: (1) more intentional learning resulted from specific than from broad objectives, but incidental learning was not affected by this factor; (2) increases in the density of instructional objectives decreased the likelihood that any intentional item was learned, but did not affect performance on incidental items. Intentional learning was generally greater than incidental learning. Performance on both intentional and incidental items was considerably higher when instructional goals were explicitly described than when directions similar to those commonly employed in learning experiments were used.

At least three factors were confounded in the variable characterized as information density in the initial experiment: (1) the number of objectives presented to S, (2) the number of relevant sentences in the text, and (3) the ratio of relevant sentences to the total number of sentences in the text. Subsequent experiments indicated that the number of objectives presented to S and the ratio of relevant sentences to the total number of sentences must play a relatively small role in producing the decreases in performance on intentional items associated with increases in information density in the initial study. This conclusion is limited to the texts used in our research, which varied in length between 500 and 1500 words. We found that the probability of correct performance on any intentional item was constant for a given information density across passages approximately 500, 1000, and 1500 words in length. On the other hand, the likelihood of correct performance on an incidental item diminished with passage length.
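
The confound can be stated compactly in code. The sketch below (not from the original study; all counts are hypothetical) computes the three factors for a passage and shows how varying passage length at a fixed relevance ratio holds factor (3) constant while factors (1) and (2) vary:

```python
# A minimal sketch of why "information density" confounds three quantities.
# All numbers are hypothetical, chosen only for illustration.

def density_factors(n_objectives, n_relevant, n_total):
    """Return the three confounded factors for one passage."""
    return {
        "objectives": n_objectives,               # (1) objectives shown to S
        "relevant_sentences": n_relevant,         # (2) relevant sentences in text
        "relevance_ratio": n_relevant / n_total,  # (3) relevant / total sentences
    }

# Raising density in a fixed-length text moves all three factors together:
low  = density_factors(n_objectives=4,  n_relevant=4,  n_total=40)
high = density_factors(n_objectives=16, n_relevant=16, n_total=40)

# Tripling passage length at a constant ratio (cf. the 500/1000/1500-word
# follow-ups) holds factor (3) fixed while (1) and (2) vary:
short = density_factors(n_objectives=4,  n_relevant=4,  n_total=40)
long_ = density_factors(n_objectives=12, n_relevant=12, n_total=120)

for name, f in [("low", low), ("high", high), ("short", short), ("long", long_)]:
    print(name, f)
```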

Levels of Syntactic Realization in Oral Reading

Eric Brown3

3New York University, New York, New York.

Two different perspectives on reading theory and research were reviewed. The first was characterized as the "visual" approach to the problem, in which it is hypothesized that the fluent reader is able to go from the visual processing of letter configurations or features directly to some type of semantic interpretation, bypassing any need for auditory or articulatory mediation. The second approach hypothesizes an additional step in this process, involving auditory rehearsal, articulatory referencing, or abstract distinctive-feature representations of lexical items. A review of certain pertinent research in visual information processing, oral reading, subvocalization, and reading speed suggested that the visual hypothesis has no particular adequacy in explaining the available evidence. In particular, it was proposed that the research in visual information processing demonstrates severe limitations in the temporal capacity of the visual system for language, and that even in visual-perception-to-writing tasks some type of auditory rehearsal component must be incorporated to account for what is retained. Moreover, the research on implicit speech or subvocalization in silent reading indicated that interfering with subvocal activity produces marked decrements in silent reading comprehension. Finally, evidence from studies on rapid reading appeared to place definite limits on central processing speeds, speeds which are compatible with some type of auditory or articulatory mediation.

In support of a previously published theory of reading by the present experimenter, which posits a necessary level of articulatory feature representation between visual perception and semantic interpretation, evidence was presented that a proficient oral reader demonstrates his grammatical comprehension of a passage by the occurrence and duration of his pausing. This experiment investigated the predictability of pause time in a 1537-word spoken message. A professionally read rendition paced at 164 wpm was analyzed from three points of view: (1) an immediate-constituent, or surface structure, syntactic analysis; (2) a stochastic information analysis of all lexical items in context; and (3) a deep structure analogue, or clause, analysis. Results indicated that 64% of the pause variance could be predicted from the syntactic measures. Both the surface structure and deep structure analogue measures of syntactic complexity yielded reliable predictive variance not accounted for in the overlap of the two variables, suggesting that both levels of linguistic representation were important determiners of pause structure in an oral reading performance.

It was further suggested that the level of understanding necessary for oral reading can be made quite precise. An acceptable oral rendering does not mean that the message in its fullest sense is understood; there is too much evidence from oral reading problems to the contrary. However, it does mean that all grammatical relations are comprehended, including the grammatical categories of words, and that the internal structure of the words themselves is well understood. What is left is the psychological process of understanding language regardless of modality, a process undoubtedly made up of large components of context, reference, set, and attention. Nonetheless, it is at this level of processing that the experimenter has defined reading as a completed act.
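
The variance-partitioning claim can be made concrete with a small sketch. The fragment below is my reconstruction, not Brown's analysis; the data are randomly generated stand-ins. It shows how one would test whether each syntactic level adds predictive variance beyond the other:

```python
# Toy hierarchical regression: predict per-word pause time from a
# surface-structure measure and a correlated deep-structure (clause) measure.
# In the study the syntactic measures predicted ~64% of the pause variance,
# with each level contributing unique variance; the data here are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # hypothetical word positions
surface = rng.normal(size=n)               # surface-constituent depth at each word
deep = 0.5 * surface + rng.normal(size=n)  # correlated clause-level measure
pause = 0.6 * surface + 0.4 * deep + rng.normal(size=n)  # pause duration

def r_squared(predictors, y):
    """Ordinary least squares R^2 with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_surface = r_squared([surface], pause)
r2_deep    = r_squared([deep], pause)
r2_both    = r_squared([surface, deep], pause)

# Unique variance of each level = full-model R^2 minus the other level alone;
# both differences being reliably positive is the basis for positing two
# levels of linguistic representation.
print(f"surface alone {r2_surface:.2f}, deep alone {r2_deep:.2f}, both {r2_both:.2f}")
print(f"unique surface {r2_both - r2_deep:.2f}, unique deep {r2_both - r2_surface:.2f}")
```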

Investigations of Precategorical Acoustic Storage

Robert G. Crowder4

4Yale University, New Haven, Connecticut.

Three kinds of evidence support the existence of a prelinguistic acoustic memory, similar in properties to visual iconic storage but more persistent in time. First, the advantage of auditory over visual presentation in immediate memory is consistent with the idea that subjects have available an additional source of information in the auditory case that is not present in the visual case: whereas both presentation modalities lead to a form of coding based on each item's name, the auditory modality also eventuates in a form of coding based on its sound. The second source of support comes from the stimulus suffix effect, the finding that a redundant word following the last memory item produces a selective impairment in recall for the final serial positions of immediate-memory lists presented acoustically. The argument in this case is that the redundant item displaces information from the prelinguistic acoustic memory, returning performance to what it would have been had there been only coding based on nominal information in the first place, that is, returning performance to that obtained in the visual modality. It is of considerable importance that both the modality and suffix effects are obtained over the same region, the terminal items, of the serial position curve. The third demonstration of precategorical acoustic storage is a situation where the subject receives three simultaneous auditory messages followed by a visual cue instructing him which channel he should report. The finding is that, provided the poststimulus cue is received promptly after stimulus presentation, the subject can improve his report of a channel considerably beyond that expected from conditions in which he must report all channels; however, if the cue is delayed, this advantage of partial over whole report disappears.

Two groups of experiments illuminate some of the properties of precategorical acoustic storage. The first group deals with the issue of what types of acoustic events will cause a suffix effect following acoustic presentation of memory-span items. Three classes of variables have been investigated, with the following results. Similarity between the suffix element and the memory elements along semantic, cognitive dimensions makes absolutely no difference. Similarity between the suffix and the memory elements along physical dimensions such as voice quality and location affects the magnitude of the suffix effect, with dissimilar suffixes producing a result intermediate between control conditions and suffix conditions in which the physical properties of the redundant item match those of the memory items. Finally, suffix items which are not speech sounds, such as tones and buzzers, produce no effect whatever on performance. These results conform exactly to the pattern expected of a prelinguistic system. Another way of discovering the type of information held in precategorical acoustic storage is to compare the magnitude of the modality and suffix effects as a function of the type of information to be remembered. Several experiments indicate that when the to-be-remembered information is contained in initial stop consonants of consonant-vowel syllables, these effects do not occur, whereas when the memory information is contained in vowels or in ordinary stimuli they do. The selectivity of acoustic memory for certain types of speech sounds may have something to do with mechanisms involved in the perception of speech.
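
As a schematic illustration of how the modality and suffix findings fit a two-store account, the toy model below generates the qualitative serial-position patterns described above. This is my construal, not Crowder's formalism, and all probabilities are invented:

```python
# Two-store sketch: every item gets a "nominal" (categorized) trace; auditory
# presentation adds a precategorical acoustic trace that benefits only the
# terminal positions and is displaced by a physically similar speech suffix.

def recall_curve(positions, auditory, suffix, suffix_similarity=1.0):
    curve = []
    for pos in range(positions):
        p = 0.55 + 0.2 * (pos == 0)          # nominal coding, mild primacy
        if auditory:
            recency = max(0, pos - (positions - 4)) / 4.0  # last ~3 items
            boost = 0.35 * recency            # precategorical acoustic store
            if suffix:
                # A suffix matching the items in voice/location wipes the
                # store; tones and buzzers act like similarity 0.0.
                boost *= (1 - suffix_similarity)
            p += boost
        curve.append(round(min(p, 1.0), 2))
    return curve

print("visual:           ", recall_curve(8, auditory=False, suffix=False))
print("auditory:         ", recall_curve(8, auditory=True,  suffix=False))
print("matching suffix:  ", recall_curve(8, auditory=True,  suffix=True))
print("different voice:  ", recall_curve(8, auditory=True,  suffix=True,
                                          suffix_similarity=0.5))
# The matching suffix returns the terminal positions to the visual curve;
# the dissimilar suffix gives the intermediate result described above.
```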

Engaging and Disengaging the Speech Processor

Ruth S. Day5

5Yale University and Haskins Laboratories, New Haven, Connecticut.

Some auditory stimuli are perceived as coffee-pot gurgles, bird chirps, and other nonspeech sounds, while other stimuli are perceived as speech. A general auditory system appears to be sufficient to process nonspeech signals, but a specialized linguistic "decoder" is required to process speech. What engages the speech processing system? Previous studies compared perception of speech stimuli in one condition with perception of nonspeech stimuli in another condition. The present approach used only speech stimuli but varied them simultaneously along a linguistic and a nonlinguistic dimension. The linguistic dimension was place of articulation for stop consonants (e.g., /ba/ vs. /da/), while the nonlinguistic dimension was fundamental frequency (e.g., low vs. high pitch). Each stimulus could thus be categorized along both dimensions (e.g., /ba/-low). Such multidimensional stimuli were used in a variety of paradigms. In each, the same subjects were required to make judgments along the linguistic dimension in one condition and along the nonlinguistic dimension in another condition. (1) Dichotic listening. A different item was presented to each ear, with one leading by a short interval. When subjects had to determine which consonant began first, there was a right-ear advantage; when the same tape was replayed and they had to determine the pitch level of the leading stimulus, there was a left-ear advantage. These results suggest that linguistic and nonlinguistic dimensions of the same signal are capable of engaging different processing systems. (2) Evoked potentials. Neural responses evoked by the same binaural speech stimulus (/ba/-low) were recorded during two-choice identification tasks. In the linguistic task the other stimulus was /da/-low, while in the nonlinguistic task it was /ba/-high. Evoked potentials from the two tasks were significantly different over the left hemisphere but identical over the right hemisphere. These results suggest that the left hemisphere has a special mechanism which is engaged only when a linguistic distinction is made. (3) Speeded classification. When subjects had to indicate which consonant occurred in a given syllable, reaction time increased substantially when irrelevant variation in pitch was present. However, when they had to indicate which pitch occurred, reaction time increased only slightly when there was irrelevant variation in consonants. These results suggest that it is relatively easy to disengage the speech processor in order to perform a nonlinguistic task, but difficult to accomplish the opposite.

In conclusion, the presence of speech stimuli does not guarantee that the speech processor will be engaged. Instead, the nature of the task requirements can determine whether items will be processed in a linguistic or a nonlinguistic fashion.
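
A minimal sketch of how the speeded-classification asymmetry is scored follows; the reaction times are hypothetical stand-ins, not Day's data:

```python
# Interference = slowing produced by irrelevant variation on the other
# dimension, relative to a baseline in which that dimension is held constant.
rt = {
    # (judged dimension, irrelevant dimension varies?): mean RT in ms (invented)
    ("consonant", False): 450, ("consonant", True): 520,
    ("pitch", False): 430,     ("pitch", True): 445,
}

for dim in ("consonant", "pitch"):
    interference = rt[(dim, True)] - rt[(dim, False)]
    print(f"{dim} judgments: {interference} ms interference")

# Small interference on pitch judgments: the speech processor is easily
# disengaged for a nonlinguistic task. Large interference on consonant
# judgments: the opposite disengagement is difficult.
```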

On the Form of a Generative Phonology for Early Stages of Development

Martin D. S. Braine6

6New York University, New York, New York.

The talk was based on a paper, "On what might constitute learnable phonology" (Language, in press). A generative treatment of the early stages of development of two children was presented. The treatment leads to a conception of phonological development according to which lexical representations are acquired by hearing the phonemes in a word. That is, phonemes are auditory units for the child. The main learning process in pronunciation is one of learning how to make these units, i.e., of discovering and gaining control over the articulatory features required to make the auditory units. The lexical representation differs from the phonetic output at all stages of development because actual pronunciations are constrained by primitive or acquired articulatory processes that have the effect of imposing a phonotactic filter on the speech output. The phonological rules at any stage depict the degree of articulatory control or the lack of it, including the constraints on coarticulation imposed by the filter. This is the sense in which phonological rules represent "competence." The phoneme concept required by this conception is very similar to Sapir's concept of the phoneme: it is a phoneme concept in which the invariance and biuniqueness conditions are not imposed; hence it is unlike the taxonomic phoneme. At the same time, since the phonemes of a word are acquired by hearing them, it is much more concrete than the phonemes of current generative phonology. It was speculated that some phonological units more abstract than the Sapirian phoneme are probably eventually learned, because the pattern-learning mechanisms used to acquire syntax may well cause common alternations between phonemes to be registered. Hence, it was argued that a bilevel phonology is plausible at relatively late stages of development: a morphophonemic level in which morphophonemes are mapped into Sapirian phonemes, and a lower level which assigns feature representations and develops these, via the phonotactic filter and stress computations, into the phonetic output.

As against this reasonable and plausible (I believe) view of phonological learning, the abstract phonology of current generative grammar makes phonological learning an essentially magical process. It is also unsupported by data, since the entire research strategy and logic of argumentation in current work builds in the counterintuitive assumption that the organization of human memory is such as to avoid redundancy of representation even at the cost of extremely complex computations at retrieval.
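
The claimed relation between lexical representation and phonetic output can be rendered as a toy pipeline. This is my construal, not Braine's notation; the two filter rules are common child-phonology processes, used purely as examples:

```python
# Lexical representation = the auditorily acquired phoneme string; output =
# that string passed through an articulatory "phonotactic filter". The rules
# at a given stage describe exactly where representation and output diverge.
import re

def phonotactic_filter(phonemes: str) -> str:
    out = re.sub(r"s(?=[ptk])", "", phonemes)       # reduce /s/+stop clusters
    out = out.replace("z", "d").replace("s", "t")   # "stop" remaining fricatives
    return out

# Broad transcriptions, hypothetical child vocabulary:
for word in ["stap", "sun", "kat"]:
    print(f"/{word}/ -> [{phonotactic_filter(word)}]")
# /stap/ -> [tap], /sun/ -> [tun], /kat/ -> [kat]: the lexical form differs
# from the output only where the filter applies, so gaining articulatory
# control amounts to relaxing filter rules rather than relearning the lexicon.
```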


What Clicks May Tell Us About Speech Perception

Arthur S. Reber7

7Brooklyn College, Brooklyn, New York.

Under the appropriate experimental conditions, fluent speakers of English tend to mislocate a spot of interference (a "click") embedded in or superimposed upon a linguistic message. There are currently two theoretical interpretations of the pattern of mislocations. One is based upon the syntactic and semantic features of the message and is referred to here as the linguistic hypothesis. The other is based upon attentional priorities, memory factors, and response biases and is referred to here as the attentional hypothesis. According to the linguistic hypothesis (Fodor & Bever, 1965; Bever, Lackner, & Kirk, 1969; Bever, Lackner, & Stolz, 1969), Ss tend to perceive clicks as occurring in or toward the break between major syntactic structures (whether or not they actually occurred there). The argument stems from the Gestalt suggestion that basic psychological units (which may include linguistic units) have a coherency that resists interference by extraneous stimuli. According to the attentional hypothesis (Reber & Anderson, 1970; Reber, 1973), the linguistic material and the click can be regarded as two separate messages, only one of which can be within the S's attentional focus at any one time. Following Titchener's "law of prior entry," the pattern of mislocations is assumed to reflect the S's attentional priorities at the time of occurrence of the click. If the click has priority, it will be perceived earlier than it actually occurred and preposed in the sentence; if the sentence has priority, the click will be perceived as occurring later, or postposed. In addition, the large number of responses made toward and into the major syntactic breaks is assumed to reflect a strong response bias independent of the physical location of the click.

The following summary of empirical findings was cited as evidence against the linguistic hypothesis and as support for at least a qualitative version of the attentional hypothesis. (1) Early in an experimental session there is a tendency to prepose clicks; with practice this tendency gradually shifts to postposition. The degree to which the shift occurs is related to the amount of information in the stimulus materials. (2) Clicks occurring early in sentences show overall postpositional mislocations; those occurring late show overall prepositional mislocations. These effects are found regardless of the location of the major break. (3) The pattern of directional errors for clicks on either side of major syntactic breaks is not symmetric, but rather shows the overall pattern characteristic of the temporal location of the click in the message. (4) An effort to assess response bias (using "subliminal" clicks) showed that Ss have a strong tendency to locate a click in or immediately adjacent to a major syntactic break, even when no click occurs. Adjusting the statistics of correct responding for such response biases showed that, contrary to previous findings, clicks objectively within major breaks are no more easily located than clicks elsewhere in the sentence. (5) Further, it was shown that the distribution of correct placement conforms to simple short-term memory notions: early-occurring clicks have a high error rate, late-occurring clicks have a lower error rate, and these effects show up regardless of the location of the major linguistic break.

The effects of linguistic factors on click migration seem to operate primarily on the manner in which the sentential material controls the S's attentional priorities and response biases, and these factors in turn determine the pattern of click mislocations. However, the issues are more complicated than had previously been assumed in the literature. Depending on the particular experimental conditions, the data may support either the linguistic or the attentional hypothesis. The most critical of the experimental variables seems to be the subject's task: in much of the work of Bever, the subject is required to reproduce the entire message before making the location response; in the work of Reber, printed sentences are read by the subject, although not until after the auditory message is concluded.
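
The abstract does not give the bias-adjustment formula used in finding (4). One standard possibility, sketched here with hypothetical rates, is a correction-for-guessing style rescaling against the baseline measured on the "subliminal" (no-click) trials:

```python
# Treat the rate at which Ss place a click at a position on no-click trials
# as a guessing baseline, and rescale observed accuracy against it.
def corrected_rate(p_observed: float, p_bias: float) -> float:
    """Correct an observed correct-location rate for a bias baseline."""
    return (p_observed - p_bias) / (1 - p_bias)

# Hypothetical rates: clicks inside the major break look easier (.60 vs .45)
# until the much larger bias toward the break (.35 vs .10) is removed.
in_break  = corrected_rate(0.60, 0.35)
elsewhere = corrected_rate(0.45, 0.10)
print(f"in break {in_break:.2f}, elsewhere {elsewhere:.2f}")  # ~0.38 vs ~0.39
# After correction the apparent advantage for clicks within breaks vanishes,
# which is the pattern reported in finding (4).
```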

Serial Position and Response Biases Do Not Account for the Effect of Syntactic Structure on the Location of Brief Noises During Sentences

Tom Bever8

8Columbia University, New York, New York.

It has been found previously that brief noises ("clicks") superimposed on spoken sentences are systematically mislocated as having occurred between clauses, thus indicating the behavioral segmentation of the clause. The sequence of events on each trial is: (A) hear sentence and click; (B) place into STM (short-term memory); (C) extract stimuli from STM; (D) write out sentence; (E) indicate click location in sentence. The previous results have left open various nonperceptual sources of the pattern of mislocation, depending on the point in the sequence at which the systematic displacement occurs:


(1) A pure response bias, not based on segmentation in perception or in memory (i.e., the effect occurs only at E). (2) An effect of writing out the sentence (i.e., at D). (3) An effect of extracting the sentence from memory (i.e., at C). (4) An effect of encoding the sentence into STM (i.e., at B). (5) An effect of perceiving the sentence (i.e., at A).

These various possibilities were examined with a technique in which certain trials contain no objective interrupting noise: subjects' responses on these trials elicit the baseline response bias in each experimental condition. (In all paradigms a soft 35-msec 100-Hz tone in a white-noise background is used to make it plausible to S that on critical trials with no tone he/she simply missed it, as opposed to there having been an equipment failure.) Each of the paradigms used the materials described in Bever et al. (1969, Experiment 2) with 20 American-born subjects. The materials vary the serial position of the clause break from the fourth word to the eighth word. All effects in Bever et al. and those reported below are significant for each clause-break serial position.

1. S always marks his judgment of the tone location on a pretyped script of the sentence. The results show the same effect of clause segmentation on reported tone location whether a tone was actually present or not.

2. S responds on a typed script and is given information as to the approximate click location (±2 syllables around actual tones; ±2 syllables around the corresponding positions for stimuli lacking tones). Same result as (1).

3. S must be prepared to write out the sentence, although on most trials a pretyped script is presented. Even on trials when a script is presented, there is a significantly greater effect of clause structure on the reported tone location when tones are actually presented than for sentences without tones.

4. Same as (3), except S is also given information as to the approximate location of clicks on the prepared scripts (as in 2). Same result as (3).

All paradigms show some effect of clause structure on reported tone location even on trials with no objective tone: that is, a structure-induced response bias occurs at every point in the experimental sequence. However, results (3) and (4) show that a response bias does not account for the magnitude of the effect when S must encode the sentence; result (3) shows that recalling and writing out the sentence does not cause the effect, and result (4) shows that this is true even when S is given an approximate idea of the location. That is, the processing effect of clause structure occurs before locating the tone on the script, before writing out the sentence, and before extracting the sentence from memory. Rather, the effect occurs while listening to the stimuli and/or while encoding them into STM.
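
The role of the no-tone control trials can be summarized in a short sketch. This is my formulation of the logic above, with hypothetical proportions:

```python
# The clause-attraction effect is measured separately for real-tone and
# no-tone trials; no-tone trials estimate pure response bias, and whatever
# real tones add on top of that baseline must arise earlier in the A-E
# sequence than the response stage.
attraction = {
    # proportion of location responses placed in/adjacent to the clause break
    ("script given", "tone"):    0.55,
    ("script given", "no tone"): 0.52,   # paradigms 1-2: bias alone suffices
    ("must encode",  "tone"):    0.68,
    ("must encode",  "no tone"): 0.51,   # paradigms 3-4: real tones add an effect
}

for condition in ("script given", "must encode"):
    extra = attraction[(condition, "tone")] - attraction[(condition, "no tone")]
    print(f"{condition}: effect beyond response bias = {extra:+.2f}")
# A positive difference only when S must encode the sentence places the
# processing effect of clause structure at listening/encoding (A and/or B).
```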
