Properties of Language

Noam Chomsky's linguistic research in the late 1950s and 1960s was among the first to use work in formal theories of computation to illuminate some of the properties of the human mind. At that time most work in psychology was dominated by the behaviorist point of view. The emphasis was on the learning of 'verbal materials' - nonsense syllables, randomly constructed lists of words, and the like. There was virtually no work on the learning of materials that were syntactically well structured. And from the behaviorist point of view, to the extent that a theory was required at all, the ideal theory was one that predicted 'observed behavior'.

This is quite a tall order once one moves outside the confines of a well-controlled and suitably circumscribed experimental situation. And even in such an experimental context, a probabilistic prediction seemed to be the best that could be expected. For example, in the early 1960s the psychologist Gordon Bower showed that a one-element Markov process could model the behavior observed in a very circumscribed experimental situation. The experiment was what is known as a paired-associate learning task. In such a task there are a number of items, the ‘stimuli’, each of which is presented together with an item from a second set, the ‘responses’. The subject's task is to learn to give the correct response when one of the stimulus items is shown. Presumably, the learning involves building an association between the paired elements. In Bower's experiment the stimuli were single-digit integers and the responses were alphabetic materials.
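
To make the idea concrete, here is a minimal sketch, in Python, of the kind of one-element (all-or-none) learning model Bower worked with. The details are illustrative assumptions rather than Bower's exact procedure: an item is either in an unlearned or a learned state, the parameter c stands for the probability of learning the association on a trial, and g for the probability of a correct guess while the item is still unlearned.

import random

def simulate_item(c=0.3, g=0.25, n_trials=10, seed=None):
    """Simulate correct/incorrect responses for one paired-associate item."""
    rng = random.Random(seed)
    learned = False          # the item starts in the Unlearned state
    responses = []
    for _ in range(n_trials):
        if learned:
            responses.append(True)               # Learned state: always correct
        else:
            responses.append(rng.random() < g)   # Unlearned state: just a guess
            if rng.random() < c:                 # may jump to the Learned state,
                learned = True                   # which is absorbing
    return responses

print(simulate_item(seed=1))

The point of such a model is that a string of pre-learning errors followed by uninterrupted correct responses falls out of a two-state Markov process with a handful of probabilities.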

A Markov process shares many of the properties of the finite state machines that we studied earlier. You may recall that one of the ways we used to represent a finite state machine is known as a Markov diagram. In fact, Markov was a major figure in the early development of mathematical models of computation (Markov, A. A. Theory of Algorithms. Moscow: Academy of Sciences, 1954). In a Markov process each of the possible transitions from one state to the next has some probability of occurring (and, of course, these probabilities must sum to one). There may also be many possible starting states, each of which has some initial probability (again, these must sum to one). And there need not be a final state. Thus you can see that a finite state machine can be viewed as a special case of the more general idea of a Markov process.
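
A minimal sketch, in Python, of what such a process looks like. The two states and their probabilities are made up for the example: there is a distribution over starting states and, for each state, a distribution over next states, and unlike a deterministic finite state machine no state needs to be designated as final.

import random

# An illustrative two-state Markov process: a distribution over starting
# states and, for each state, a distribution over possible next states.
INITIAL = {"A": 0.6, "B": 0.4}
TRANSITIONS = {
    "A": {"A": 0.7, "B": 0.3},
    "B": {"A": 0.5, "B": 0.5},
}

def sample(dist, rng):
    """Draw one state from a {state: probability} distribution."""
    r, cumulative = rng.random(), 0.0
    for state, p in dist.items():
        cumulative += p
        if r < cumulative:
            return state
    return state  # guard against floating-point round-off

def run_chain(n_steps=10, seed=0):
    """Pick a start state, then follow probabilistic transitions."""
    rng = random.Random(seed)
    state = sample(INITIAL, rng)
    path = [state]
    for _ in range(n_steps):
        state = sample(TRANSITIONS[state], rng)
        path.append(state)
    return path

print(run_chain())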

Markov processes, and stochastic processes in general, were being explored during this period in a variety of domains within the social sciences, including areas of economics, linguistics, political science and sociology as well as psychology. But how could one predict linguistic behavior, even probabilistically? There are a great many possible sentences that anyone might say at any point in time. As I write this, I don't even know exactly what I will say next or exactly how I will say it. And, to make matters worse, Chomsky argued that the number of sentences in any natural language is, in principle, infinite. Perhaps we could come close to predicting the linguistic utterances used in ‘greeting behavior’ with a probabilistic model; but almost anything else seems hopeless. (Political debates and speeches might be an additional exception.)

When you have a game that is impossible to win, don't play that game; find one that you have some chance of winning. This is what Chomsky did: he changed the game. His 1956 article (Chomsky, N. Three models for the description of language. IRE Transactions on Information Theory, 1956, IT-2(3), 113-124) defined a new game. In this game a theory is not asked to predict specific behaviors in a specific context. Rather, the theory is asked to generate all, and only, the syntactically correct strings of words of some language. That is, the theory should capture the essential properties of all language behavior.
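
The flavor of this new game can be suggested with a small Python sketch. The toy grammar and vocabulary below are illustrative inventions, not rules from the 1956 article: a handful of rewrite rules licenses a set of word strings, and the recursive NP rule (an NP may contain a PP, which contains another NP) is what makes that set unbounded even though the grammar itself is finite.

import random

# A toy phrase-structure grammar; the rules and words are illustrative only.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],   # the recursive rule
    "PP":  [["P", "NP"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["dog"], ["cat"], ["park"]],
    "V":   [["chased"], ["slept"]],
    "P":   [["near"]],
}

def generate(symbol="S", rng=None):
    """Rewrite a symbol into a list of words by choosing rules at random."""
    rng = rng or random.Random()
    if symbol not in GRAMMAR:              # a terminal word: stop rewriting
        return [symbol]
    rule = rng.choice(GRAMMAR[symbol])     # pick one expansion of the symbol
    return [word for part in rule for word in generate(part, rng)]

rng = random.Random(3)
for _ in range(5):
    print(" ".join(generate(rng=rng)))

Every string this grammar produces is well formed by its own rules, and strings it cannot produce are excluded; that, in miniature, is the "all and only" requirement.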

This eventually led psychologists to shift their attention from the memorization of linguistically related materials to questions about the kinds of capacities that the human mind must possess in order to use language. The properties of natural language became more important than any specific linguistic utterance.

The figure above lists various general properties that are exemplified in our use of natural language. Cognitive psychology is still attempting to work out the implications of these properties for our theory of the mind. For example, recall the Gestaltists' interest in ambiguous figures. Do ambiguous sentences suggest similar properties of the mind? What is the relation between metaphor and our experience? How do we determine the meaning and aptness of a metaphor?

The starting point for this work on language and the mind was in the area of syntax and it is to this work that we turn next.
