Syntax: A Generative Introduction, Third Edition. Andrew Carnie. Wiley-Blackwell.

Subconscious knowledge, like how to speak or the ability to visually identify discrete objects, is acquired. In part, this explains why classes in the formal grammar of a foreign language often fail abysmally to train people to speak those languages. By contrast, being immersed in an environment where you can subconsciously acquire a language is much more effective. Not all rules of grammar are acquired, however. Some facts about Language seem to be built into our brains, or innate.

You now have enough information to answer General Problem Set 3. No one had to teach you to walk (despite what your parents might think!).

Kids start walking on their own. Walking is an instinct. Many parts of Language are built in, or innate. Much of Language is an ability hard-wired into our brains by our genes. Obviously, particular languages are not innate. So on the surface it seems crazy to claim that Language is an instinct. We call this facility Universal Grammar or UG. The argument in this section is that a productive system like the rules of Language probably has not been learned or acquired.

Infinite systems are in principle, given certain assumptions, both unlearnable and unacquirable. So it follows that syntax is built in. The argument presented here comes from an unpublished paper by Alec Marantz, but rests on an argument dating back to at least Chomsky.

Premise i: Syntax is a productive, recursive, and infinite system.
Premise ii: Rule-governed infinite systems are unlearnable.
Conclusion: Therefore syntax is an unlearnable system.

Since we have it, it follows that at least parts of syntax are innate. In the challenge problem sets at the end of this chapter you are invited to think very critically about the form of this proof. Challenge Problem Set 3 considers the possibility that premise i is false, but hopefully you will conclude that, despite the argument given in the problem set, the idea that Language is productive and infinite is correct.

Premise ii is more dubious, and is the topic of Challenge Problem Set 4. Here, in the main body of the text, I will give you the classic versions of the support for these premises, without criticizing them.

You are invited to be skeptical and critical of them when you do the Challenge Problem sets. Language is a productive system. That is, you can produce and understand sentences you have never heard before. For example, I can practically guarantee that you have never heard the following sentence: The magic of syntax is that it can generate forms that have never been produced before.

Another example of the productive quality lies in what is called recursion. It is possible to utter a sentence, to put that sentence inside another sentence, and then to put this larger sentence inside of yet another one: it is always possible to embed a sentence inside of a larger one. This means that Language is a productive, probably infinite, system. There are no limits on what we can talk about.

Other examples of the productivity of syntax can be seen in the fact that you can infinitely repeat adjectives and infinitely add coordinated nouns to a noun phrase. It follows that for every grammatical sentence of English, you can find a longer one based on one of the rules of recursion, adjective repetition, or coordination.
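The productivity claim can be sketched in a few lines of code. This is an illustrative toy, not a grammar of English; the embedding frame and seed sentence are invented for the example:

```python
# Toy illustration of recursion: one embedding frame applied
# repeatedly yields ever-longer grammatical sentences, so no finite
# list could contain them all.
def embed(sentence: str, depth: int) -> str:
    """Embed `sentence` inside a larger clause `depth` times."""
    for _ in range(depth):
        sentence = "Mary thinks that " + sentence
    return sentence

base = "it is raining"
for d in range(3):
    print(embed(base, d))

# Every level of embedding produces a strictly longer sentence.
assert len(embed(base, 4)) > len(embed(base, 3))
```

Because each application of the rule strictly increases sentence length, the output set has no longest member, which is the sense in which the system is infinite.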

This means that language is at least countably infinite. This premise is relatively uncontroversial (however, see the discussion in Challenge Problem Set 3). The idea that infinite systems are unlearnable requires more justification. Imagine that the task of a child is to determine the rules by which her language is constructed, by matching the sentences she hears with the situations they describe. This matching of situations with expressions is a kind of mathematical relation or function that maps sentences onto particular situations.

(Footnote 4: The task is actually several magnitudes more difficult than this, as the child has to work out the phonology, etc.)

It turns out that this task is at least very difficult, if not impossible. Assign each sentence some number; this number will represent the input to the rule. Similarly, we will assign each situation a number. The function or rule modeling language acquisition maps from the set of sentence numbers to the set of situation numbers.

The x value represents the sentences she hears. The y value is the number correctly associated with the situation. Most people will jump to the conclusion that the output will be 6 as well. But what if I were to tell you that, in the hypothetical situation I envision here, the correct answer is a quite different situation number? The rule that generated the table in 20 is actually something quite different from the obvious guess. Is this necessarily the case?

Unfortunately not: even if you add a sixth line, you have no way of being sure that you have the right function until you have heard all the possible inputs. The important information might be in the sixth line, but it might also come billions of sentences later.
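The point about the sixth line generalizes: any finite table of (x, y) pairs is consistent with more than one rule. A small sketch, where both hypothesis functions are invented for illustration:

```python
# Two candidate rules for the sentence -> situation mapping. They
# agree on every observed pair but disagree on unseen input, so the
# observed data alone cannot decide between them.
def hypothesis_a(x):
    return x  # the "obvious" rule

def hypothesis_b(x):
    # agrees with hypothesis_a for x = 1..5, then diverges
    return x + (x - 1) * (x - 2) * (x - 3) * (x - 4) * (x - 5)

observed = [1, 2, 3, 4, 5]
assert all(hypothesis_a(x) == hypothesis_b(x) for x in observed)
print(hypothesis_a(6), hypothesis_b(6))  # 6 vs. 126
```

For any finite set of observations, infinitely many such diverging functions can be constructed, which is exactly the learner's predicament.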

You have no way of knowing for sure if you have heard all the relevant data until you have heard them all. Imagine you were to hear one sentence every 10 seconds for your entire life. This is a much smaller number of sentences than infinity.
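The arithmetic behind this point is easy to check, assuming one sentence every 10 seconds around the clock for 100 years:

```python
# An upper bound on lifetime linguistic input: even under absurdly
# generous assumptions the total is finite.
seconds_per_year = 60 * 60 * 24 * 365   # ignoring leap years
lifetime_seconds = 100 * seconds_per_year
lifetime_sentences = lifetime_seconds // 10
print(lifetime_sentences)  # 315360000, i.e. about 3.2e8 sentences
```

A few hundred million sentences is a vanishingly small sample of an infinite set, no matter how the constants are adjusted.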

Despite this poverty of input, by the age of 5 most children are fairly confident with their use of complicated syntax. Productive systems are possibly unlearnable, because you never have enough input to be sure you have all the relevant facts.

This is called the logical problem of language acquisition. Generative grammar gets around this logical puzzle by claiming that the child acquiring English, Irish, or Yoruba has some help: Universal Grammar restricts the number of possible functions that map between situations and utterances, thus making language learnable. There are many other arguments that support the hypothesis that at least a certain amount of language is built in.

Start with the data in 22. A child might plausibly have heard sentences of these types (the underline represents the place where the question word who plausibly starts out, that is, either as the object or subject of the verb will question). The child has to draw a hypothesis about the distribution of the word that in English sentences. One conclusion consistent with this observed data is that the word that in English is optional: you can either have it or not.

Unfortunately, this conclusion is not accurate. Consider the fourth sentence in the paradigm in 22. This sentence is the same as 22c but with a that. What is important to note is that no one has ever taught you that 22d is ungrammatical. The logical hypothesis drawn on the basis of the data in 22a–c predicts sentence 22d to be grammatical.

There is nothing in the input a child hears that would lead them to the conclusion that 22d is ungrammatical, yet every English-speaking child knows it is. One solution to this conundrum is that we are born with the knowledge that sentences like 22d are ungrammatical.

Most parents raising a toddler will swear up and down that they are teaching their children to speak; that they actively engage in instructing their child in the proper form of the language. The claim that overt instruction by parents plays any role in language development is easily falsified. The evidence from the experimental language acquisition literature is very clear, as exchanges like the following show:

Adult: Where is that big piece of paper I gave you yesterday?
Child: I writed on it.

Child: Want other one spoon, Daddy.
Adult: You mean, you want the other spoon.
Child: Now give me other one spoon.

Attempts at overt correction are rare, and when they do occur, they fail. There is no disputing the fact that the that-trace phenomenon discussed above is not learnable. However, it is also a fact that it is not a universal property of all languages. Here is a challenge for those of you who like to do logic puzzles: if the that-trace effect is not learnable and thus must be biologically built in, how is it possible for a speaker of French or Irish to violate it? This is a hard problem, but there is a solution. It may become clearer below when we discuss parameters.

However, children still acquire language in the face of a complete lack of instruction. Perhaps one of the most convincing explanations for this is UG. In the problem set part of this chapter, you are asked to consider other possible explanations and evaluate which are the most convincing.

Statistical Probability or UG?

A common objection to the hypothesis of UG is that children might instead learn their language from the statistical probabilities of the input. But acceptability does not reduce to frequency. For example, English speakers rarely if ever produce sentences with seven embeddings (John said that Mary thinks that Susan believes that Matt exclaimed that Marian claimed that Art said that Andrew wondered if Gwen had lost her pen); yet speakers of English routinely agree these are acceptable.

The actual speech of adult speakers is riddled with errors due to all sorts of external factors. But children do not seem to assume that any of these errors, which they hear frequently, are part of the data that determines their grammars. There are also typological arguments for the existence of an innate language faculty.

All the languages of the world share certain properties (for example, they all have subjects and predicates; other examples will be seen throughout the rest of this book). These properties are called universals of Language. In addition to sharing many similar characteristics, recent research into Language acquisition has begun to show that there is a certain amount of consistency cross-linguistically in the way children acquire Language.

For example, children seem to go through the same stages and make the same kinds of mistakes when acquiring their language, no matter what their cultural background. Finally, there are a number of biological arguments in favor of UG. As noted above, Language seems to be both human-specific and pervasive across the species. All humans, unless they have some kind of physical impairment, seem to have Language as we know it.

This points towards it being a genetically endowed instinct. Additionally, research from neurolinguistics seems to point towards certain parts of the brain being linked to specific linguistic functions. With very few exceptions, linguists believe that some Language is innate. What is controversial is how much is innate and whether the innateness is specific to Language, or follows from more general innate cognitive functions.

We leave these questions unanswered here. You now have enough information to try General Problem Set 4. However, we are still left with the annoying problem that languages differ from one another.

This problem is what makes the study of syntax so interesting. It is also not an unsolvable one. One way in which languages differ is in terms of the words used in the language. These clearly have to be learned or memorized.

Other differences between languages, such as the fact that basic English word order is subject-verb-object (SVO), while the order of an Irish sentence is verb-subject-object (VSO) and the order of a Turkish sentence is subject-object-verb (SOV), must also be acquired.

The explanation for this kind of fact will be explored in chapter 6. Language variation thus reduces to learning the correct set of words and selecting from a predetermined set of options. Oversimplifying slightly, most languages put the order of elements in a sentence in one of the following six word orders: SVO, SOV, VSO, VOS, OVS, OSV. Two of these possible word orders are not part of UG.

None of the others are consistent with the data.


The child thus rejects all the other hypotheses. This leaves SVO, which is the correct order for English. So children acquiring English will choose to set the word order parameter at the innately available SVO setting. In his excellent book The Atoms of Language, Mark Baker inventories a set of possible parameters of language variation within the UG hypothesis.
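Parameter setting can be sketched as elimination over an innately given hypothesis space. This is a schematic toy, not a model from the text; the four candidate orders and the "precedence fact" encoding are assumptions of the example:

```python
# Acquisition as elimination: each observed precedence fact rules
# out the candidate word orders inconsistent with it.
UG_ORDERS = {"SVO", "SOV", "VSO", "VOS"}  # toy hypothesis space

def filter_orders(hypotheses, fact):
    """Keep orders in which element `a` precedes element `b`,
    e.g. fact = ('S', 'V') means the subject preceded the verb."""
    a, b = fact
    return {h for h in hypotheses if h.index(a) < h.index(b)}

hypotheses = set(UG_ORDERS)
hypotheses = filter_orders(hypotheses, ("S", "V"))  # drops VSO, VOS
hypotheses = filter_orders(hypotheses, ("V", "O"))  # drops SOV
print(hypotheses)  # {'SVO'}
```

The design point is that the child never has to invent SVO; she only has to eliminate the other innately available settings on the basis of positive evidence.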

This is an excellent and highly accessible treatment of parameters; I strongly recommend this book. You now have enough information to try General Problem Set 5 and Challenge Problem Set 5. In this book we are going to posit many hypotheses. How do we know what is a good hypothesis and what is a bad one? Chomsky proposed that we can evaluate how good theories of syntax are using what are called the levels of adequacy.

Chomsky claimed that there are three stages. If your theory only accounts for the data in a corpus (say, a series of printed texts) and nothing more, it is said to be an observationally adequate grammar. We also need to know what kinds of sentences are unacceptable, or ill-formed. A theory that accounts for both corpora and native speaker judgments about well-formedness is called a descriptively adequate grammar.

(Footnote 7: This is a matter of some debate. Derbyshire has claimed that the language Hixkaryana has object-initial order.)

On the surface this may seem to be all we need. Chomsky, however, has claimed that we can go one step better. He points out that a theory that also accounts for how children acquire their language is the best. He calls this an explanatorily adequate grammar.

The simple theory of parameters might get this label. Generative grammar strives towards explanatorily adequate grammars. You now have enough information to try General Problem Set 6 6. This principle is in part a guide to the way in which the rest of this book is structured. Chapters 2 and 3 examine the words these rules use, the form of the rules, and they structures they generate. Chapters 4 and 5 look at ways we can detail the structure of the trees formed by the PSRs.

When faced with more complicated data, we revise our hypotheses, and this is precisely what we do. We develop a special refined kind of PSR known as an X-bar rule. X-bar rules are still phrase structure rules, but they offer a more sophisticated way of looking at trees.

In Part 3 we consider even more data and refine our hypothesis again, this time adding a new rule type. Part 4 of the book refines these proposals even further.

With each step we build upon our initial hypothesis, just as the scientific method tells us to. Why should we bother learning Phrase Structure Rules? By proposing a simple hypothesis early on in the initial chapters, and then refining and revising it, building new ideas onto old ones, you not only get an understanding of the motivations for and inner workings of our theoretical premises, but you get practice in working like a real linguist.

Professional linguists, like all scientists, work from a set of simple hypotheses and revise them in light of predictions made by the hypotheses.


These early versions represent the foundations out of which the rest of the theory has been built. This is how science works. The theory presented here is descriptive and rule-based. Further, it assumes that a certain amount of grammar is built in and the rest is acquired.

Ideas introduced in this chapter:

Syntax: The level of linguistic organization that mediates between sounds and meaning, where words are organized into phrases and sentences.

Language: The psychological ability of humans to produce and understand a particular language. Also called the Human Language Capacity or i-language. This is the object of study in this book.

language: A language like English or French. These are the particular instances of the human Language. The data source we use to examine Language is language. Also called e-language.

Generative Grammar: A theory of linguistics in which grammar is viewed as a cognitive faculty. Language is generated by a set of rules or procedures.

The Scientific Method: Observe some data, make generalizations about that data, draw a hypothesis, test the hypothesis against more data.

Falsifiability: To prove that a hypothesis is correct, you have to look for the data that would prove it wrong. The prediction that might prove a hypothesis wrong is said to be falsifiable.

Grammar: Not what you learned in school. This is the set of rules that generate a language.

Anaphor: A word that ends in -self or -selves (a better definition will be given in chapter 5).

Antecedent: The noun an anaphor refers to.

#: The hash mark, pound, or number sign is used to mark semantically strange, but syntactically well-formed, sentences.

Gender: Masculine vs. feminine vs. neuter. Does not have to be identical to the actual sex of the referent. For example, a dog might be female, but we can refer to it with the neuter pronoun it.

Number: The quantity of individuals or things described by a noun. English distinguishes singular from plural. Other languages have more or less complicated number systems.

Person: The perspective of the participants in the conversation. The speaker or speakers (I, me, we, us) are called the first person. The listener(s) (you) are called the second person. Anyone else (those not involved in the conversation: he, him, she, her, it, they, them) is referred to as the third person.

Case: The form a noun takes depending upon its position in the sentence. We discuss this more in a later chapter.

Nominative: The form of a noun in subject position (I, you, he, she, it, we, they).

Accusative: The form of a noun in object position (me, you, him, her, it, us, them).

Corpus: A collection of real-world language data.

Native speaker judgments: Information about the subconscious knowledge of a language. This information is tapped by means of the grammaticality judgment task.

Semantic judgment: A judgment about the meaning of a sentence, often relying on our knowledge of the context in which the sentence was uttered.

Syntactic judgment: A judgment about the form or structure of a sentence.

Learning: The gathering of conscious knowledge (like linguistics or chemistry).

Acquisition: The gathering of subconscious information (like language).

Innate: Hard-wired or built in; an instinct.

Recursion: The ability to embed structures iteratively inside one another.

The Logical Problem of Language Acquisition: The proof that an infinite system like human language cannot be learned on the basis of observed data (an argument for UG).

Underdetermination of the Data: The idea that we know things about our language that we could not have possibly learned (an argument for UG).

Universal: A property found in all the languages of the world.

Observationally adequate grammar: A grammar that accounts for observed real-world data (such as corpora).

Descriptively adequate grammar: A grammar that accounts for observed real-world data and native speaker judgments.

Explanatorily adequate grammar: A grammar that accounts for observed real-world data and native speaker judgments, and that offers an explanation for the facts of language acquisition.

Further Reading

Barsky, Robert. Noam Chomsky: A Life of Dissent. MIT Press.
Chomsky, Noam. Aspects of the Theory of Syntax. MIT Press.
Jackendoff, Ray. Patterns in the Mind. Basic Books.

Pinker, Steven. The Language Instinct. Harper Perennial.


Sampson, Geoffrey. Educating Eve: The Language Instinct Debate. Cassell.
Uriagereka, Juan. Rhyme and Reason: An Introduction to Minimalist Syntax. MIT Press.

Problem Sets

1. What are these uses? Why do we maintain prescriptive rules in our society?

2. For each sentence, indicate (i) whether the unacceptability reflects a prescriptive or a descriptive judgment, and (ii) for all descriptive judgments, whether the ungrammaticality has to do with syntax or semantics or both.

One- or two-word answers are appropriate. If you are not a native speaker of English, enlist the help of someone who is.

a) You are taller than me.
b) My red is refrigerator.
c) Who do you think that saw Bill?
d) My friends wanted to quickly leave the party.
e) Bunnies carrots eat.

3. Learning is conscious; acquisition is automatic and subconscious. (Note that acquired things are not necessarily innate; they are just subconsciously obtained.) Other than language, are there other things we acquire? What other things do we learn?

What about walking? Or reading? Or sexual identity? An important point in answering this question is to talk about what kind of evidence is necessary to distinguish between learning and acquisition.

4. How might you account for the existence of universals (see definition above) across languages?

Can you think of an argument that might be raised against innateness? Alternately, could you come up with a hypothetical experiment that could disprove innateness? What would such an experiment have to show? Remember that cross-linguistic variation (differences between languages) is not an argument against innateness or UG, because UG contains parameters that allow minute variations.

Attribute a level of adequacy to them: state whether the grammars they developed are observationally adequate, descriptively adequate, or explanatorily adequate. Explain why you assigned the level of adequacy that you did. He has been looking both at corpora (rap music, recorded snatches of speech) and working with adult native speakers.

She has been working at the national archives of Wales in Cardiff. He is also conducting a longitudinal study of some two-year-old children learning the language to test his hypotheses.

Students are encouraged to complete the other problem sets before trying the Challenge Sets. Challenge Sets can vary in level from interesting puzzles to downright impossible conundrums. Try your best! We came to the following conclusion about their distribution: an anaphor must agree in person, gender, and number with its antecedent. However, there is much more to say about the distribution of these nouns; in fact, chapter 5 of this book is entirely devoted to the question.
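The agreement part of the statement is mechanical enough to sketch in code. The feature table below is illustrative only, and the check deliberately ignores the structural conditions that chapter 5 adds:

```python
# A minimal check of person/gender/number agreement between an
# anaphor and its antecedent. Feature values are illustrative.
FEATURES = {
    "himself":    {"person": 3, "number": "sg", "gender": "masc"},
    "herself":    {"person": 3, "number": "sg", "gender": "fem"},
    "themselves": {"person": 3, "number": "pl", "gender": None},
    "Geordi":     {"person": 3, "number": "sg", "gender": "masc"},
    "Betsy":      {"person": 3, "number": "sg", "gender": "fem"},
}

def agrees(anaphor, antecedent):
    a, b = FEATURES[anaphor], FEATURES[antecedent]
    return (a["person"] == b["person"]
            and a["number"] == b["number"]
            and (a["gender"] is None or a["gender"] == b["gender"]))

print(agrees("himself", "Geordi"))  # True
print(agrees("herself", "Geordi"))  # False
```

Note what this sketch does not capture: it says nothing about where the anaphor and antecedent may sit in the sentence, which is exactly what the problem set below asks you to work out.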

Part 1: Consider the data below. Can you make an addition to the above statement that explains the distribution of anaphors and antecedents in this very limited data?

a) Geordi sang to himself.
b) Betsy loves herself in blue leather.

Part 2: Now consider the following sentences. Do these sentences obey your revised generalization? Why or why not? Is there something special about the antecedents that forces an exception here, or can you modify your generalization to fit these cases?

Consider the following acceptable sentence. Are all anaphors allowed in sentences like (a)? Where is the antecedent for yourself? Is this a counter-example to our rule? Why or why not? What is special about the sentence in (a)?

This was premise i of the discussion in section 4. The idea is straightforward and at least intuitively correct: Intuitively this leads to an infinitely large number of possible sentences. Pullum and Scholz have shown that one formal version of this intuitive idea is either circular or a contradiction. Here is the structure of the traditional argument paraphrased and simplified from the version in Pullum and Scholz.

This proof is cast in such a way that the number of sentences is counted by comparing the number of words in each sentence. If for any extremely high number of words we can find a well-formed sentence that is even longer, then the set of sentences is infinite. (Footnote 8: Thanks to Ahmad Lotfi for suggesting this part of the question.)

First, some terminology: let E stand for the set of well-formed sentences; if a sentence x is an element of this set, we write E(x). The number of words in a sentence is expressed by the variable n. Next, the formal argument. Premise 1: There is at least one well-formed sentence that has more than zero words in it.

Premise 2: There is an operation in the PSRs such that any sentence may be embedded in another with more words in it. That means that for any sentence in the language, there is another, longer sentence: if some expression has the length n, then some other well-formed sentence has a size greater than n. Conclusion: Therefore, for every positive integer n, there are well-formed sentences with a length longer than n (i.e., the set of well-formed sentences is infinite).
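Written out in predicate logic (with E(x) for "x is a well-formed sentence" and n(x) for its length in words), the argument has this shape:

```latex
\begin{align*}
\text{Premise 1: } & \exists x\,[E(x) \land n(x) > 0]\\
\text{Premise 2: } & \forall x\,[E(x) \rightarrow \exists y\,(E(y) \land n(y) > n(x))]\\
\text{Conclusion: } & \forall k\,\exists x\,[E(x) \land n(x) > k]
\end{align*}
```

The questions below probe what happens to this derivation under each assumption about the cardinality of E.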

Sets come in two kinds. There are finite sets, which have a bounded number of members, and there are also infinite sets, which have an endless possible number of members (e.g., the set of positive integers).

Question 1: Assume that E, the set of well-formed sentences, is finite. This is a contradiction of one of the two premises given above.

Which one? Why is it a contradiction?

Question 2: Assume that E, the set of well-formed sentences, is infinite. This leads to a circularity in the argument. What is the circularity (i.e., where does the argument assume what it is trying to prove)?

Question 3: If the logical argument is either contradictory or circular, what does that make of our claim that the number of sentences possible in a language is infinite? Is it totally wrong? What does the proof given immediately above really prove?

Question 4: Given that E can be neither a finite nor an infinite set, is there any way we might recast the premises, terminology, or conclusion in order not to have a circular argument and still capture the intuitive insight of the claim?

Try to be creative. Important notes: Try to work out the answers for yourself. While given the extreme view in section 4. We consider only those sentences that are frequent in the input when constructing our rules. To what extent are a , b or c compatible with the hypothesis of Universal Grammar. If a , b or c turned out to be true, would this mean that there was no innate grammar? Explain your answer. How might you experimentally or observationally distinguish between a , b , c and the infinite input hypothesis of 4.

What kinds of evidence would you need to tell them apart? When people speak, they make errors: they switch words around, they mispronounce things, they use the wrong word, they stop mid-sentence without completing what they are saying, and so on. Nevertheless, children seem to be able to ignore these errors and still come up with the right set of rules. Is this fact compatible with the infinite-input hypothesis, or with (a), (b), or (c)?

For example, in both English and French, pronouns are required. Sentences without them are usually ungrammatical.


Within much morpheme-based morphological theory, the two views are mixed in unsystematic ways, so a writer may refer to "the morpheme plural" and "the morpheme -s" in the same sentence.

Lexeme-based morphology

Lexeme-based morphology usually takes what is called an item-and-process approach. Instead of analyzing a word form as a set of morphemes arranged in sequence, a word form is said to be the result of applying rules that alter a word form or stem in order to produce a new one. An inflectional rule takes a stem, changes it as is required by the rule, and outputs a word form; a derivational rule takes a stem, changes it as per its own requirements, and outputs a derived stem; a compounding rule takes word forms, and similarly outputs a compound stem.
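The three rule types can be sketched as functions from stems to forms. The specific rules below (English regular past tense, agentive -er, simple concatenative compounding) are illustrative choices, not claims from the text:

```python
# Item-and-process, schematically: rules are operations applied to
# stems, not concatenations of stored morpheme objects.
def inflect_past(stem):
    """Inflectional rule: stem -> word form (regular past tense)."""
    return stem + "ed"

def derive_agent(stem):
    """Derivational rule: stem -> derived stem (verb -> agent noun)."""
    return stem + "er"

def compound(stem_a, stem_b):
    """Compounding rule: two word forms -> one compound stem."""
    return stem_a + stem_b

print(inflect_past("walk"))        # walked
print(derive_agent("teach"))       # teacher
print(compound("black", "bird"))   # blackbird
```

On this view the "-ed" is not a stored object in its own right; it exists only as the change that the inflectional rule performs.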

Word-based morphology

Word-based morphology is usually a word-and-paradigm approach. The theory takes paradigms as a central notion. Instead of stating rules to combine morphemes into word forms or to generate word forms from stems, word-based morphology states generalizations that hold between the forms of inflectional paradigms. The major point behind this approach is that many such generalizations are hard to state with either of the other approaches.

Word-and-paradigm approaches are also well-suited to capturing purely morphological phenomena, such as morphomes. Examples to show the effectiveness of word-based approaches are usually drawn from fusional languages, where a given "piece" of a word, which a morpheme-based theory would call an inflectional morpheme, corresponds to a combination of grammatical categories, for example, "third-person plural".

Morpheme-based theories usually have no problems with this situation since one says that a given morpheme has two categories. Item-and-process theories, on the other hand, often break down in cases like these because they all too often assume that there will be two separate rules here, one for third person, and the other for plural, but the distinction between them turns out to be artificial.

The approaches treat these as whole words that are related to each other by analogical rules. Words can be categorized based on the pattern they fit into. This applies both to existing words and to new ones. Application of a pattern different from the one that has been used historically can give rise to a new word, such as older replacing elder (where older follows the normal pattern of adjectival superlatives) and cows replacing kine (where cows fits the regular pattern of plural formation).

Morphological typology

In the 19th century, philologists devised a now classic classification of languages according to their morphology. Some languages are isolating, and have little to no morphology; others are agglutinative, whose words tend to have many easily separable morphemes; others yet are inflectional or fusional, because their inflectional morphemes are "fused" together.

That leads to one bound morpheme conveying multiple pieces of information. A standard example of an isolating language is Chinese. An agglutinative language is Turkish. Latin and Greek are prototypical inflectional or fusional languages.
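The agglutinative pattern can be made concrete with a standard Turkish textbook example (ev 'house'); the segmentation and simplified glosses below are the usual ones, given here only as an illustration:

```python
# Agglutination: each separable morpheme carries one piece of
# grammatical information, and the morphemes concatenate
# transparently into the word form.
word = "evlerimden"  # 'from my houses'
morphemes = [("ev", "house"), ("ler", "plural"),
             ("im", "my"), ("den", "from")]

assert "".join(m for m, _ in morphemes) == word
for morpheme, gloss in morphemes:
    print(f"{morpheme}\t{gloss}")
```

Contrast this with a fusional Latin ending, where a single suffix bundles case and number together and no such clean segmentation is possible.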
