Grammar isn't merely part of language (2016) (brown.edu)
124 points by mrkgnao on Feb 10, 2017 | 68 comments


>> the feud between Noam Chomsky and Daniel Everett

That "feud" is as much a feud as the "controversy" over climate change. Daniel Everett is very obviously a complete and utter crank, who has made absolutely ridiculous claims about Piraha the language and Piraha the people (e.g. that they can't learn basic arithmetic skills, like 1 + 1), to which he claims to be the sole authority.

To give an account of his "feud" with Chomsky, Chomsky claims that recursion (in the sense of embedding sentences into sentences) is the defining characteristic of human language that sets it apart from other animal languages. Everett claims that the Piraha language doesn't display recursion and therefore Chomsky's claim is wrong.

Stop for a moment and consider this. It's like claiming that only Europeans have pale skin, then finding one European population with brown skin and concluding that it's therefore not true that only Europeans have pale skin. What Everett is claiming is exactly that wrong.

Everett has only risen to prominence because he's a big old troll, and there are lots of people in cognitive science and linguistics who are very frustrated with the inability of both fields to make any progress on natural language, despite everyone's best efforts- and who have somehow singled out Chomsky as the culprit, because he had the audacity to hold on to a theory that hasn't really been comprehensively disputed yet.

Far from a feud, this is just a case of shoot-the-messenger. Shoot him with sour grapes that is.


The other side isn't exactly a bunch of careful scientists either [0].

Other people have visited the Pirahã and done experiments with them. It is true that they do not understand counting, for example [1][2], though I'm sure they could learn it if they had to.

Other linguists have also independently claimed that other languages lack recursive embedding, for example Riau Indonesian, so the claim about Pirahã is not isolated or crazy a priori.

The Chomskyan side of the argument has also been characterized by moving the goalposts and formal incoherence. See for example [3].

Basically, amidst a huge storm of verbiage and anger and inflated claims from both sides, all of which is about personalities and definitions and not at all about science, there are a few mildly interesting findings about Pirahã and language.

[0] https://en.wikipedia.org/wiki/Marc_Hauser
[1] http://science.sciencemag.org/content/306/5695/496.full
[2] http://lchc.ucsd.edu/mca/Mail/xmcamail.2014-12.dir/pdf2Yb7JA...
[3] http://www.sciencedirect.com/science/article/pii/S0388000114...


I'll quote from your last reference:

As I have tried to show, those issues, whilst important and interesting in themselves, have little to do with recursion as this concept has been understood within mathematical logic, the connotation Chomsky has always employed, it seems. As such, there is a bit of a disconnect between what Chomsky is talking about when talking about recursion and what other linguists are talking about when talking about recursion. As a result, rather incompatible claims are being put forward in the literature; indeed, to defend that what is universal in linguistics is the recursively-defined operations of a computational system (clearly Chomsky's take) is very different from the claim that all languages must exhibit self-embedded sentences if recursion is to be accepted as such a central concept (Everett is probably the most prominent example of this line of thought).

Like I say- that paper basically claims that Chomsky's version of recursion in language has changed very little if at all, and that Everett is grossly misrepresenting it (or its consistency).

I would add: willfully so.


>> Other people have visited the Pirahã and done experiments with them. It is true that they do not understand counting, for example [1][2],

Those are not "other people". The first reference is a paper where Everett is one of the authors and the other is a paper from a collaborator of Everett: the author even studied the Piraha and their arithmetic skills while living with the Piraha along with the Everetts. Show me a paper on the Piraha language that has nothing to do with Everett and that duplicates Everett's findings.

>> The Chomskyan side of the argument has also been characterized by moving the goalposts and formal incoherence. See for example [3].

I don't think that reference means what you think it means. Actually I think it means exactly the opposite:

What I can say, what I have said, with most certainty is what recursion is supposed to stand for within the theory Chomsky has constructed—with remarkable consistency, I should add.

Which is what I've also heard before from critics of Chomsky in general (for example, I believe Alex Clark has said similar things, but I might be wrong). Chomsky himself has been remarkably consistent, to the point he sounds like a broken record, regarding what recursion in language means.

Whether "his side" (other linguists of his school) has been more or less consistent, I don't know but in any case I'm more interested in the fact that Daniel Everett seems to be a complete and utter sleazebag who is unashamedly taking advantage of a peoples who can't really speak for themselves (because he's standing in the way!) to further some obscure personal agenda.

>> Other linguists have also independently claimed that other languages lack recursive embedding, for example Riau Indonesian, so the claim about Pirahã is not isolated or crazy a priori.

I hadn't heard of Riau Indonesian, so I looked it up on Wikipedia and I could not find anything about it lacking recursive structure. Instead it "is considered by linguists to have one of the least complex grammars among the languages of the world,[citation needed] apart from creoles, possessing neither noun declensions, temporal distinctions, subject/object distinctions, nor singular/plural distinction."


> I hadn't heard of Riau Indonesian

Googling "hierarchical syntax riau" seems to pull up the relevant stuff, if you're interested.


Googling that with quotation marks brings up only your own comment- for a moment there I thought you were making a joke about recursion.

I'll have a look at the stuff that comes up without quotes, thanks.


"To give an account of his "feud" with Chomsky, Chomsky claims that recursion (in the sense of embedding sentences into sentences) is the defining characteristic of human language that sets it apart from other animal languages. Everett claims that the Piraha language doesn't display recursion and therefore Chomsky's claim is wrong."

I realize that your summary of Chomsky's theory is short, pithy, and misrepresents Chomsky, but your argument here is incorrect.

You say,

* X is the defining characteristic of Y.

* Z (a purported element of Y) does not have X.

The only possible conclusion is that Z is not an element of Y: the Piraha do not speak a human language.


Sure, that's Everett's (wild, unsubstantiable, absurd) claim: that the Piraha, a human people, speak a nonhuman language.

And that affects the claim that recursion is the defining characteristic of human language - how, again?


I'm a huge fan of Mumford, but I think he's stretching a bit when he says that natural images have grammar like language has grammar. The production processes for the two phenomena are enormously different. A natural image is formed when a collection of objects is illuminated by incoming light, and the resulting image is projected onto the retina. The human brain is not involved at all in this process (leaving aside nitpicking about how humans may have shaped the environment). In contrast a natural language sentence is produced when an idea occurs inside the brain, and then various linguistic production processes transform the idea into a serial form, as text, speech, sign language, etc. The latter process involves constraints, capabilities, and eccentricities of the human brain at every stage.

Maybe you could argue that human brains perceive images using grammar-like structures.


>> I'm a huge fan of Mumford, but I think he's stretching a bit when he says that natural images have grammar like language has grammar.

That seems to be what he's saying- that the process of vision in living things is actually a grammar.

He doesn't have to be entirely correct for his intuition to be useful, though. We don't have to assume there is a pre-existing grammar in order to fit a bunch of data to one (that's what happens in grammar induction: there's no such assumption when, say, someone models DNA sequences as a grammar, etc).

After all, a grammar is just a representation. The question is how good that representation is- in theory as well as in practice. In theory, it's a good representation if it helps us answer questions about the process we're trying to model. In practice it's good if it allows us to reproduce the process, especially automatically, with computers, and predict the behaviour of agents that employ this process etc etc.


I highly recommend you read Arnheim’s 1969 book Visual Thinking, https://amzn.com/0520013786/


Human brains also perceive speech or writing or what have you using grammar-like structures. In the sense brought up by the article, a grammar is just a structured logical representation of some physical phenomenon, like sound or imagery.

In other words, grammar is how we perceive things, and it makes sense that it can be generalized instead of only being applicable to a specific sensory input.

The specific point about parsing visual input is much more obvious when looking at things like graphical charts, user interfaces, etc.; there's clearly a grammar of some form involved in, say, determining which button to press on my phone's on-screen keyboard to create the letter 'b', and further involved when determining what a "button" is or what a "keyboard" is or what a "letter" is. Hell, that seems like the same process that lets me figure out what a "screen" is and what a "phone" is. Eventually we go from "type the letter 'b'" to "move your right thumbtip to this position" (and even that can be broken down further).


You're treating input/output as the same.

We receive speech coming from a source and parse it using a grammar. One could imagine it being a similar process for perceiving the images captured by the retina.

For output, when a human paints an image they are painting from an image visualized inside their mental canvas, just like we realize thoughts produced within our minds as speech.


> when a human paints an image they are painting from an image visualized inside their mental canvas

Yes, but images produced by humans are a tiny fraction of images processed by the eye. But every written or spoken sentence was ultimately created by a human brain.

That's why it seems like a big stretch to claim there is a 'universal grammar' involved in visual processing, if you believe that grammar is primarily a way for brains to encode information for communication purposes...


> Yes, but images produced by humans are a tiny fraction of images processed by the eye.

Processed by the eye, yes, but that rises to 100% for images processed by the brain. The brain appropriates images by imparting its processing on the lower level visual cortex. Perception is an active process.


This seems... Wrong. Consider, kids process images and sounds long before they are capable of sentences.


Vision and language are processed by distinct brain areas, which mature at a different pace.

That doesn't rule out the possibility of grammar-like processing in visual areas.


Apologies for neglecting this over the weekend.

This is true. I did not mean to say that just because I think it is wrong, that it is. However, the claim seemed to be that the images experienced by the brain are fully synthesized by the brain. Which seemed off.

Again, just because it seems off to me does not mean it is wrong. Not my field, and whatnot. I can even see something to be said for visual processing going in stages such that the stage that you are cognizant of is effectively operating on images constructed by you. That seems to be a different claim, though.


Couldn't it simply take them that long to understand/model the visual input deeply enough to interact with it in complete sentences?


Apologies for neglecting this over the weekend.

The claim was "Processed by the eye, yes, but that rises to 100% for images processed by the brain." That is, that the images processed by the brain were 100% constructed by the brain.

The implication I got was that the images you perceive are entirely of your own devising. This seems off to me. Certainly anyone that is blind but still able to visualize a room is using constructed visualizations. But, that is a different thing than someone that is able to see.

This is different from written words, which are 100% devised by another being. Maybe assembled by a machine, but the words and their meanings are learned and come from taught meanings, not from raw processed experiences.


I'm not entirely following what you mean, but that's OK. My hunch is our differences lie in this concept of "taught meaning". I don't think meanings are taught, in any traditional sense. I think they are absorbed, acquired, and synthesized by the incredible pattern matching of the brain, operating off of direct, perceptual experience. Of course, these experiences include things like reflection, reading a textbook, having a conversation, watching a movie, daydreaming etc.

When one reads a piece of text, it's being interpreted through the complex mental models of the world and layers of meaning that have been built up in the individual's brain over the years.

I realize we are now squarely off on a tangent :)


>A natural image is formed when a collection of objects is illuminated by incoming light, and the resulting image is projected onto the retina. The human brain is not involved at all in this process

No, the retina is a complex processor, and so is the optic nerve. Brain scientists nowadays say the retina is an extension of the brain.


> A natural image is formed when a collection of objects is illuminated by incoming light, and the resulting image is projected onto the retina.

But that's sort of the point no? The incoming information is not an unstructured white noise of photons striking our retina. There is a sort of structure to the information that can be modelled. One such model of this structured information is as a "grammar tree" (really just a tree, we're coders here.) The example in the article is that the arm occludes the teepee, which occludes the background trees. Any visual system needs to break this hierarchy down.


Recursion was recently shown to enable generalization in neural programming architectures [1], but from a critical inquiry p.o.v. we note that recursion requires a mechanism for maintaining context, and look for the existence (or lack) of such a mechanism in animal brains.

[1]: https://news.ycombinator.com/item?id=13551298
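
As an aside, the "maintaining context" point is easy to make concrete in code. Here's a minimal Python sketch (my own illustration, not from the linked paper): a recursive tree walk quietly uses the call stack to hold its pending context, and the equivalent iterative version has to carry that context around in an explicit stack.

  def depth_recursive(tree):
      """Depth of a nested-list 'tree'; the call stack stores the pending context."""
      if not isinstance(tree, list):
          return 0
      return 1 + max((depth_recursive(child) for child in tree), default=0)

  def depth_iterative(tree):
      """Same computation, but the context lives in an explicit stack of (node, depth) pairs."""
      best, stack = 0, [(tree, 0)]
      while stack:
          node, d = stack.pop()
          if isinstance(node, list):
              best = max(best, d + 1)
              for child in node:
                  stack.append((child, d + 1))
      return best

  print(depth_recursive([["a", ["b"]], "c"]))  # 3
  print(depth_iterative([["a", ["b"]], "c"]))  # 3

Either way, some store of "what we were in the middle of doing" is required, and that is exactly the kind of mechanism one would go looking for in an animal brain.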


Why do linguists like Chomsky assume that grammar is a completely radical break from animal brain functioning? I think it is mainly because of a deeply rooted assumption in Western religion and much (but not all) of Western philosophy that human beings have free, moral, non-material souls that animals lack.


Chomsky's reasoning is that the arbitrary complexity (recursive structure) of human language implies some sort of low level computational engine to do the relevant computations. I don't have an opinion either way. We can see that LSTMs can emulate this kind of logic, but they also make mistakes. Also, I'm not sure that human reasoning is as logical as it might seem. E.g. I read somewhere (lost the reference) that the earliest languages may have lacked the ability to arbitrarily nest clauses. So maybe humans only emulate logical thinking.


> We can see that LSTMs can emulate this kind of logic, but they also make mistakes.

Humans can also make mistakes when processing language. We're still better than LSTMs, but I'm not sure we can claim a qualitative difference.

Furthermore, even though we can process sentences with very deep embedding like "The rat the cat the dog bit chased escaped", my intuition is that we are not using our normal language processing systems for that. When I read that sentence, I just fail to process it and then I invoke my logic systems to try to determine the structure and decode it, in a way that feels totally different from processing a normal sentence (I'm not understanding it in real time, in a natural way, but rather solving a small puzzle). So I personally don't find the Chomskyan arguments based on that kind of corner cases very convincing.


>> "The rat the cat the dog bit chased escaped"

That is indeed a contrived example of recursion, but recursion (in the sense of embedding) can be much simpler and easier to parse. For example:

"John, my friend from high school, who married your cousin, Mary, is coming over for dinner".

This sort of embedding is what makes human language infinite in scope- you can keep embedding sub-sentences forever, and so you can produce new utterances forever.

This ability to infinitely extend and recombine the meaning of utterances is what gives human language its expressive power, and what is absent from animal languages, so far as we know.
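
To make the "infinite in scope" point concrete, here's a toy Python sketch (the mini-grammar is my own invention for this comment, not a serious grammar of English): a finite set of rules with a single self-embedding rule already generates an unbounded set of sentences.

  import random

  GRAMMAR = {
      "S":  [["NP", "VP", "."]],
      "NP": [["John"], ["Mary"], ["NP", ", who", "VP", ","]],   # the self-embedding rule
      "VP": [["is coming over for dinner"], ["married", "NP"], ["knows", "NP"]],
  }

  def generate(symbol="S", depth=4):
      """Randomly expand a symbol; below the depth limit, take the first (non-recursive) option."""
      if symbol not in GRAMMAR:
          return symbol
      options = GRAMMAR[symbol]
      rule = random.choice(options) if depth > 0 else options[0]
      return " ".join(generate(s, depth - 1) for s in rule)

  for _ in range(3):
      print(generate())

Raise the depth and it will keep producing sentences like "John , who married Mary , who knows John , is coming over for dinner ." for as long as you like; the rule set never changes, only how many times the embedding rule gets applied.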


Examples like that are parsable because they are similar to what we would call, in programming, tail recursion (i.e., recursion that doesn't really need recursion). It's true that you can embed an infinite number of subsentences ("John, my friend from high school, who married your cousin, Mary, who had an affair with the bartender, Jack, who hated his sister, Lisa, who was a fan of Lady Gaga, is coming over for dinner") but you only need two "stack frames", one to remember John and the other for the rest.

The middle part is basically equivalent to saying "Mary had an affair with the bartender, Jack. Jack hated his sister, Lisa. Lisa was a fan of Lady Gaga". My intuition is that it's parsed basically as separate sentences. Once you finish one of them you can just forget it, you only need to remember John (as there is more information about him in the end). Sentences where you need to remember more elements (i.e., you actually need unbounded recursion) become unparsable in real time as my previous example.
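
For what it's worth, that "stack frames" intuition can be turned into a crude counter (my own toy model, not a real parser and not a claim about how brains actually do it): track how many noun phrases are still waiting for their verb at each point in the sentence.

  def max_pending(words, nouns, verbs):
      """Rough proxy for memory load: peak number of nouns still awaiting a verb."""
      pending, worst = 0, 0
      for w in words:
          if w in nouns:
              pending += 1
              worst = max(worst, pending)
          elif w in verbs:
              pending -= 1   # a verb discharges the most recently opened noun phrase
      return worst

  nouns = {"rat", "cat", "dog", "John", "Mary", "Jack"}
  verbs = {"bit", "chased", "escaped", "married", "hated"}

  # Centre embedding: all the nouns pile up before any verb arrives.
  print(max_pending("the rat the cat the dog bit chased escaped".split(), nouns, verbs))          # 3

  # Right-branching embedding: each clause closes before the next one opens.
  print(max_pending("John married Mary who hated Jack who chased the dog".split(), nouns, verbs))  # 1

The peak load stays flat no matter how long the right-branching sentence gets, while it grows with every extra level of centre embedding.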

Of course, I don't have scientific evidence to back the things I'm saying, it's just intuition, but the same can be said of the Chomskyan theories.


Are there limits to the recursion?


> arbitrary complexity (recursive structure) of human language

Interesting, because I was reading this paper yesterday, where they argue that recursivity is the key to generalization power.

"Making Neural Programming Architectures Generalize via Recursion" - https://openreview.net/forum?id=BkbY4psgg&noteId=BkbY4psgg


I share your suspicions.

> non-material souls

I think this notion of "materiality" is an ancient chain letter that we can discard, or at least leave aside as we consider alternatives that can still support the notional 'self' & individual 'soul' without requiring a dualistic reality.

What the ancients understood as "matter" we can consider as 'localized coherence' (membranes, knots, vortices, etc.), and the "spirit" as fields, in a non-dualistic universe.

So speaking of Western religions, reconsider the meaning of the thought that "the kingdom is in you", and in the spirit of a holistic human metaphysics, note that whether in the "East" or in the "West", the key to transcendence appears to be shedding the illusion of the centrality of the localized "body" as the totality of the self.


What annoys me about linguistics is that higher level languages are quite poor for basic human communication. Non verbal communication is maybe the most important channel.


Actually human languages are much better suited for all kinds of communication: to the best of our knowledge, most animal systems of communication don't allow reference to either the speaker (i.e. "I"), the hearer (i.e. "you"), or a distant third party (i.e. "it"). Even elaborate systems like the bee dance are highly specialized in the type of reference that is possible. In human speech it is possible to talk about things or people that are not here, things that don't exist but may, things that have existed, things that will exist, things that won't exist. Hypotheses, conjectures, and so on, are not part of any other species' way of communicating, as far as I know.

Furthermore, all known human languages allow you to attribute utterances to another, earlier speaker, what is known as reported speech (often, though not always as Pirahã demonstrates, through an embedded clause "He said that X").

Human non-verbal communication (which is EMPHATICALLY NOT to be confused with sign languages), on the other hand, is typically not very expressive. Apart from the admittedly very important, but nevertheless quite specialized domain of human emotion (conveyed by facial expression for instance), it is more or less inadequate for much of what we want to communicate.


I agree, yet non-verbal is extremely important for in-situ, in the now, personal and interpersonal well being. N-Verbal can even make Verbal communication moot since one can say anything depending on one's emotional state. Also, some things are extremely hard to describe by words yet can be conveyed with a silence, a look in the eye, a gesture.

I really agree about the abstract possibilities of speech, but I am a bit annoyed at the fact that, in one's existence, the non verbal cues are so important, yet so often boldly ignored, and almost never taught.


You know who definitely agrees with you? Linguists who specialize in conversational analysis, multimodal discourse analysis and interactional sociolinguistics.

It makes me kind of sad when people assume that "Chomsky's work = the whole of linguistics".


Aight, my failing for not digging deeper. I'm on so many things, I couldn't find out about them.


It clearly can't be. The genetic SNP delta between humans and other primates is too small to encode cortical changes complex enough for something like the whole of neural symbolic representation.


Even simple cellular automata can produce drastically different behavior given slight tweaks to simple rules, so I don't think that a small genetic delta rules out a large structural change.
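
A quick sketch of that point (my own, using elementary cellular automata): rules 110 and 102 differ in exactly one entry of their eight-entry lookup table, yet rule 110 is famously capable of universal computation while rule 102 just draws a regular Sierpinski-like pattern.

  def step(cells, rule):
      """One step of an elementary cellular automaton on a ring."""
      n = len(cells)
      return [
          (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
          for i in range(n)
      ]

  def run(rule, width=64, steps=24):
      cells = [0] * width
      cells[width // 2] = 1                      # single seed cell
      for _ in range(steps):
          print("".join("#" if c else "." for c in cells))
          cells = step(cells, rule)

  run(110)   # complex, aperiodic growth
  print()
  run(102)   # one bit of the rule table flipped: a simple, regular pattern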


Not every linguist agrees with Chomsky.


Name an animal that uses grammar, or something like it.


This blog post gives examples.


This article doesn't seem to make a strong case that a hierarchical tree is the best model for language (or motor control, or image recognition). Just because you can model something as a tree doesn't mean that's the most parsimonious or effective model. The relative lack of success thus far of symbolic parse tree based techniques in NLP compared to techniques grounded in other models should be a strong hint that trees are not the best map of the territory.


I do not work on NLP, but my understanding was that, for purely syntactic work, standard parse-tree-based techniques had been quite successful in NLP; and that it is only for semantic work that symbolic representations begin to show weaknesses. Since we often care about the meanings of words, this is a pretty strong limitation; still, it suggests that standard grammar-and-parse-tree approaches do capture something significant about how human languages work.

Is this inaccurate?


I don't work in NLP either, but from my understanding, the boundary between syntax and semantics is never as clean as one might imagine, and each language draws the boundary differently, so in general the utility of just looking at syntax can greatly vary.

Another issue is that many real life sentences have more than one possible parse, and we use context and semantics to disambiguate, e.g. how do you parse 'fruit flies like a banana'.


...vs. "time flies like an arrow."

If you try to parse natural language with a strong distinction between syntax and semantics, you get a lot of ambiguous parses.
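
Here's a minimal sketch of that ambiguity using NLTK, with a toy grammar I wrote just to cover this one sentence (so treat it as an illustration, not a claim about any standard grammar): a purely syntactic parser returns both readings and has no basis for preferring one.

  import nltk

  grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> N | Adj N | Det N
  VP  -> V NP | V PP
  PP  -> P NP
  Det -> 'a'
  Adj -> 'fruit'
  N   -> 'fruit' | 'flies' | 'banana'
  V   -> 'flies' | 'like'
  P   -> 'like'
  """)

  parser = nltk.ChartParser(grammar)
  for tree in parser.parse("fruit flies like a banana".split()):
      print(tree)   # one parse where "fruit flies" is the subject, one where "fruit" flies like a banana

Choosing between the two parses takes world knowledge about fruit flies and bananas, which is exactly where the syntax/semantics boundary stops being clean.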

Most current successful NLP work uses statistical models. (Don't mention those to Chomsky.)


The problem with what you're saying (that grammars are not necessarily the best representations of complex hierarchical structures, with which I agree) is that anything that can represent a complex hierarchical structure as well as a grammar must necessarily be equivalent in computational power to that grammar- and unfortunately, we know of no computational process that cannot be expressed as a grammar at least in principle.

"Unfortunately" because this means that any representation of such a process you may want to pick over a gramar, that is not a grammar, will either have to be reducible to a grammar, or fail to capture the expressive power of the modelled process.

So, sure, a "tree" may not be the best way to model natural language. But we don't have the theoretical tools to figure out what can be a better representation than that.


Is this not almost tautologically true? Will not any collection of related information have the potential to be represented by some sort of hierarchical structure?


It can be represented as a hierarchical structure as long as the relationships are simple. In any of his examples if you added a number of extra relationships to complicate what is there, the hierarchical structure becomes a generic graph.


While we like to think of Grammar as a perfect logical structure, in practice, "the brain" (or some higher level structure) processes things in a highly probabilistic fashion. This is how we are able to understand imperfect information (and the mention of the animal "grammar" is on point - a pet can learn and find their way around a house easily)

It's probably something more complex than an LSTM structure, but the point stands.


The problem with representing processes with grammars is that you can represent pretty much any process with a grammar, right up to the level of Turing machines and the Lambda calculus etc. The question is, always: how useful is it to do this in practice?

With language, we assume that there is some sort of underlying structure that can best be modelled as a grammar (well, some of us do). In simple terms, we assume that natural language already has a grammar so that we can hope to eventually reconstruct it somehow- either by hand, or by automated inference etc.

Unfortunately, in practice every effort to do that sort of thing undertaken since Chomsky's Syntactic Structures was published in 1957 has been met with failure, and even theoretically the outlook is bleak (see Gold's result and a ton of bibliography on inductive inference before and after it).

In short, the problem is not that it's hard to convince ourselves that there are processes in nature that are best modelled using hierarchical representations, like grammars. The problem is that even armed with those representations we've so far proven incapable, with all our science and technology, of accurately using such representations to fully model those processes.

Basically, our current situation regarding the modelling of complex hierarchical processes is like being given the key to heaven, but no map to the damned gates.


> Basically, our current situation regarding the modelling of complex hierarchical processes is like being given the key to heaven, but no map to the damned gates.

More accurately, you have been given a key that is claimed to be the key to heaven, but no one has yet found a gate that the given key opens, yet you insist on asserting that it is the key to "the damned gates".


Ouch.

Look, no. Grammars look like "the key to the gates" because of the equivalence between grammars, languages, automata, etc. You can write up grammars for fully Turing-complete languages, or indeed context-free languages that allow you to declare Turing-complete automata.

You can create grammars that display infinite recursion with minimal effort:

  A --> ε
  A --> Aa
  a --> <whatever you please>
So it's not like someone (cough, Chomsky) woke up one nice day and thought "blimey, I'll tell the world that grammars are a powerful tool for modelling hierarchical processes". It's that they are.
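
Taking just the first two rules at face value (and treating a as a terminal), a few lines of Python are enough to walk the derivations and show the unbounded language they generate:

  def derive(n):
      """Apply A -> Aa exactly n times, then finish with A -> ε."""
      s = "A"
      for _ in range(n):
          s = s.replace("A", "Aa", 1)
      return s.replace("A", "")        # the final A -> ε step

  print([derive(n) for n in range(5)])   # ['', 'a', 'aa', 'aaa', 'aaaa'] ... and so on without bound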

We have used grammars in practice to model complex hierarchical processes- except it's only those processes that we already know how to model, because we came up with them ourselves, like the aforementioned Turing-complete ones.

The problems begin when we try to fit a process we hardly understand to a grammar. That is not a limitation of the tool itself. It's a limitation of our ability to use it.


To me, and I think many other outsiders, putting a lot of emphasis on the equivalence of grammars, languages and automata looks like mathematical naivety. I don't say this to be rude but because you (and Chomsky) claim to be able to interpret the implications of these mathematical results, but I don't think you are doing so correctly. Grammars look like a human (mathematical) invention and not some deep mathematical structure, and these results appear shallow. In the broader context, lots of mechanisms are able to do Turing-complete computation.

This doesn't just apply to grammars. There is a huge array of formalisms (e.g. logics, type systems) out there and most just look like the result of someone saying "what if I did this?".


>> I don't say this to be rude but because you (and Chomsky) claim to be able to interpret the implications of these mathematical results, but I don't think you are doing so correctly.

It's alright- if I'm being naive, I'm being naive.

But- what am I missing? You're saying we're doing it wrong- how? For me the intuition that infinite generative ability flows naturally from unbounded recursion, like an egg from a hen's bottom, is kind of obvious. Is it naive? I guess it's empirical, for me at least.

Also, btw, I was introduced to the idea of language equivalence through Hopcroft and Ullman, so from the point of view of computer science, where it's been very useful, in practical terms. I guess if you're coming from a mathematical or theoretical physics background it might sound a bit silly, but it's allowed us to make a lot of progress, for instance to create a few thousand different architectures and languages... but maybe I shouldn't be bringing that up as progress...

Anyway, I don't know- how would you interpret the observation correctly? Where are we going wrong?


Yeah no,

The problem is that language is not simply a hierarchical representation of reality. Human language involves a number of unique specific qualities - enough that I can't list all of them off the top of my head but just for example, symbolic representation.

Sure, an animal represents the world in its brain. But a lot of animals don't get that a symbol in the world can represent another symbol in the world. I know a common example is pointing - only the most representation-adept animals "get" that a person can use their finger to indicate something else.

And this, symbolic representation, is just one aspect of human language broadly.


By recursion Chomsky means recursion of the same component, which is potentially infinite. Hierarchical vs. recursive is like the difference between regular expressions and context-free grammars.

Of course, in practice, most people struggle with just a few levels of recursion. So I'm not sure I agree with Chomsky - but that's his theory.
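
The textbook way to see that difference (my own illustration, nothing to do with Piraha data): matching brackets nested to arbitrary depth needs recursion or a counter, while any fixed regular expression only handles nesting up to some hard-wired depth.

  import re

  def balanced(s):
      """Context-free-style check: brackets balanced to any depth."""
      depth = 0
      for ch in s:
          depth += 1 if ch == "(" else -1
          if depth < 0:
              return False
      return depth == 0

  # A regular expression hard-wired for nesting depth <= 3:
  shallow = re.compile(r"^(\((\((\(\))?\))?\))?$")

  for s in ["()", "(())", "((()))", "(((())))"]:
      print(s, balanced(s), bool(shallow.match(s)))

The counting check keeps working at depth four and beyond; the regular expression gives up exactly where its hard-wired depth runs out. Which fits the point above: in practice humans also give up after a few levels.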

BTW the grammar-related Piraha ideology of immediate experience is appealing, as theories and abstractions can obscure what's actually happening, and reminds me of DNA's (Douglas Adams's) Ruler of the Universe.


Thanks a lot for posting this! It made me very happy. I hold the same opinion as him and for many years I've searched the literature looking for people with similar views!


> the feud between Noam Chomsky and Daniel Everett

"Don't sleep, there are snakes" which recounts the experience of Everett among the Pirahas people is a fantastic book.


Pretty sure Chomsky won that 'fight' hands down.


Pretty sure he didn't. Also, Everett's book isn't only about the controversy with Chomsky, it's about his own experience in the jungle and his coming of age. It's a great book even if you absolutely love Chomsky.


He did too. Everett tries to make the point that because Piraha doesn't use recursion (which we only know by his own account, and nobody else's, since nobody else knows Piraha as well as he and the Piraha do, also by his account) then Chomsky must be wrong.

How is Chomsky wrong if Piraha doesn't have recursion? According to Everett (as always), Chomsky's position that recursion is the distinguishing characteristic of human language must necessarily mean that all human languages have recursion.

This is exactly like saying that, because it only snows in winter, it's not winter if it's not snowing.

For Chomsky to be right, it suffices for a single human language to display recursion. If even one human language displays recursion, then humans in general can learn any language that displays recursion- because we know well that human infants learn the language of the linguistic communities they're reared in (and therefore any human can learn any human language). For instance, even a Piraha baby raised in a Brazilian family would learn to speak Portuguese, and Portuguese has recursive embedding.

Everett of course claims that Piraha, somehow magically unlike any other human being in the world, are incapable of learning any other language than Piraha. He also claims that they were unable to learn simple arithmetic, beyond 1 + 1 = 2, despite his, um, best efforts.

In fact, all human languages except Piraha, and only by Everett's account, display recursion. Which makes Everett's claim about Piraha so hard to accept. The fact that he remains the only (self-professed) authority on Piraha makes it even harder to take him seriously.

Generally, it's not so much that Chomsky has won anything here. Everett is so clearly a total troll, and his books the printed equivalent of clickbait, that it's ridiculous to even claim there is anything like a debate to be had. It's like "debating" a climate denialist.


Hasn't Chomsky's idea been fairly comprehensively disproven? I seem to recall a survey of language features that concluded there were no clusters of grammatical constructions of the form his theory predicts.


No, not at all. I think people might be surprised at just how basic the structures he uses as evidence are.

The Piraha study was a study with poor research practices (subsequent researchers have found findings that conflict with Everett's), and always ignored one massive problem, as told by a joke:

"Professor Chomsky! There's a language in Brazil that doesn't seem to have recursion or large numerals!"

Chomsky doesn't turn around.

"Can they learn Portuguese?"

"... Yes."

That's a lot of what his theories are-- that for a set of features in human language including types of structure like recursion, every Homo sapiens has the ability to use them in language.


Not a very good joke. Everett doesn't argue that the Pirahas are a different species; they're human so of course they can do what humans do!

What he's saying (and I think demonstrated pretty well) is that not ALL languages are based on the same fundamental structures: to show this, one counterexample is enough.


I don't know anywhere that Chomsky (or anyone) claimed that ALL languages are based on the same fundamental structures, but Wikipedia talks about it, and has links to articles showing that Everett's own examples include recursion. See here: https://en.wikipedia.org/wiki/Pirah%C3%A3_language#Recent_co...


The paragraph on Wikipedia has links to a grand total of 3 articles, one of which by Everett himself, and the other two by the same trio of people. So it's disingenuous to write that Wikipedia "has links to articles".

Wikipedia points to a controversy started by 3 people, to which Everett responded, to which the same 3 people responded again.

It's difficult to go any further since said articles are behind a paywall.


Actually, Everett claims that the Piraha cannot learn Portuguese. He also claims that they were unable to learn simple arithmetic despite his -cough cough- best efforts.

He might as well be claiming that the Piraha are a different species. Note: an inferior species.

He's a gigantic asshole really.



