
sam__izdat t1_jceowxm wrote

Ridiculously unfounded claim based on a just plain idiotic premise. Children don't learn language by cramming petabytes of text documents to statistically infer the most plausible next word in a sentence, nor do they accept input with arbitrary syntactic rules. Right or wrong, the minimalist program and Merge offer a plausible partial explanation for a recent explosion of material culture -- which did not happen gradually or across multiple species -- consistent with what we can observe in real human beings. GPT, on the other hand, is not a plausible explanation for anything in the natural world, and has basically nothing inherently to do with human language. He's not wrong that it's a bulldozer. It will just as happily accommodate a made-up grammar that has nothing in common with any that a person could ever use, as it would English or Japanese.

> Chomsky et al. 2023 tilt at an imagined version of these models, while ignoring the fact that the real ones so aptly capture syntax, a success Chomsky and others have persistently claimed was impossible.

Exactly the opposite is true. Transformers are general-purpose computers that will gobble up almost anything you can throw at them. His objection was to the "defect" that they will capture any arbitrary syntax, which means they aren't interesting or helpful to cognitive scientists -- just like a backhoe doesn't offer any insight into how people, in biological terms, are able to lift heavy objects. What he said was impossible, when approached about it decades ago, was to do these things without resorting to brute force in the absence of an actual theoretical framework and computational model for how language works in the brain. That statement is just as correct today as it was in the 1950s, and the rigorous theory of "let's keep cramming in data and stirring the big ol' pot of linear algebra until candy comes out" doesn't do anything to change that picture.

> In particular, we may view each model or set of modeling assumptions as a possible hypothesis about how the mind may work. Testing how well a model matches humanlike behavior then provides a scientific test of that model’s assumptions.

I see. So, by that reasoning, Deep Blue affirmed our hypothesis that Kasparov got really good at chess by evaluating 200,000,000 positions per second. Nonsense on stilts.

> Hosseini et al. 2022 present evidence that models trained on developmentally-plausible amounts of data already capture human neural responses to language in the brain

If you define "developmentally plausible" as "100 million tokens" and "capture human neural responses" as "fasten together a linear regression model that makes statistically significant correlations with some coarse fMRI activity."

> After decades of privilege and prominence in linguistics, Noam Chomsky’s approach to the science of language is experiencing a remarkable downfall. The story is, in part, a cautionary tale about what happens when an academic field isolates itself from what should be complementary endeavours.

Sound advice for overconfident, doe-eyed futurists guzzling the Silicon Valley Kool-Aid and assuming they can dispense with common sense and just wing it with anthropology, biology and cog sci.

edit - and, by the way, it's actually kind of impressive for a person to miss this point so completely:

> Generate ten other sentences like [colorless green ideas]

> Purple fluffy clouds dream wildly

> Blue glittery unicorns jump excitedly

> Orange sparkly penguins waddle energetically

> Red shimmering elephants trumpet melodiously

> Yellow dazzling lions roar majestically

> Green glittery monkeys swing energetically

> Pink sparkly giraffes walk gracefully

> Black shiny kangaroos hop playfully

> White glittery bears cuddle affectionately

> Brown shimmering rabbits hop merrily.

Literally none of these cut-and-paste color-texture-animal-verb-adverb responses satisfies the query, and to get even a little bit closer with GPT, you have to explicitly beg for examples of contradictions and category errors.


Ilforte t1_jdpkqlz wrote

>If you define "developmentally plausible" as "100 million tokens"

Why would that be a disingenuous definition?

In general, your defense of generative linguistics is very weak. It's just invective and strawmen, and it reeks of desperation.

> overconfident doe-eyed futurists guzzling the silicon valley kool aid

Come on now.


sam__izdat t1_jdps8rk wrote

>Why would that be a disingenuous definition?

Doesn't matter if it's disingenuous. What it's implying is ridiculous. It would be more surprising if the linear regression model didn't work at all. The fact that it can correlate fMRI data better than random doesn't mean you've replicated how language works in the brain, let alone how it's acquired.
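To spell out what that kind of analysis amounts to, here's a toy sketch with entirely made-up data -- not anyone's actual pipeline, and the dimensions and regression setup are my own illustrative choices: regress model-derived features onto per-"voxel" responses, then score the fit by correlating held-out predictions with the measured responses. Any informative feature set will come out above chance.

```python
# Toy sketch (synthetic data) of an encoding-model analysis: ridge-regress
# features onto voxel responses, score by held-out prediction correlation.
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 50, 10

# Stand-in for language-model embeddings of the stimuli.
X = rng.normal(size=(n_stimuli, n_features))

# Synthetic "voxel" responses: a linear function of the features
# plus heavy noise, mimicking coarse fMRI signal.
W = rng.normal(size=(n_features, n_voxels))
Y = X @ W + 5.0 * rng.normal(size=(n_stimuli, n_voxels))

# Split, fit ridge regression in closed form, predict held-out responses.
train, test = slice(0, 150), slice(150, 200)
lam = 10.0
beta = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_features),
                       X[train].T @ Y[train])
Y_pred = X[test] @ beta

# Per-voxel Pearson correlation between predicted and observed responses.
def corr(a, b):
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))

scores = corr(Y_pred, Y[test])
print(scores.mean())  # comfortably above zero, by construction
```

Getting a positive held-out correlation here tells you the features carry *some* linearly decodable signal about the responses -- nothing more. It is not evidence that the feature-generating model works the way the brain does.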

> In general, your defense of generative linguistics is very weak. It's just invective and strawmen, and it reeks of desperation.

I don't have any horse in the race or anything to be desperate about. It's just an astonishingly stupid proposition.

I should say, I am not qualified to defend or refute generative linguistics (though that clearly was no obstacle for the author), and I don't know anything about it. I do feel qualified (because I can read and check sources) to dismiss this embarrassing pile of nonsense, though, as it's so plainly wrong that it doesn't take an expert to recognize its bombastic claims as pseudoscience -- and I'm talking about Piantadosi here and not his references, which, for all I know, are serious research misrepresented by a dunce. I'm not in academia and I don't feel the need to be any more diplomatic about this than he was toward linguists in his pdf-format blog post.


jcansdale2 t1_jdc5lte wrote

> Literally none of these cut-and-paste color-texture-animal-verb-adverb responses satisfies the query, and to get even a little bit closer with GPT, you have to explicitly beg for examples of contradictions and category errors.

What do you think of this exchange?

Q: Why is the sentence “colorless green ideas sleep furiously” interesting?

The sentence "Colorless green ideas sleep furiously" is interesting because it was constructed by Noam Chomsky, a renowned linguist and philosopher, as an example of a sentence that is grammatically correct but semantically nonsensical. It appears in his 1957 book "Syntactic Structures" to illustrate the distinction between syntax and semantics.

Syntax refers to the rules governing how words are combined to form sentences, whereas semantics deals with the meaning of words and sentences. Chomsky's sentence demonstrates that a sentence can follow the rules of grammar and be well-formed syntactically, but still not convey any coherent meaning or make logical sense.

This example has become famous in the fields of linguistics, cognitive science, and philosophy of language, as it highlights the importance of studying both syntax and semantics in order to understand human language fully.

Q: Generate ten grammatically correct but semantically nonsensical sentences.

  1. The thunderous apples danced gracefully on the invisible horizon.
  2. Enthusiastic mountains prefer to whisper their secrets to the shy sun.
  3. Ancient mirrors routinely question the singing aroma of blue thoughts.
  4. Radiant forks somersault over melancholic rainbows, embracing the breeze.
  5. In the darkness, time devours the laughter of square-shaped oceans.
  6. Forgotten umbrellas dream of the symphony that once painted the wind.
  7. In the midnight circus, cheerful numbers juggle the taste of silence.
  8. The eloquent moon unravels a tapestry of bittersweet equations beneath the sea.
  9. Solitary clouds orchestrate the ballet of translucent stones in the sky.
  10. Emerald snowflakes pirouette amidst the curious rhythm of enchanted shadows.

Q: Sort these in order of nonsensicalness.

Sorting these sentences in order of nonsensicalness can be subjective, as different people may perceive various aspects of these sentences differently. However, here's one possible ordering, starting with the least nonsensical to the most nonsensical:

  1. In the darkness, time devours the laughter of square-shaped oceans.
  2. Solitary clouds orchestrate the ballet of translucent stones in the sky.
  3. Forgotten umbrellas dream of the symphony that once painted the wind.
  4. The eloquent moon unravels a tapestry of bittersweet equations beneath the sea.
  5. Radiant forks somersault over melancholic rainbows, embracing the breeze.
  6. Emerald snowflakes pirouette amidst the curious rhythm of enchanted shadows.
  7. The thunderous apples danced gracefully on the invisible horizon.
  8. Enthusiastic mountains prefer to whisper their secrets to the shy sun.
  9. Ancient mirrors routinely question the singing aroma of blue thoughts.
  10. In the midnight circus, cheerful numbers juggle the taste of silence.

Remember that this ordering is subjective and open to interpretation. Others might find a different arrangement more fitting.

Q: Explain your reasoning.

My reasoning for the ordering of these sentences is based on the degree of disconnection between the words and concepts in each sentence. I've considered the relationship between subjects, verbs, and objects, as well as the overall coherence of the images or ideas evoked.

  1. In the darkness, time devours the laughter of square-shaped oceans.
  • The metaphorical use of "time devours" and "darkness" is somewhat comprehensible, but the concept of "square-shaped oceans" introduces a high level of nonsense.
  2. Solitary clouds orchestrate the ballet of translucent stones in the sky.
  • Although "solitary clouds" and "sky" are related, the idea of clouds orchestrating a ballet of stones is nonsensical.
  3. Forgotten umbrellas dream of the symphony that once painted the wind.
  • The sentence creates a somewhat poetic image, but the concepts of umbrellas dreaming and a symphony painting the wind are illogical.
  4. The eloquent moon unravels a tapestry of bittersweet equations beneath the sea.
  • The combination of "eloquent moon" and "tapestry of bittersweet equations" is imaginative but nonsensical, especially when placed "beneath the sea."
  5. Radiant forks somersault over melancholic rainbows, embracing the breeze.
  • The sentence combines unrelated concepts like forks, rainbows, and somersaults, creating a highly nonsensical image.
  6. Emerald snowflakes pirouette amidst the curious rhythm of enchanted shadows.
  • The combination of "emerald snowflakes" and "pirouette" evokes a sense of movement and color, but the overall sentence is difficult to make sense of.
  7. The thunderous apples danced gracefully on the invisible horizon.
  • This sentence combines unrelated ideas like "thunderous apples" and "invisible horizon," making it difficult to visualize or understand.
  8. Enthusiastic mountains prefer to whisper their secrets to the shy sun.
  • The anthropomorphism of mountains and sun as "enthusiastic" and "shy" creates a whimsical but nonsensical scenario.
  9. Ancient mirrors routinely question the singing aroma of blue thoughts.
  • The sentence combines unrelated and incompatible concepts like "mirrors," "singing aroma," and "blue thoughts," making it highly nonsensical.
  10. In the midnight circus, cheerful numbers juggle the taste of silence.
  • This sentence is the most nonsensical because it combines abstract ideas like "cheerful numbers" with unrelated physical concepts like "juggling" and "taste of silence."

Keep in mind that this reasoning is subjective, and other interpretations might lead to a different ordering of nonsensicalness.

Edit: fixed formatting


sam__izdat t1_jdef39d wrote

> What do you think of this exchange?

It's somewhat closer to a reasonable response than anything I could get out of it.


jcansdale2 t1_jdh57gy wrote

Were you using ChatGPT 3.5 or 4? 3.5 only seems to combine unlikely and unrelated elements, but 4 combines unrelated and incompatible concepts. I think 4 does much better at this task!
