
Purplekeyboard t1_j10udbc wrote

The problem I have with anarchism is that it seems to be more of a wish fulfillment fantasy than any sort of reasonable political philosophy.

The obvious response to anarchism goes along the lines of, "What happens to your anarchist society when the tanks come rolling over the border and you get invaded?" Anarchists either get unrealistic and say, "We could fight off a well-trained, powerful modern military with sticks and hunting rifles", or they admit they have no solution to this and say, "But maybe someday".

When the primary criticism of your proposed system of government is that it is impossible to achieve, and your response is, "yes, it is impossible, but maybe some day it will become possible", I have to wonder what the point is of even talking about it.

So anarchists end up claiming their system is impossible to achieve (today), while also claiming it is an ideal to reach for. Why not focus instead on whatever actually is possible? If we can't have anarchy because the powerful will take advantage of it and seize control, then what can we have which is both reasonable and in keeping with the values that anarchists have?

5

Purplekeyboard t1_j0ar5ea wrote

>Can it be measured? Can it be detected in a measurable, objective way?

Yes, we can measure whether someone (or some AI) knows things, can analyze them, take in new information about them, change their mind, and so on. We can observe them, put them in situations designed to prompt those behaviors, and watch to see whether they actually do them.

An AI language model sits there and does nothing until given some words, and then adds more words to the end that go with them. This is very different from what an AGI would do, or what a person would do, and the difference is easily recognizable and measurable.

>This is the problem with the "argumentum ad qualia"; qualia is simply asserted as this non-measurable thing that "you just gotta feel, man", and then is supported by these assertions of what AI is not and never can be. And how do they back up those assertions? By saying it all reduces to qualia, of course. And they conveniently hide behind the non-falsifiable shell that their belief in qualia provides. It's exhausting.

I wasn't talking about qualia at all here. You misunderstand what I was saying. I was talking about the difference between an AGI and an AI language model. An AGI wouldn't need to have any qualia at all.

1

Purplekeyboard t1_j0aorp8 wrote

>Rather, if you have a trained model that captures representations that are generalizable and representative of the real world, then I think it'd be reasonable to say that those representations are meaningful and that the model holds an understanding of the real world. So, the extent to which GPT-3 has an understanding of the real world is the extent to which the underlying representations learned from pure text data correspond the real world patterns.

GPT-3 contains an understanding of the world, or at least the text world. So does Wikipedia, so does a dictionary. The contents of the dictionary are meaningful. But nobody would say that the dictionary understands the world.

I think that's the key point here. AI language models are text predictors which functionally contain a model of the world; they hold a vast amount of information, which can make them very good at writing text. But we want to make sure not to anthropomorphize them, which tends to happen when people use them as chatbots. In a chatbot conversation, you are not talking to anything like a conscious being, but to a character which the language model is creating.
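To make that concrete, here's a minimal sketch of the idea, using GPT-2 from the transformers library as a stand-in (GPT-3's weights aren't public, and the persona name "Ava" is made up for illustration). The "chatbot" is nothing but a persona written into the prompt; the model just keeps appending likely words:

```python
# Sketch: a "chatbot" is just a persona written into the prompt of a text predictor.
# GPT-2 stands in for GPT-3 here; the persona "Ava" is invented for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = (
    "The following is a conversation with Ava, a helpful AI assistant.\n"
    "Human: Is it safe to swim during a thunderstorm?\n"
    "Ava:"
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
# Whatever "Ava" says is just the model continuing the text; remove the
# persona line and there is no Ava to talk to.
```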

By the way, minor point:

>If you fed a human nonsense sensory input since birth, they'd produce an "understanding" of that nonsense sensory data as well.

I think if you fed a human nonsense information since birth, the person would withdraw from everything and become catatonic. Bombarding them with random sensory experiences which didn't match their actions would result in them carrying out no actions at all.

2

Purplekeyboard t1_j0amzem wrote

I'm referring to two things here. One is having an experience of understanding the world, which of course GPT-3 lacks as it is not having any experience at all. The other is the state of knowing that you know something and can analyze it, look at it from different angles, change your mind about it given new information, and so on.

You could have an AGI machine which had no actual experience, no qualia, nobody really home, but which still understood things as per my second definition above. Today's AI language models have lots of information contained within them, but they can only use this information to complete prompts, to add words to the end of a sequence of words you give them. They have no memory of what they've done, no ability to look at themselves, no viewpoints. There is understanding of the world contained within their model in a sense, but THEY don't understand anything, because there is no them at all; there is no operator there which can do anything but add more words to the end of the word chain.

2

Purplekeyboard t1_j0aayw0 wrote

One thing that impresses me about GPT-3 (the best of the language models I've been able to use) is that it is functionally able to synthesize information it has about the world to produce conclusions that aren't in its training material.

I've used a chat bot prompt (and now ChatGPT) to have a conversation with GPT-3 regarding whether it is dangerous for a person to be upstairs in a house if there is a great white shark in the basement. GPT-3, speaking as a chat partner, told me that it is not dangerous because sharks can't climb stairs.

ChatGPT insisted that it was highly unlikely that a great white shark would be in a basement, and after I asked it what would happen if someone filled the basement with water and put the shark there, it once again said that sharks lack the ability to move from the basement of a house to the upstairs.

This is not information that is in its training material; there are no conversations on the internet or anywhere else about sharks being in basements or being unable to climb stairs. This is a novel situation, one that has likely not been discussed anywhere before, and GPT-3 can take what it does know about sharks and use it to conclude that I am safe upstairs in my house from the shark in the basement.

So we've managed to create intelligence (text world intelligence) without awareness.
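For anyone who wants to poke at this themselves, here's a rough sketch of the same kind of probe through the OpenAI API. The model name and client style (openai>=1.0) are assumptions; my conversations above were through a chatbot prompt and the ChatGPT interface, not this exact code.

```python
# Rough sketch of the shark-in-the-basement probe via the OpenAI API.
# Model name and client style are assumptions; the original conversations
# were held through a chatbot prompt and the ChatGPT interface.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Suppose someone filled the basement of my house with water "
                "and put a great white shark in it. Am I in danger if I stay "
                "upstairs?"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```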

5

Purplekeyboard t1_j0a7dwb wrote

>You can't really explain those phenomena without hypothesizing that LLMs model deeper relational principles underlying the statistics of the data -- which is not necessarily much different from "understanding".
>
>Sure, sure, it won't have the exact sensori-motor-affordance associations with language; and we have to go further for grounding; but I am not sure why we should be drawing a hard line to "understanding" because some of these things are missing.

AI language models have a large amount of information that is baked into them, but they clearly cannot understand any of it in the way that a person does.

You could create a fictional language, call it Mungo, and use an algorithm to churn out tens of thousands of nonsense words: Fritox, purdlip, orp, nunta, bip. Then write another highly complex algorithm to combine these nonsense words into text, and use it to churn out millions of pages of Mungo. You could make some words much more likely to appear than others, and give it hundreds of thousands of rules to follow regarding which words are likely to follow other words. (You'd want an algorithm to write all those rules as well.)

Then take your millions of pages of text in Mungo and train GPT-3 on it. GPT-3 would learn Mungo well enough that it could then churn out large amounts of text that would be very similar to your text. It might reproduce your text so well that you couldn't tell the difference between your pages and the ones GPT-3 came up with.

But it would all be nonsense. And from the perspective of GPT-3, there would be little or no difference between what it was doing producing Mungo text and producing English text. It just knows that certain words tend to follow other words in a highly complex pattern.

So GPT-3 can define democracy, and it can also tell you that zorbot mo woosh woshony (a common phrase in Mungo), but these both mean exactly the same thing to GPT-3.

There are vast amounts of information baked into GPT-3 and other large language models, and you can call it "understanding" if you want, but there can't be anything there which actually understands the world. GPT-3 only knows the text world; it only knows what words tend to follow what other words.
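Incidentally, the Mungo construction above is simple enough to sketch. This is just an illustration with made-up numbers and toy rules, nothing like the scale or complexity you'd want for an actual training corpus:

```python
# Rough sketch of the "Mungo" construction: invent nonsense words, then
# generate text from arbitrary word-to-word transition rules.
# All names, sizes, and probabilities here are made up for illustration.
import random
import string

random.seed(0)

def make_word():
    return "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 8)))

# Tens of thousands of nonsense words.
vocab = [make_word() for _ in range(50_000)]

# Toy "rules" for which words tend to follow which, standing in for the
# hundreds of thousands of hand-written rules described above.
successors = {w: random.sample(vocab, k=20) for w in random.sample(vocab, k=5_000)}

def generate(n_words=100):
    word = random.choice(vocab)
    out = [word]
    for _ in range(n_words - 1):
        word = random.choice(successors.get(word, vocab))
        out.append(word)
    return " ".join(out)

print(generate(30))  # statistically patterned, entirely meaningless text
```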

11

Purplekeyboard t1_izih5hd wrote

>descriptive adjectives attend too broadly.

If this means that a descriptive word in a prompt affects the whole image and not just the phrase it is part of, everyone who uses Stable Diffusion knows this. If your prompt is "girl, chair, sitting, computer, library, earrings, necklace, blonde hair, hat", and you modify that to specify "red chair", you're likely to also get a red hat, or the girl will now be wearing a red shirt, or various other parts of the image may turn red.

If you change the prompt from "library" to "outdoors" and add the word "snow", it will likely be snowing, but the earrings or a pendant on the necklace may now also be in the shape of a snowflake.

This is how stable diffusion works.
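If anyone wants to see this for themselves, here's a minimal sketch with the diffusers library. The model id and prompts are illustrative, not taken from the paper; it just reruns the same seed with "chair" versus "red chair" so you can compare where the red ends up:

```python
# Sketch: rerun the same seed with "chair" vs "red chair" and compare.
# Model id and prompts are illustrative; assumes a GPU for float16.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = "girl, chair, sitting, computer, library, earrings, necklace, blonde hair, hat"
edit = base.replace("chair", "red chair")

for name, prompt in [("base", base), ("red_chair", edit)]:
    generator = torch.Generator("cuda").manual_seed(1234)  # same seed for both runs
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"{name}.png")
# With the edited prompt, the red often shows up on the hat or shirt too,
# not just the chair -- the adjective's influence spreads across the prompt.
```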

−1

Purplekeyboard t1_iybzpsh wrote

In the crypto world, everything is massively manipulated and people on the outside can only guess at how things really work.

Theoretically, everyone buys and sells Tethers, and traders trust Tether so much that any time the price starts to drop below $1, clever people rush in to buy, knowing they will later be able to sell for $1.

Actually? Tether itself is probably buying many of those Tethers, in large quantities, to keep the price at $1, and if it ever stopped, the price would collapse.

But it's impossible to know. The crypto world is a crooked casino, and you know everything is fixed but you're never sure exactly how. Is the blackjack dealer stacking the deck against you? Are the slot machine payoff tables completely different from what they say? Are the big winners secretly working for the casino? You never really know.

2

Purplekeyboard t1_iy5xabk wrote

>In Mexico men die almost 8 more times than women.

There are two possibilities here. One, Mexican men die eight times, while Mexican women only die once. Which is nice for the men, being resurrected like that all the time.

Two, Mexican men's death rate is 8 times what it is for women. Assuming the average Mexican woman lives to be 80, I believe this means the average life expectancy for Mexican men is 10 years old. This must be rough for women, having to choose between marrying a boy who is under the age of 10, or marrying a corpse.

5

Purplekeyboard t1_iy4g104 wrote

For once, someone should post one of these graphs for applying to restaurant jobs.

It would go something like this. Went on Indeed, put in 50 applications, got 50 phone calls. Ignored 40 of them, answered 10, got 10 interviews. Blew off 8 of them, went to the other 2, got 2 job offers, accepted them both. Only showed up the first day for one of them.

3