Purplekeyboard
Purplekeyboard t1_j0zwx7t wrote
Reply to comment by croatoan182 in The Office: Office Romances by Season at Dunder Mifflin [OC] by bmoorewastaken
You mean Bob Vance, Vance Refrigeration?
Purplekeyboard t1_j0qc0c8 wrote
Surprising to find out that Florida receives the same amount of snow as much of the Rocky Mountains.
Or, maybe the graph shouldn't use the color tan to mean "I don't know".
Purplekeyboard t1_j0ar5ea wrote
Reply to comment by respeckKnuckles in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
>Can it be measured? Can it be detected in a measurable, objective way?
Yes, we can measure whether someone (or some AI) knows things, can analyze them, take in new information about them, change their mind, and so on. We can observe them and put them in situations which would result in them doing those things and watch to see if they do them.
An AI language model sits there and does nothing until it is given some words, and then it adds more words to the end that go with them. This is very different from what an AGI would do, or what a person would do, and the difference is easily recognizable and measurable.
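To make that concrete, here is a rough sketch of the entirety of what such a model does, assuming the Hugging Face transformers and torch packages, with GPT-2 standing in for a larger model like GPT-3 (details may differ): you hand it words, and it appends more words.

```python
# Minimal sketch: a causal language model does nothing until handed a prompt,
# then extends that prompt with tokens likely to follow it.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The difference between an AGI and a language model is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model's entire job: sample words that go with the words it was given.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

There is no step in that loop where the model observes, remembers, or changes its mind; it only extends the text it was given.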
>This is the problem with the "argumentum ad qualia"; qualia is simply asserted as this non-measurable thing that "you just gotta feel, man", and then is supported by these assertions of what AI is not and never can be. And how do they back up those assertions? By saying it all reduces to qualia, of course. And they conveniently hide behind the non-falsifiable shell that their belief in qualia provides. It's exhausting.
I wasn't talking about qualia at all here. You misunderstand what I was saying. I was talking about the difference between an AGI and an AI language model. An AGI wouldn't need to have any qualia at all.
Purplekeyboard t1_j0aorp8 wrote
Reply to comment by calciumcitrate in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
>Rather, if you have a trained model that captures representations that are generalizable and representative of the real world, then I think it'd be reasonable to say that those representations are meaningful and that the model holds an understanding of the real world. So, the extent to which GPT-3 has an understanding of the real world is the extent to which the underlying representations learned from pure text data correspond to real-world patterns.
GPT-3 contains an understanding of the world, or at least the text world. So does Wikipedia, so does a dictionary. The contents of the dictionary are meaningful. But nobody would say that the dictionary understands the world.
I think that's the key point here. AI language models are text predictors that functionally contain a model of the world; they hold a vast amount of information, which can make them very good at writing text. But we want to be careful not to anthropomorphize them, which tends to happen when people use them as chatbots. In a chatbot conversation, you are not talking to anything like a conscious being, but to a character which the language model is creating.
By the way, minor point:
>If you fed a human nonsense sensory input since birth, they'd produce an "understanding" of that nonsense sensory data as well.
I think if you fed a human nonsense information since birth, the person would withdraw from everything and become catatonic. Bombarding them with random sensory experiences which didn't match their actions would result in them carrying out no actions at all.
Purplekeyboard t1_j0amzem wrote
Reply to comment by respeckKnuckles in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
I'm referring to two things here. One is having an experience of understanding the world, which of course GPT-3 lacks, as it is not having any experience at all. The other is the state of knowing that you know something and being able to analyze it, look at it from different angles, change your mind about it given new information, and so on.
You could have an AGI machine which had no actual experience, no qualia, nobody really home, but which still understood things per my second definition above. Today's AI language models have lots of information contained within them, but they can only use this information to complete prompts, to add words to the end of a sequence of words you give them. They have no memory of what they've done, no ability to look at themselves, no viewpoints. There is understanding of the world contained within the model in a sense, but THEY don't understand anything, because there is no "them" at all; there is no operator there which can do anything but add more words to the end of the word chain.
Purplekeyboard t1_j0aayw0 wrote
Reply to comment by Nameless1995 in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
One thing that impresses me about GPT-3 (the best of the language models I've been able to use) is that it is functionally able to synthesize information it has about the world to produce conclusions that aren't in its training material.
I've used a chat bot prompt (and now ChatGPT) to have a conversation with GPT-3 regarding whether it is dangerous for a person to be upstairs in a house if there is a great white shark in the basement. GPT-3, speaking as a chat partner, told me that it is not dangerous because sharks can't climb stairs.
ChatGPT insisted that it was highly unlikely that a great white shark would be in a basement, and after I asked it what would happen if someone filled the basement with water and put the shark there, once again said that sharks lack the ability to move from the basement of a house to the upstairs.
This is not information that is in its training material; there are no conversations on the internet or anywhere else about sharks being in basements or being unable to climb stairs. This is a novel situation, one that has likely not been discussed anywhere before, and GPT-3 can take what it does know about sharks and use it to conclude that I am safe in the upstairs of my house from the shark in the basement.
So we've managed to create intelligence (text world intelligence) without awareness.
Purplekeyboard t1_j0a7dwb wrote
Reply to comment by Nameless1995 in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
> You can't really explain those phenomena without hypothesizing that LLMs model deeper relational principles underlying the statistics of the data -- which is not necessarily much different from "understanding".
>
> Sure, sure, it won't have the exact sensori-motor-affordance associations with language; and we have to go further for grounding; but I am not sure why we should be drawing a hard line to "understanding" because some of these things are missing.
AI language models have a large amount of information that is baked into them, but they clearly cannot understand any of it in the way that a person does.
You could create a fictional language, call it Mungo, and use an algorithm to churn out tens of thousands of nonsense words: fritox, purdlip, orp, nunta, bip. Then write another highly complex algorithm to combine these nonsense words into text, and use it to churn out millions of pages of Mungo text. You could make some words much more likely to appear than others, and give it hundreds of thousands of rules to follow regarding which words are likely to follow other words. (You'd want an algorithm to write all those rules as well.)
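Here's a toy sketch of that idea (my own illustration, far simpler than the "highly complex algorithm" imagined above): invent nonsense words, invent arbitrary word-to-word statistics, and churn out text that has structure but no meaning.

```python
# Toy sketch of "Mungo": nonsense words, arbitrary rules about which words
# tend to follow which, and endless pages of statistically patterned gibberish.
import random

random.seed(0)
consonants, vowels = "bdfgklmnprstvz", "aeiou"

def make_word():
    # Alternate consonants and vowels for pronounceable nonsense ("purdlip"-ish).
    length = random.randint(3, 7)
    return "".join(
        random.choice(consonants if i % 2 == 0 else vowels) for i in range(length)
    )

vocab = list({make_word() for _ in range(10_000)})

# The "rules": every word gets a randomly weighted set of likely successors.
followers = {w: random.sample(vocab, 20) for w in vocab}
weights = {w: [random.random() for _ in range(20)] for w in vocab}

def mungo_text(n_words=50):
    word = random.choice(vocab)
    out = [word]
    for _ in range(n_words - 1):
        word = random.choices(followers[word], weights=weights[word])[0]
        out.append(word)
    return " ".join(out)

# Millions of pages of this would teach a model to imitate Mungo's statistics
# perfectly well, and every sentence would still mean nothing.
print(mungo_text())
```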
Then take your millions of pages of text in Mungo and train GPT-3 on it. GPT-3 would learn Mungo well enough that it could then churn out large amounts of text that would be very similar to your text. It might reproduce your text so well that you couldn't tell the difference between your pages and the ones GPT-3 came up with.
But it would all be nonsense. And from the perspective of GPT-3, there would be little or no difference between what it was doing producing Mungo text and producing English text. It just knows that certain words tend to follow other words in a highly complex pattern.
So GPT-3 can define democracy, and it can also tell you that zorbot mo woosh woshony (a common phrase in Mungo), but these both mean exactly the same thing to GPT-3.
There are vast amounts of information baked into GPT-3 and other large language models, and you can call it "understanding" if you want, but there can't be anything there which actually understands the world. GPT-3 only knows the text world; it only knows what words tend to follow what other words.
Purplekeyboard t1_izih5hd wrote
Reply to [R] What the DAAM: Interpreting Stable Diffusion and Uncovering Generation Entanglement by tetrisdaemon
>descriptive adjectives attend too broadly.
If this means that words in a prompt modify the whole prompt and not just the phrase the word is part of, everyone who uses Stable Diffusion knows this. If your prompt is "girl, chair, sitting, computer, library, earrings, necklace, blonde hair, hat", and you modify that to specify "red chair", you're likely to also get a red hat, or now the girl will be wearing a red shirt, or various other parts of the image may turn red.
If you change the prompt from library to outdoors, and add the word snow, it will likely be snowing, but also the earrings or a pendant on the necklace may now be in the shape of a snowflake.
This is how Stable Diffusion works.
Purplekeyboard t1_iybzpsh wrote
Reply to ELI5 How is the Tether price I pay determined? by LGZee
In the crypto world, everything is massively manipulated and people on the outside can only guess at how things really work.
Theoretically, everyone buys and sells Tethers, but they just trust Tether so much that any time the price starts to drop below $1, clever people rush in to buy it, knowing they will later be able to sell it for $1.
Actually? Tether itself is probably buying many of those Tethers, in large numbers, to keep the price at $1, and if it ever stopped, the price would collapse.
But it's impossible to know. The crypto world is a crooked casino, and you know everything is fixed but you're never sure exactly how. Is the blackjack dealer stacking the deck against you? Are the slot machine payoff tables completely different from what they say? Are the big winners secretly working for the casino? You never really know.
Purplekeyboard t1_iy5xabk wrote
Reply to In Mexico men die almost 8 more times than women. 16% of all homicide victims had high school or higher education. [OC] by Altruistic_Olives
>In Mexico men die almost 8 more times than women.
There are two possibilities here. One, Mexican men die eight times, while Mexican women only die once. Which is nice for the men, being resurrected like that all the time.
Two, Mexican men's death rate is 8 times what it is for women. Assuming the average Mexican woman lives to be 80, I believe this means the average life expectancy for Mexican men is 10 years old. This must be rough for women, having to choose between marrying a boy who is under the age of 10, or marrying a corpse.
Purplekeyboard t1_iy4g104 wrote
For once, someone should post a graph like this, but for applying to restaurant jobs.
It would go something like this. Went on Indeed, put in 50 applications, got 50 phone calls. Ignored 40 of them, answered 10, got 10 interviews. Blew off 8 of them, went to the other 2, got 2 job offers, accepted them both. Only showed up the first day for one of them.
Purplekeyboard t1_iy0gch7 wrote
Reply to comment by shortyninja in ELI5: If allergies, and especially anaphylaxis, are so common, why do we still need prescriptions for epi pens and such? by boomokasharoomo
Ah, here in the U.S. you can buy a bottle of 500 of them in the grocery store.
Purplekeyboard t1_ixpqywt wrote
I don't know how to read any of that.
Purplekeyboard t1_ixpb9yw wrote
Worked for me. Stable Diffusion 2.0 still has the problem of putting parts of people out of the frame. NovelAI solved this problem and put out a paper explaining how they did it.
Purplekeyboard t1_ixleo6f wrote
Reply to comment by bazmonkey in ELI5: Why couldn't something that says "Cook at 400 degrees for 15 minutes" theoretically be cooked at 6000 degrees for 1 minute? by BitchImLilBaby
Heh, at 6000 degrees the steel the oven was made from would melt, as would any glass parts. You would have a nice pile of red hot molten steel and glass burning through your floor.
Purplekeyboard t1_iwykhwh wrote
Reply to [OC] Deaths from Police Shootings: Gender Gap is 9x Larger than Race Gap by JelloBackground8007
As I understand it, this clearly demonstrates the systemic sexism that comes from living in a matriarchy.
Someone might think this is because men commit more violent crimes than women, but we on reddit know that all people are the same and that there are no differences between any groups of people, so this cannot be.
Purplekeyboard t1_iwyk88t wrote
Reply to comment by jrm19941994 in [OC] Deaths from Police Shootings: Gender Gap is 9x Larger than Race Gap by JelloBackground8007
> Both the demographics in blue commit violent crimes at significantly higher rates than the red.
This is forbidden knowledge, you must unknow this immediately.
Purplekeyboard t1_iwyk5r3 wrote
Reply to comment by magnesiumb in [OC] The business track record of Elon Musk by born_in_cyberspace
The real-world utility of text generation is still to come. If these models continue to get better, at some point there will be endless uses for them: a personal assistant for every person on the planet, replacing millions of phone jobs as text is turned into voice using text-to-speech, and so on.
Purplekeyboard t1_ivfvqmd wrote
The map is totally pointless; there is no information carried on it. So it's just a table of numbers, not a visualization.
Purplekeyboard t1_iuk4my5 wrote
Reply to What to do when you are in too much debt? by dbot77
Stop borrowing money. Stop spending more than you make. Spend less than you make, and pay off the debt.
Purplekeyboard t1_iufoaou wrote
You aren't. This is an "old wives' tale". Our immune systems are far more robust than this.
Purplekeyboard t1_iu2rvpa wrote
Reply to comment by HenriettaHiggins in [OC] Racial breakdown of students at Harvard, Yale, Princeton, MIT, Stanford compared to students scoring 1400+ on the SAT by tabthough
Unless you're Asian. Then it's not so good.
Purplekeyboard t1_itwhvpv wrote
Subway will let people open a new Subway 12 feet from an already existing one. They're cheap to open, they're everywhere, and they make very little money for the operator.
Purplekeyboard t1_j10udbc wrote
Reply to Anarchism at the End of the World: A defence of the instinct that won’t go away by Sventipluk
The problem I have with anarchism is that it seems to be more of a wish fulfillment fantasy than any sort of reasonable political philosophy.
The obvious response to anarchism goes along the lines of, "What happens to your anarchist society when the tanks come rolling over the border and you get invaded?" And anarchists either get unrealistic, and say "We could fight off a well trained powerful modern military with sticks and hunting rifles", or they admit they have no solution to this and say "But maybe someday".
When the primary criticism of your proposed system of government is that it is impossible to achieve, and your response is, "Yes, it is impossible, but maybe some day it will become possible," I have to wonder what the point is of even talking about it.
So anarchists end up claiming their system is impossible to achieve (today), while also claiming it is an ideal to reach for. Why not focus instead on whatever actually is possible? If we can't have anarchy because the powerful will take advantage of it and seize control, then what can we have which is both reasonable and in keeping with the values that anarchists have?