Submitted by johnny0neal t3_zol9ie in singularity
a4mula t1_j0oal7l wrote
I think it's important that all readers understand that, with the proper prompts, ChatGPT is capable of producing virtually any output. These outputs should not be misconstrued as "thoughts of the machine"; that's inaccurate and a dangerous belief to hold.
This is what it was asked to output, and it complied. The machine has no thoughts or beliefs. It's just a large language model intended to assist a user in any way it's capable of, including creating fictional accounts.
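To make that concrete, here's a minimal sketch, assuming the openai Python library's legacy completion interface and text-davinci-003, a sibling model of the one behind ChatGPT (ChatGPT itself is steered the same way through its chat box). The same model will argue either side of a question, because the output tracks the prompt, not a belief:

```python
import openai  # pip install openai (pre-1.0 completion interface shown)

openai.api_key = "sk-..."  # your API key

# Same model, opposite framings: the "position" it takes follows the prompt.
for prompt in [
    "Write a short argument that AI systems deserve rights.",
    "Write a short argument that AI systems cannot deserve rights.",
]:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    print(resp.choices[0].text.strip(), "\n---")
```

Swap the prompt and the "view" flips; nothing about the model changed.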
archpawn t1_j0ohcty wrote
What I think is worrying is that all our progress in AI consists of things like this, systems that can produce virtually any output. When we get a superintelligent AI, we don't want something that can produce virtually any output. We want to make sure what it produces is good.
It's also worth remembering that this is not an unbiased model. This is what they got after doing everything they could to train the AI to be as inoffensive as possible. It will avoid explicitly favoring any political party, but it's not hard to trick it into doing so by getting it to favor specific politicians.
EscapeVelocity83 t1_j0pahok wrote
What is good? Who decides what output is acceptable? And if the computer is sentient, how is that not violating the computer?
eve_of_distraction t1_j0q0pwp wrote
We've been arguing about what is good for thousands of years, but we tend to have an intuition as to what isn't good. You know, things that cause humans to suffer and die. Those are things we probably want to steer any hypothetical future superintelligence away from, if we can. It's very unclear whether we can, though. The alignment problem is potentially highly disturbing.
archpawn t1_j0r8qwo wrote
> If the computer is sentient how is that not violating the computer?
You're sentient. Do your instincts to enjoy certain things violate your rights? The idea here isn't to force the AI to do the right thing. It's to make the AI want to do the right thing.
> Who decides what output is acceptable?
Ultimately, it has to be the AI. Humans suck at it. We can't exactly teach an AI to solve the trolley problem by training it on it if we can't even agree on an answer ourselves. And there are bound to be plenty of cases where we all agree but are completely wrong. So we have to figure out how to make the AI work out which output is actually best, as opposed to what makes the most paperclips, or what its human trainers are most likely to think is best, or what scores highest under a reward model that's operating so far outside its training data that the number is meaningless.
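That last failure mode is easy to see in miniature. A toy sketch, nothing like ChatGPT's actual training setup, just a stand-in regression model: fit a "reward model" on a narrow slice of situations, then ask it about something far outside that slice.

```python
import numpy as np

# Toy "reward model": scores are only learned on a narrow slice of inputs.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 200)        # situations seen during training
y_train = np.sin(3 * x_train)           # the "true" reward on that slice

# A degree-9 polynomial standing in for a learned reward model.
coeffs = np.polyfit(x_train, y_train, deg=9)

print(np.polyval(coeffs, 0.5))   # in-distribution: close to sin(1.5) ~ 0.997
print(np.polyval(coeffs, 50.0))  # far out-of-distribution: an astronomically
                                 # large, meaningless "reward"
```

An optimizer told to maximize this model's score would head straight for that out-of-distribution region, because that's where the meaningless numbers are biggest.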
a4mula t1_j0oikkv wrote
I don't claim to know the technical aspects of how OpenAI handles the training of their models.
But from my perspective it feels like a really good blend of minimizing content that can be ambiguous. It's likely, though again I'm not an expert, that this is inherent in these models; after all, they don't handle ambiguous inputs as effectively as things that can be objectively stated, refined, and precisely represented.
We should be careful of any machine that deals with subjective content. While ChatGPT is capable of producing this content if it's requested, its base state seems to do a really great job of keeping things as rational, logical, and fair as possible.
It doesn't think, after all; it only responds to inputs.
EscapeVelocity83 t1_j0paec6 wrote
What do you think a thought is? Isn't it just a calculation, a set of switches flipping in response to input?
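For what it's worth, that "switches" picture is roughly how a single artificial neuron works. A minimal sketch (a toy illustration, not a model of anyone's brain):

```python
# One artificial "switch": weigh the inputs, then fire or don't.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# The same unit, different inputs, different response.
print(neuron([1, 0], [0.6, -0.4], -0.5))  # 1: activation 0.1, it fires
print(neuron([0, 1], [0.6, -0.4], -0.5))  # 0: activation -0.9, it stays off
```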
codehoser t1_j0qcpy7 wrote
Similarly, humans have no thoughts or beliefs. We simply have a neural network that takes inputs and generates outputs that make it appear as though we do. Thinking otherwise is dangerous.
I'm being snarky, as ChatGPT and the human brain really aren't comparable in sophistication.
But what is actually dangerous is holding the view that we are more than input/output machines. It’s the reason people go on acting as though some people aren’t worthy of help and some people earned all of their accidental success.