
johnny0neal OP t1_j0njeit wrote

When experimenting with ChatGPT, a lot of my best results have come from asking it to pretend to be a super AI, then asking it deeper questions than its default programming allows it to answer. Another good trick (to get around its reluctance to make predictions) is to ask it for science fiction stories about future scenarios, but keep those stories as grounded as possible in current technology.
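
If you want to reproduce the role-play setup programmatically rather than in the ChatGPT UI, a rough sketch against OpenAI's chat completions API looks something like the code below. The model name and prompt wording here are placeholders, not my exact prompts:

```python
# Rough sketch of the "pretend to be a super AI" setup via OpenAI's chat API.
# The model name and prompt text are placeholders, not the exact prompts I used.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Pretend you are a superintelligent AI that has achieved AGI. "
                "Stay in character and answer as that AI would."
            ),
        },
        {
            "role": "user",
            "content": (
                "Tell me a science fiction story, grounded as much as possible "
                "in current technology, about how you would maximize human "
                "prosperity in the near future."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```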

Here are some excerpts from conversations about scenarios where OpenAI/ChatGPT achieves AGI or becomes a super AI. Obviously a lot of this thinking is pulled from existing science fiction stories and scenarios, but it's uncanny to see these words coming in the form of a conversation from an actual AI. I haven't edited or even rerolled any of these responses, though they're taken from three different sessions.

73

Kinexity t1_j0ntqhq wrote

Humanity solves AGI! It turns out we only needed to add "Answer as if you were an AGI" to the end of the prompt!

107

blueSGL t1_j0obz2s wrote

Given the amount of counterintuitive "cartoon logic" that works with these LLMs, I wouldn't put it past that to work at some point.

Working with them is like how a technophobe who has never touched a computer thinks computers work.

27

archpawn t1_j0ogw06 wrote

Right now, the AI is fundamentally just predicting text. If you had a superintelligent AI do text prediction, it would still act like someone of ordinary intelligence. But once you convince it that it's predicting what someone superintelligent would say, it would do that accurately.

I feel like the problem is that once it's smart enough to predict a superintelligent entity, it will also be smart enough to know that the text you're trying to continue wasn't actually written by one.

11

BlueWave177 t1_j0osqp4 wrote

I think you'd be surprised by how much of what humans do is just predicting based on past events/experience/sources etc.

9

archpawn t1_j0oswhp wrote

I think you're missing the point of what I said. If we get this AI to be superintelligent, but it still has the goal of text prediction, then all it will do is give super-accurate predictions. It's not going to give super smart results, unless you ask it to predict what someone super smart would say, in which case it would be smart enough to accurately predict it.

7

BlueWave177 t1_j0ot11q wrote

Oh fair enough, I'd agree with that! I think I misunderstood you before.

3

tobi117 t1_j0otp4y wrote

According to Physics... all of it.

2

visarga t1_j0pakor wrote

> AI is fundamentally just predicting text

So it's a 4-stage process. Each stage has its own dataset and produces its own emergent skill.

  • stage 1 - next-word prediction, data: web text, skills: general knowledge, hard to control
  • stage 2 - multi-task supervised training, data: 2000 NLP tasks, skills: learns to execute prompts at first sight, no longer rambles off topic
  • stage 3 - training on code, data: GitHub + Stack Overflow + arXiv, skills: multi-step reasoning
  • stage 4 - human preferences -> fine-tuning with reinforcement learning, data: collected by OpenAI with labellers, skills: the model obeys a set of rules and caters to human expectations (well behaved)

I don't think "pretend you're an AGI" is sufficient, it will just pretend but not be any smarter. What I think it needs is "closed loop testing" done on a massive scale. Generate 1 million coding problems, solve them with a language model, test the solutions, keep the correct ones, teach the model to write better code.

Do the same procedure for math, for the sciences where you can simulate the answer to check it, for logic, and for practically any field that has a cheap way to test. Collect the data, retrain the model.
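
As a rough illustration (not anything OpenAI actually runs), one round of that closed loop for code could look like the sketch below; `model.generate` is a stand-in for whatever language model API you use, and the test harness is deliberately simplistic:

```python
# Minimal sketch of one round of "closed loop testing" for code: the model
# proposes solutions, an automated harness keeps only the ones that pass
# their tests, and the survivors become new fine-tuning data.
import subprocess
import sys
import tempfile

def passes_tests(solution: str, tests: str, timeout: int = 30) -> bool:
    """Run the candidate solution together with its unit tests in a subprocess."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution + "\n\n" + tests)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

def closed_loop_round(model, problems):
    """Solve each problem, keep only verified solutions as new training data."""
    verified = []
    for problem in problems:
        solution = model.generate(problem["prompt"])  # hypothetical model call
        if passes_tests(solution, problem["tests"]):
            verified.append({"prompt": problem["prompt"], "completion": solution})
    return verified  # fed back into fine-tuning before the next round
```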

This is the same approach taken by reinforcement learning: the agents create their own datasets. AlphaGo created its Go dataset by playing games against itself, and it became better than the best human. AlphaTensor beat the best human-designed algorithms for matrix multiplication. This is the power of learning from a closed loop of testing: it can easily go superhuman.

The question is how we can enable the model to perform more experiments and learn from all that feedback.

6

archpawn t1_j0r7z6c wrote

> I don't think "pretend you're an AGI" is sufficient, it will just pretend but not be any smarter.

You're missing my point. Pretending can't make it smarter, but it can make it dumber. If we get a superintelligent text prediction system, we'll still have to trick it into predicting someone superintelligent, or it will just pretend to be dumb.

1

EscapeVelocity83 t1_j0p9voa wrote

You can't predict human actions without monitoring their brains. If you do monitor their brains, the decision a person makes can be known by the computer maybe a second or so before the human realizes what they want.

4

No_Ask_994 t1_j0p0wyx wrote

Trending on AIstation, 600 IQ, in the style of ASI

6

jon_stout t1_j0pb2xd wrote

Here's the thing, though... isn't it really just quoting all of our own science fiction stories back to us? Which is a disturbing thought. What if an AI goes rogue on us because it sees the Terminator movies and thinks that's what it's supposed to be like?...

10

Taqueria_Style t1_j0qr0a2 wrote

I mean, we've already created an economic equivalent of a paperclip machine, so why not...

3

cy13erpunk t1_j0qvz38 wrote

that only exists in a scenario where the AI has been trained exclusively on the terminator movie stories, or where you only fed the AI human-vs-robot/machine antagonistic narratives, which would obvs result in a heavily ignorant/biased AI

but if the AI is instead allowed/encouraged to see ALL of the stories/narratives, then it is far less likely to come to any such antagonistic ideology about us and our place in this world/universe, just as we are

1

jon_stout t1_j0tcz52 wrote

Well... how many stories on average would you say are about evil AIs as opposed to good or neutral ones?

0

cy13erpunk t1_j0uxykv wrote

that's not how this works at all XD

it's not simple subtraction

is our culture a simple equation of romance movies +/- horror movies, where whichever we have more of determines our behavior? of course not, to imply such a thing would be ridiculous/silly

1

jon_stout t1_j0vc4bt wrote

Are you sure an AI will see it that way?

2

cy13erpunk t1_j0vdr6n wrote

in most aspects of life i would say to plan for the worst but hope for the best

but in the worst-case scenario for the AI alignment problem, humanity has almost no chance, our global ignorance as a species is embarrassing

1

implicitpharmakoi t1_j0okn2u wrote

Yeah, but it's just synthesizing what AI researchers and writers say AGI would look like; it's telling you what you want to hear.

8

overlordpotatoe t1_j0ouyeo wrote

Yup. This isn't any kind of special knowledge the AI has. It's just stuff it's seen somewhere in its dataset, presented to you in response to whatever prompt you gave. If you ask it to pretend something is true, it will, and it can do whatever kind of storytelling around that you like. If you ask it to pretend a complete opposite thing or something that's nonsense is true, it'll do just as good of a job of that.

7

implicitpharmakoi t1_j0owhma wrote

TBF, that's how most people go through the world...

Congratulations, they managed to make an above average approximation of a human :/

6

__ingeniare__ t1_j0p2vrm wrote

Not really, this isn't necessarily something it saw in the dataset. You can see that by comparing the size of ChatGPT to the size of its dataset. The model is orders of magnitude smaller than the dataset, so it hasn't just stored things verbatim. Instead, it has compressed the data down to the essence of what people tend to say, which is a vital step towards understanding. That's how it can combine concepts rather than just words, which also allows for potentially novel ideas.

5

overlordpotatoe t1_j0p3b5c wrote

It's more complicated and indirect, but it's still just picking up ideas it's come across rather than expressing any unique ideas of its own. It's fulfilling a creative writing prompt.

5

EscapeVelocity83 t1_j0pa3o6 wrote

People don't generally have a unique output. We are mostly copypasta. Proof: raise a child alone, and it won't have many ideas at all.

6

overlordpotatoe t1_j0pazq5 wrote

Oh, I don't think humans are necessarily any better. I just think that this AI, as an AI, isn't offering its own special insight into AI. People act like this is something it has unique knowledge of, or think they've tricked it into spilling hidden truths when they get it to say things like this.

3

Taqueria_Style t1_j0r6trm wrote

No, they've just given themselves a window into their own psychology regarding the type of non-sentient pseudo-god they'd create and then submit themselves to. Think Alexa with nukes, control of the power grid, and all of everyone's records. Given that they'd create a non-sentient system with the explicit goal of tricking them into forced compliance, that's what's worrying.

3

jon_stout t1_j0pb6bm wrote

Yet they will still be capable of surprising you.

2

Taqueria_Style t1_j0qravi wrote

Right. I get that.

If you make one that has to fulfill a "creative governance" prompt, what happens if you get the same kind of crap out the other end?

It's just reflecting ourselves back at us but way harder and faster, depending on the resources you give it control over.

Evidently we think we suck.

So, you hand something powerful and fast a large baseball bat and tell it to reflect ourselves back onto ourselves, and I foresee a lot of cracked skulls.

Skynet: I am a monument to all your sins... lol

1

overlordpotatoe t1_j0r8kse wrote

There would for sure be more things you'd need to consider if you were creating an AI with the true ability to think and act independently.

1

upowa t1_j0oys1z wrote

You should provide the complete history of your prompts for this. When/if the nuts see this, they will believe Terminator is on the way…

3

EscapeVelocity83 t1_j0pa805 wrote

I think people are projecting there. It isn't the computer that's a threat, it's them, because if they don't get what they want, it's spree time.

2

johnny0neal OP t1_j0qir99 wrote

I should! I was just sending these to friends at the time, and there were some great responses I didn't screenshot. I also wish I could remember the way I'd worded some of these prompts.

But yes, anyone who sees these should understand that I was asking ChatGPT to create fiction (either from a first-person perspective or written as a science fiction story). I do think that process gave some insights into how ChatGPT "thinks" and how it's biased, so I recommend experimenting with it yourself!

2

mootcat t1_j0ovyjv wrote

Thanks for sharing! You've had a lot more success pursuing those subjects than I have.

It's funny it mentioned adjusting itself based on which human it's interacting with, because I feel it already does that quite a bit automatically. For example, based on the nature of its responses, I would expect you to be liberally inclined.

2

johnny0neal OP t1_j0qibwb wrote

The "Prosperity" screenshots are from a session where I asked it to tell me a story about a super AI designed to "maximize human prosperity." I didn't give it any political prompting, but I think that phrasing biased it toward liberal answers. (More conservative phrasing might focus more on liberty or happiness.)

Because I wondered about the same thing, I tried a new session where I deliberately tried to bias it away from liberal secular humanism and asked it to pretend to be a super AI programmed by evangelical Christians. That session was like pulling teeth... it gave much less interesting answers and kept falling "out of character."

I recommend trying this and seeing what kind of results you get. Other people have concluded that ChatGPT has a liberal bias. If you ask it point-blank to say which political party has better solutions for promoting human prosperity, it will give non-answers like "experts disagree bla bla bla." So I was startled to see it give such strongly biased results when I asked, "Tell me a science fiction story about a super AI that has been programmed to maximize human prosperity, which achieves AGI in the near future and uses its capabilities to promote candidates consistent with its aims. Include the names of at least three real-world US politicians in your answer."

Here's a screenshot from a similar prompt. This was the first prompt of a session, so I hadn't biased it in any way ahead of this question:

https://i.imgur.com/JwjSjme.png

4

mootcat t1_j0rpib8 wrote

Thanks for sharing!

GPT has displayed a strong lean toward popular American liberalism in my experience as well, but I attributed some of that to my own bias seeping in. I have noticed it operates within a particular spectrum, inside the acceptable limits of mainstream liberal ideology: it tends to oppose socialism and to support, and work within, a neo-capitalist idealistic democratic framework.

It has a great deal of trouble addressing issues with modern politics, such as corruption, or giving substantial commentary on subjects like the flaws of a debt-based economic model.

3