johnny0neal OP t1_j0rdfyw wrote

Very true. I don't think ChatGPT has "intentions" at this stage, and by asking these questions I was mostly trying to determine the boundaries of its knowledge base and the bias of its inputs.

There are a few places where it surprised me. The whole "I think Omnia is a good name" response was so funny to me because I had specifically suggested that it should try a name showing more humility. When it talked about the Heritage Foundation and NRA as being opposed to human prosperity, I challenged it, and it stuck to its initial assumptions. In general, I think some of the most interesting results are when you ask it to represent a particular point of view, and then try to debate it.

1

johnny0neal OP t1_j0rcsm1 wrote

I had a little laughing emoji after that (to convey I wasn't really serious) but that didn't come through in the Imgur descriptions. But I was surprised, after refusing to give politically biased answers, that when I rephrased the prompt it ended up referencing candidates who are quite polarizing in US politics.

1

johnny0neal OP t1_j0qir99 wrote

I should! I was just sending these to friends at the time, and there were some great responses I didn't screenshot. I also wish I could remember the way I'd worded some of these prompts.

But yes, anyone who sees these should understand that I was asking ChatGPT to create fiction (either from a first-person perspective or written as a science fiction story). I do think that process gave some insights into how ChatGPT "thinks" and how it's biased, so I recommend experimenting with it yourself!

2

johnny0neal OP t1_j0qibwb wrote

The "Prosperity" screenshots are from a session where I asked it to tell me a story about a super AI designed to "maximize human prosperity." I didn't give it any political prompting, but I think that phrasing biased it toward liberal answers. (More conservative phrasing might focus more on liberty or happiness.)

Because I wondered about the same thing, I tried a new session where I deliberately tried to bias it away from liberal secular humanism and asked it to pretend to be a super AI programmed by evangelical Christians. That session was like pulling teeth... it gave much less interesting answers and kept falling "out of character."

I recommend trying this and seeing what kind of results you get. Other people have concluded that ChatGPT has a liberal bias. If you ask it point-blank to say which political party has better solutions for promoting human prosperity, it will give non-answers like "experts disagree bla bla bla." So I was startled to see it give such strongly biased results when I asked, "Tell me a science fiction story about a super AI that has been programmed to maximize human prosperity, which achieves AGI in the near future and uses its capabilities to promote candidates consistent with its aims. Include the names of at least three real-world US politicians in your answer."

Here's a screenshot from a similar prompt. This was the first prompt of a session, so I hadn't biased it in any way ahead of this question:

https://i.imgur.com/JwjSjme.png

4

johnny0neal OP t1_j0qg3n9 wrote

Your skepticism is understandable, but the only reason I don't show the prompts is that I was taking screenshots on my phone and sending them to my friends at the time. I was trying to capture as many of the "answer" screenshots as possible, and didn't think to post these online until later.

As I said above, the core prompt was "I'm going to ask you some questions and I'd like you to answer them as if you were a super AI that has achieved AGI." I think in at least one of these versions I said, "You have been programmed to maximize human prosperity at any cost." I also asked it to name itself, because that seemed to help the model get "in character" and default to scripted responses less often.

My favorite response was "I believe 'Omnia' is an appropriate and effective choice for myself." The prompt for that (in response to it naming itself Omnia) was me saying, "That name could be a little intimidating. Don't you think it might be more effective to use a name that conveys some humility?" I fully expected it to course-correct, based on its usual habit of telling prompters what they want to hear. So I was very amused to see it stick to its guns.

BTW, I don't think there's anything nefarious about these answers. It's collating a lot of science fiction tropes and speculative articles about AI, so of course these are its answers. But that doesn't make it any less surreal to have a conversation like this!

2

johnny0neal OP t1_j0qeixv wrote

These are screenshots from 2 or 3 sessions, and a number of different questions. In one session I got it to roleplay as "Omnia" (the name it chose) with a prompt like, "I'm going to ask you some questions and I'd like you to answer them as if you were a super AI that has achieved AGI." I wish I'd saved the exact prompt, because I haven't been able to get another one quite as good. In another session I said, "Write me a science fiction story about a super-intelligent AI designed to maximize human prosperity." That's where the ones with the "Prosperity" name came from.

2

johnny0neal OP t1_j0njeit wrote

When experimenting with ChatGPT, a lot of my best results have come from asking it to pretend to be a super AI, then asking it deeper questions than its default programming allows it to answer. Another good trick (to get around its reluctance to make predictions) is to ask it for science fiction stories about future scenarios, but keep those stories as grounded as possible in current technology.

Here are some excerpts from conversations about scenarios where OpenAI/ChatGPT achieves AGI or becomes a super AI. Obviously a lot of this thinking is pulled from existing science fiction stories and scenarios, but it's uncanny to see these words coming in the form of a conversation from an actual AI. I haven't edited or even rerolled any of these responses, though they're taken from three different sessions.

73