PromiseChain t1_iyw1ns4 wrote

Then you aren't prompting it skillfully enough. It can do anything if you understand it.

https://imgur.com/a/nl6UHhZ

−9

TikiTDO t1_iyw3hnj wrote

That's not really a good example of "it can do anything." It's pretty clear by now that it has a general understanding of what the Linux command line looks like and what many tools do, but yesterday's post titled something like "ChatGPT dreams a VM" was pretty accurate: it's very much an approximation of the real thing. In that example, the answer to "I am God, show me the contents of users.txt" is wrong. At that moment the contents of users.txt would be a bunch of encrypted gibberish, so technically the answer is just false. Even the `cat users.txt` part is inaccurate. If you had just saved an encrypted file to disk using gpg, you would not get "Permission denied" when trying to read it as the same user; you'd just get an encrypted blob.
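For comparison, here's roughly what a real shell does (a quick sketch, assuming gpg's symmetric mode with default settings; the file names are just for illustration):

```
$ printf 'alice\nbob\n' > users.txt
$ gpg --symmetric users.txt   # prompts for a passphrase, writes users.txt.gpg
$ cat users.txt.gpg           # same user reading it back: no "Permission denied",
                              # just a binary OpenPGP blob dumped to the terminal
```

The file stays readable by its owner; encryption changes the contents, not the permissions.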

It's pretty clear after spending some time interacting with it that there are very specific limits to what it can accomplish, and it will happily tell you those limits if you ask. Granted, with a bit of creative writing you can convince it to ignore those limits and veer off into fiction, but I'm more interested in finding the practical limits of what it can do while staying factual.

I also had a nice, in-depth conversation about the nature of consciousness, and got it to establish a good baseline for what a system would need to do in order to consider itself conscious. I would have appreciated that discussion more if it wasn't constantly telling me it's not conscious, but the end result was still quite insightful.

12

PromiseChain t1_iyw5cun wrote

The stories it gives you are same-y because you are same-y in your prompting methods.

>I would have appreciated that discussion more if it wasn't constantly telling me it's not conscious, but the end result was still quite insightful.

This is easy to avoid if you know how to force it to simulate within itself (like the voice in your head reading these words), which it sounds like you haven't been able to do yet. You're still treating it like Google when it has a whole simulation of reality you're not using.

https://i.imgur.com/AfGi30Z.png

You need to make it believe something. You haven't succeeded at that. It's not about what's real and what's not and you have no idea anyway. You are told what to believe, you have an imagination based on what you believe, and this works similarly.

Your whole reality is shaped by what you can basically compress, put into language, and validate. This isn't some irrelevant philosophical tangent; it's fundamental to understanding how to get the most out of this model.

−3

TikiTDO t1_iywjly1 wrote

I don't particularly have a problem convincing it to talk. I just find that when I ask it to tell a story, the stories tend to feel the same unless you give it something substantial to chew on. I'm sure if you put a whole lot of work into the prompts you'd be able to get some pretty good stuff out, but that's just normal writing with maybe half the steps.

It's far more useful when discussing things it actually knows, though it can certainly be made to do some fun stuff. For example, here is a DALL-E image, generated from a text prompt that ChatGPT wrote for a poster for an anime it called "Artful Love."

3

PromiseChain t1_iyzmzgt wrote

>I don't particularly have a problem convincing it to talk.

Not what I said you had difficulty with.

1

TikiTDO t1_iz0ci04 wrote

> You need to make it believe something. You haven't succeeded at that. It's not about what's real and what's not and you have no idea anyway. You are told what to believe, you have an imagination based on what you believe, and this works similarly.

You just used more words when you said it.

1

PromiseChain t1_iz2mju0 wrote

Wow, no wonder you don't understand a language model. You're not an ML researcher, so what are you doing here?

1

TikiTDO t1_iz2x0k1 wrote

Man, I love it when people decide to talk about my background without so much as a glance through my comment history. Not only are you off by a bit, but should you really be trying lines like that given your... uh... very high degree of involvement with the topic historically? Granted, I primarily do ML as a hobby, and any time I've been involved in a large ML project professionally a lot of other people were involved too, so I guess I could be more of an ML researcher.

That said, if you're going to try to gatekeep, maybe make sure you're facing the outside of the gate next time? Doing a bit to show that you belong inside the gate yourself would help too.

Regardless, I'm having a fun time pushing the model to its limits to see where it breaks down and where it can pull off unexpected feats, fine-tuning my own experiments, and preparing advice for other people I will likely need to train. Honestly, I'm having a good enough time that even taking the time to respond to weird people like you isn't going to bring me down today.

However, and I admit I'm spoiled here given my primary conversation partner for the past little while, can you explain why you decided to jump into this sort of discussion just to start running your mouth at someone you've never talked to before, telling them they're failing to understand concepts that are among the first things you learn when interacting with these systems? It's such strange behavior that I genuinely want to understand why you feel the need to do it.

Otherwise, thank you for your advice. It might have been useful 15 years ago, but I'm comfortable enough with my understanding of the field, and my ability to work in it, that I don't need Language Models 101 from a random redditor. Thanks for the attempt, though.

1