purepersistence

purepersistence t1_ja3vgi0 wrote

Nobody who talks about uploading yourself has any idea what that means specifically. What's going on there? There's presumably some kind of machine that captures the state of every cell in your brain: how all the neurons are connected, the hormonal influences, the range of signals different types of neurons can produce, and so on. AND it assumes that you are nothing more than what's in your brain. It's as though (somehow) ALL of you can be digitized and then reproduced in a virtual world while you're still a conscious entity that experiences genuine pleasure in your existence.

You are so much more than the conscious thoughts floating around in your brain (which we're still a long way from really understanding biologically). You're an animal with hormones and animal instincts. All of your high-level language is just an abstraction of the physical world in which your ancestors, human and otherwise, evolved for billions of years. You need that physical world, obviously, more than you know. There's nothing else you can understand, even in your dreams. Physical sensations such as taste, sexual pleasure, and physical closeness have a far more deep-rooted place in you than anything specifically human. Feeling the breeze and the sun on your face while taking a walk with a close companion may be something that can be simulated. But without a real loss?

Making a machine that fools everybody and passes the Turing test with flying colors does NOT make it truly conscious. It just means humans are gullible. Look at all the people who read intelligence into chatGPT, which is fucking silly.

7

purepersistence t1_ja12n28 wrote

Yeah, by the 2030s we will overhaul the whole concept of economics, property ownership, taxes, etc. Rich people will be absorbed by the singularity and will just have to go with that. Construction and other forms of labor will be done by robots. And so on and so on. Where do people like you come from? You really believe this? Wait around and do nothing. Don't worry, soon you won't have responsibilities and you'll live in an AI utopia.

The technology is nowhere near that close. But even if it were, people don't change that fast. The people with the money don't mind you fantasizing on reddit. But watch yourself.

21

purepersistence t1_j9viz1k wrote

The post is about LLMs. They will never be AGI. AGI will take AT LEAST another level of abstraction, and might in theory be fed potential responses from an LLM, but it's way too soon to say that would be appropriate vs. a whole new kind of model based on more than just parsing text and finding relationships. There's a lot more to the world than text, and you can't get it by just parsing text.

2

purepersistence t1_j9vi0cg wrote

There's a limit to the quality of output you get from a model that's attempting to generate the next logical sequence of words based on your query. There's no understanding of the world. Just text, parsing, and attention relationships. So there's no sanity check at any level that understands the real-world meaning behind the patterns of text. That's why, in spite of improvements, it will continue to give off-the-wall answers sometimes. Attempting to shield people from outrageous or violent content will also tend to make the tool put a cloak in front of the value it could have delivered. That's why, when you see it censoring itself, you get a lot of words that don't say much beyond excuses.
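As a rough illustration of what "generate the next logical sequence of words" means, here's a toy next-token sampler. This is a hypothetical sketch (a bigram counter over a made-up corpus with greedy selection), nothing like a real LLM's architecture, but it shows how plausible-looking text can fall out of pure co-occurrence statistics with no model of the world behind it:

```python
from collections import Counter, defaultdict

# Toy next-token prediction: count which word follows which in a tiny
# corpus, then always emit the most frequent follower. There is no
# "understanding" anywhere in this pipeline, only co-occurrence counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_token(word):
    # Greedy choice: the single most common continuation seen in "training".
    return followers[word].most_common(1)[0][0]

def generate(start, length):
    out = [start]
    for _ in range(length):
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the", 4))  # -> the cat sat on the
```

The output is fluent-sounding, but the "model" would just as happily continue from a nonsense prompt; nothing checks the result against reality, which is the point being made above.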

3

purepersistence t1_j76b6iq wrote

Reply to comment by visarga in Possible first look at GPT-4 by tk854

I asked chatGPT and it named a swimmer. I asked why it thought that qualified as crossing on foot, and it said something about how most people think of that as crossing without the assistance of a motor or flotation device. Then I asked who had crossed strictly by walking/running, and it named "Dave Henson", who went 32 miles "across the water". I asked if he crossed in a tunnel, since obviously he couldn't have run on the water, and got this answer, which I *think* is bogus:

"Dave Henson ran on a support vessel that accompanied him during his crossing of the English Channel. The support vessel was equipped with a specially designed treadmill, on which Dave was able to run and cover the distance of the crossing. The support vessel followed a designated shipping lane, and Dave's run was monitored by a team of officials and observers to ensure that the rules for this type of crossing were followed."

Really? They designed a special treadmill that made Dave run at the same speed as the vessel, all so he could prove he could run that far on a treadmill (which you could do at home with exactly the same challenge, instead of riding across the Channel)? I don't buy it.

3

purepersistence t1_j768y8o wrote

Reply to comment by YobaiYamete in Possible first look at GPT-4 by tk854

>I think most people wouldn't even mind ads nearly as much, if they were relevant

For me, relevant means it's something I'm likely to want to buy. But I'm pretty conservative with my money and I'm not very materialistic. And I'm not a young person building a life. I have a routine that doesn't change much. So if it's REALLY relevant for me personally then I might see an ad per month for a reasonably priced product I can make good use of and wouldn't mind the interruption. It will be a cold day in hell when that happens.

2

purepersistence t1_j7689y4 wrote

Reply to comment by X-msky in Possible first look at GPT-4 by tk854

If you're debugging code, you don't have to be accurate until the problem is fixed. Mistakes will be common. Accuracy is not absolutely necessary, but competence is. It will be a long damn time before something like chatGPT will find and fix subtle bugs that occur in a production system with many interacting services distributed across multiple computers running software controlled by different corporations.

2

purepersistence OP t1_j6x005a wrote

>“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”

The problem is people believe that. With chatGPT it just ain't so. I've given it lots of coding problems. It frequently generates bugs. I point out the bugs and sometimes it corrects them. The reason they were there to begin with is that it didn't have enough clues to grab the right text. Just as often or more, it agrees with me about the bug but its next change fucks up the code even more. It has no idea what it's doing. But it's still able to give you a very satisfying answer to lots and lots of queries.

1

purepersistence OP t1_j6w2c3h wrote

>What about their point do you find silly?

Which part of their point is NOT silly? You just said it right there! In spite of all the doom we already predict, there's this idea that we would just give up control to AI anyway because it supposedly CAN make better decisions. How does it get more silly than that?

0

purepersistence OP t1_j6u86tk wrote

>And at some point manipulate some humans to give it a more physical presence on the world.

There's too much fear around AI for people to let that happen. In future generations maybe - that's off subject. But young people alive today will not witness control being taken away from them.

−1

purepersistence OP t1_j6u49iu wrote

>At such level, of course an ASI (Artificial super intelligence) could start manipulating the physical world

"of course"? Manipulate the world with what exactly? We're fearful of AI today. We'll be more fearful tomorrow. Who's giving AI this control over things in spite of our feared outcomes?

1

purepersistence OP t1_j6tvkup wrote

>ASI might kill humans quickly like we kill insects.

How does an AI get control of hardware that we don't give it? How does AI develop goals that disagree with our own unless we allow it? Ain't gonna happen. Too many people will be convinced, by reddit posts like these, to prevent it.

1

purepersistence OP t1_j6tua19 wrote

I see the threat, and like millions of others I won't let that happen. It's not like we don't know how our computers work. Hell, chatGPT is just a language grab bag. If you drill down on that code you can understand every line of it, and "intelligence" is far from what you'll find. I maintain that any autonomy will be by design, and like I say, all the fears in the souls of billions of people aren't going to let your future get started, because the possible dangers will be easily imagined.

Think about how we humans are. Not only will the possible dangers be anticipated, a whole lot of impossible ones will be too. Will not happen.

−7