purepersistence
purepersistence t1_jabsw4d wrote
Reply to comment by DandyDarkling in How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
>no job is safe in the advent of AGI
AI will replace some jobs and not others long before it becomes AGI, which might never happen, or might be hundreds of years away. A better LLM is not AGI. AGI requires new algorithms and levels of abstraction that nobody has specifically defined.
purepersistence t1_ja7nqng wrote
Reply to An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
I don't want to be uploaded unless orgasms exist up there. The virtual world is a great stimulant. But only if I have a fist and something to grab ahold of.
purepersistence t1_ja444wd wrote
Reply to comment by turnip_burrito in AI technology level within 5 years by medicalheads
>I think we're at the start of the technological singularity right now.
The big bang was actually the start.
purepersistence t1_ja3zj6q wrote
Reply to comment by just-a-dreamer- in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
>I see little beauty in the physical world.
Since you've never experienced anything else, what are you comparing it to? Can it be visualized? Tasted? Heard? Felt?
purepersistence t1_ja3vgi0 wrote
Reply to An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
Nobody who talks about uploading yourself has any idea what that means specifically. What's going on there? There's presumably some kind of machine that captures the state of every cell in your brain and how all the neurons are connected, and accounts for hormonal influences and the range of signals different types of neurons can produce, and so on. AND it assumes that you are nothing more than what's in your brain. It's as though (somehow) ALL of you can be digitized and then reproduced in a virtual world while you're still a conscious entity that experiences genuine pleasure in your existence.

You are so much more than the conscious thoughts floating around in your brain (which we're still a long way from really understanding biologically). You're an animal with hormones and animal instincts. All of your high-level language is just an abstraction of the physical world in which all your ancestors, human and otherwise, evolved for billions of years. You need that physical world, obviously, more than you know. There's nothing else you can understand, even in your dreams. The depth of physical sensations such as taste, sexual pleasure, and physical closeness has a far more deep-rooted place in you than anything specifically human.

Feeling the breeze and the sun on your face while taking a walk with a close companion may be something that can be simulated. But without a real loss? Making a machine that fools everybody and passes the Turing test with flying colors does NOT make it truly conscious. It just means humans are gullible. Look at all the people who read intelligence into chatGPT, which is fucking silly.
purepersistence t1_ja12n28 wrote
Reply to The 2030s are going to be wild by UnionPacifik
Yeah, by the 2030s we will overhaul the whole concept of economics, property ownership, taxes, etc. Rich people will be absorbed by the singularity and will just have to go along with that. Construction and other forms of labor will be done by robots. And so on and so on. Where do people like you come from? You really believe this? Wait around and do nothing. Don't worry, soon you won't have responsibilities and you'll live in an AI utopia.
The technology is nowhere that close. But even if it were, people don’t change that fast. The people with the money don’t mind you fantasizing on reddit. But watch yourself.
purepersistence t1_j9viz1k wrote
Reply to comment by fangfried in What are the big flaws with LLMs right now? by fangfried
The post is about LLMs. They will never be AGI. AGI will take AT LEAST another level of abstraction, and might in theory be fed potential responses from an LLM, but it's way too soon to say that would be appropriate vs. a whole new kind of model based on more than just parsing text and finding relationships. There's a lot more to the world than text, and you can't get it by just parsing text.
purepersistence t1_j9vi0cg wrote
Reply to What are the big flaws with LLMs right now? by fangfried
There's a limit to the quality of output you get from a model that's attempting to generate the next logical sequence of words based on your query. There's no understanding of the world. Just text, parsing, and attention relationships. So there's no sanity check at any level that understands the real-world meaning vs. patterns of text. That's why, in spite of improvements, it will continue to give off-the-wall answers sometimes. Attempting to shield people from outrageous or violent content will also tend to make the tool put a cloak in front of the value it could have delivered. That's why, when you see it censoring itself, you get a lot of words that don't say much other than excuses.
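To make the "next logical sequence of words" point concrete, here is a toy sketch of pattern-based next-word prediction. This is a bigram counter, not ChatGPT's actual architecture (real LLMs use learned attention over tokens), but it illustrates the same property: the output comes purely from text statistics, with no model of what the words refer to.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which -- pure text statistics,
# with no understanding of cats, mats, or the physical world.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Return the most frequent continuation seen in the corpus.
    return following[prev].most_common(1)[0][0]

print(next_word("the"))  # prints "cat" -- the most common word after "the" here
```

There is no step anywhere in this pipeline where real-world meaning could be checked; the model can only ever echo patterns present in its training text.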
purepersistence t1_j773lz8 wrote
Reply to comment by datsmamail12 in OpenAI To Launch ChatGPT App Soon by vadhavaniyafaijan
>buy it for lifetime for 250 dollars
If AI takes off, then in a few years using GPT-4 will be like using a TRS-80 with 4 KB of RAM and a tape drive.
purepersistence t1_j76b6iq wrote
Reply to comment by visarga in Possible first look at GPT-4 by tk854
I asked chatGPT and it named a swimmer. I asked why it thought that qualified as crossing by foot, and it said something about how most people think of that as crossing without the assistance of a motor or flotation device. Then I asked it who had crossed strictly by walking/running, and it named "Dave Henson," who went 32 miles "across the water". I asked if he crossed in a tunnel, since obviously he couldn't have run on the water, and got this answer, which I *think* is bogus:
"Dave Henson ran on a support vessel that accompanied him during his crossing of the English Channel. The support vessel was equipped with a specially designed treadmill, on which Dave was able to run and cover the distance of the crossing. The support vessel followed a designated shipping lane, and Dave's run was monitored by a team of officials and observers to ensure that the rules for this type of crossing were followed."
Really? They designed a special treadmill that made Dave run at the same speed as the vessel, all so he could prove he can run that far on a treadmill (which you could do in your home with exactly the same challenges instead of riding across the channel). I don't buy it.
purepersistence t1_j768y8o wrote
Reply to comment by YobaiYamete in Possible first look at GPT-4 by tk854
>I think most people wouldn't even mind ads nearly as much, if they were relevant
For me, relevant means it's something I'm likely to want to buy. But I'm pretty conservative with my money and I'm not very materialistic. And I'm not a young person building a life. I have a routine that doesn't change much. So if it's REALLY relevant for me personally then I might see an ad per month for a reasonably priced product I can make good use of and wouldn't mind the interruption. It will be a cold day in hell when that happens.
purepersistence t1_j7689y4 wrote
Reply to comment by X-msky in Possible first look at GPT-4 by tk854
If you're debugging code, you don't have to be accurate on every attempt; you only have to be right by the time the problem is fixed. Mistakes will be common. Accuracy is not absolutely necessary. But competence is. It will be a long damn time before something like chatGPT will find and fix subtle bugs that occur in a production system with many interacting services distributed across multiple computers running software controlled by different corporations.
purepersistence OP t1_j71zof1 wrote
Reply to comment by Terminator857 in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
The AI might modify our DNA so we'll walk around happy all day without the urge to make decisions, since we're often our worst enemies doing that. People won't disagree about things anymore. Heaven on Earth.
purepersistence OP t1_j6x005a wrote
Reply to comment by CertainMiddle2382 in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”
The problem is people believe that. With chatGPT it just ain't so. I've given it lots of coding problems. It frequently generates bugs. I point out the bugs and sometimes it corrects them. The reason they were there to begin with is that it didn't have enough clues to grab the right text. Just as often or more, it agrees with me about the bug, but its next change fucks up the code even more. It has no idea what it's doing. But it's still able to give you a very satisfying answer to lots and lots of queries.
purepersistence OP t1_j6w2xl7 wrote
Reply to comment by CertainMiddle2382 in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
Starting with language is a great way to SIMULATE intelligence or understanding by grabbing stuff from a bag of similar text that's been uttered by humans in the past.
The result will easily make people think we're ahead of where we really are.
purepersistence OP t1_j6w2m21 wrote
Reply to comment by GPT-5entient in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>You should try it. It is 275 B parameters (numbers) which drive how ChatGPT responds.
You don't get the difference between parameters and lines of code.
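The distinction matters: parameters are learned numbers, not code, and a few lines of code can define millions of them. A toy illustration (hypothetical sizes, nothing to do with ChatGPT's real dimensions):

```python
import random

# "Parameters" are just numbers that training tunes.
# A handful of code lines can define millions of them.
random.seed(0)
HIDDEN = 1000
W1 = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
W2 = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]

param_count = sum(len(row) for row in W1) + sum(len(row) for row in W2)
print(param_count)  # 2000000 -- two million parameters from roughly six lines of code
```

So "275 B parameters" says nothing about how many lines of code the model is, or whether a human could read meaning out of those numbers by inspecting them.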
purepersistence OP t1_j6w2c3h wrote
Reply to comment by TFenrir in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>What about their point do you find silly?
Which part of their point is NOT silly? You just said it right there! In spite of all the doom we already predict, there's this idea that we would just give up control to AI anyway because it supposedly CAN make better decisions. How does it get more silly than that?
purepersistence OP t1_j6w131j wrote
Reply to comment by Reasonable-Soil125 in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>Can't wait for this to happen
It's good when people admit to having a stake in the game instead of just predicting rational outcomes.
purepersistence OP t1_j6u9a8d wrote
Reply to comment by TFenrir in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
> Do you look down on people
If I differ with your opinion then I'm not looking "down". Sorry if fucking-crazy is too strong for you. Just stating my take on reality.
purepersistence OP t1_j6u86tk wrote
Reply to comment by just-a-dreamer- in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>And at some point manipulate some humans to give it a more physical presence on the world.
There's too much fear around AI for people to let that happen. In future generations maybe - that's off subject. But young people alive today will not witness control being taken away from them.
purepersistence OP t1_j6u6db6 wrote
Reply to comment by TFenrir in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>Are you confident that it couldn't trick that person to let it out?
Yes. We'd be fucking crazy to have a system where one crazy person could give away control of 10 billion people.
purepersistence OP t1_j6u49iu wrote
Reply to comment by just-a-dreamer- in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>At such level, of course an ASI (Artificial super intelligence) could start manipulating the physical world
"of course"? Manipulate the world with what exactly? We're fearful of AI today. We'll be more fearful tomorrow. Who's giving AI this control over things in spite of our feared outcomes?
purepersistence OP t1_j6tvkup wrote
Reply to comment by just-a-dreamer- in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>ASI might kill humans quickly like we kill insects.
How does an AI get control of hardware that we don't give it? How does AI develop goals that disagree with our own unless we allow that? Ain't gonna happen. Enough people will be convinced by reddit posts like these to prevent it.
purepersistence OP t1_j6tua19 wrote
Reply to comment by Surur in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
I see the threat, and like millions of others I won't let that happen. It's not like we don't know how our computers work. Hell, chatGPT is just a language grab bag. If you drill down on that code, you can understand every line of it. And "intelligence" is far from what you'll find. I maintain that any autonomy will be by design, and like I say, all the fears in the souls of billions of people aren't going to let your future get started, because the possible dangers will be easily imagined.
Think about how we humans are. Not only will the possible dangers be anticipated, a whole lot of impossible ones will be too. It will not happen.
purepersistence t1_jabt29k wrote
Reply to comment by nillouise in How can I adapt to AI replacing my career in the short term? Help needed by YaAbsolyutnoNikto
>If you think AI will develop quickly,
the dumbest thing you can do is respond to the hype and quit improving yourself.