User1539
User1539 t1_jefk4rg wrote
Reply to comment by FaceDeer in The only race that matters by Sure_Cicada_4459
Definitely wearying ...
But, also, ask them why the AI in Terminator went bad. The only answer, because none is ever given, is 'Because the plot needed to happen'.
The official story is that it just became sentient and said 'Yeah, those humans that have learned and created and ultimately organized themselves into countries and finally built me from the ground up? Terrible! Get rid of them!'
It never says why; we're just expected to be so self-loathing that it makes sense, so we never question it.
User1539 t1_jeeucdg wrote
Reply to comment by Sure_Cicada_4459 in The only race that matters by Sure_Cicada_4459
OMG, this ... I'm so tired of hearing about Terminator!
User1539 t1_jeeu7wd wrote
Reply to comment by NonDescriptfAIth in The only race that matters by Sure_Cicada_4459
> Allow me to be incredibly clear. If we continue on the path we are on. We will die.
Okay, I was kind of there with you, taking it with a grain of salt, until that statement.
Take a deep breath, there's a lot you haven't considered.
First, you're assuming AGI will happen, and immediately result in ASI, which will be used by some huge government to immediately take control, or have missiles launched on them to prevent that.
If China could wipe us off the face of the earth, or Russia for that matter, as easily as that, don't you think they would have? I mean, what are they waiting for?
We're already utilizing the most powerful algorithms to farm dopamine ... and it's not working. Something no one talks about is how, after all the social cost of social media, almost none of those companies are actually profitable. Sure, their stock has value, because they're publicly traded and investors decide what they're worth. But as plain businesses? Twitter has never brought in more money than it has spent. Neither has Discord. Almost no one has!
So, we're sort of already running aground on that whole idea, and when people don't have money, because there's no work to do, there's no reason to want their attention.
A lot of things you assume will happen would have already happened if it could, and a lot of the other stuff sort of assumes an innate cruelty. Like governments and corporations will needlessly, and pointlessly court rebellion by going out of their way to torture their citizens.
Why?
For the most part, what governments have been building towards since the dawn of time is stability. You see fewer turnovers in countries, you see less overt war, and when it does happen, you see more and more unity to stop that war.
Stability is not necessarily good, since what we're keeping stable is not the greatest system, but it's not like these governments that have been building towards stability are going to suddenly go nuts and start destroying themselves by torturing their citizens for no reason at all.
I get it ... even being a little paranoid, and seeing this pace, you'd come to these conclusions. But, you need to get out of your echo chamber and remember that technology almost always serves to empower the individual, and most individuals are not cruel.
User1539 t1_jebvoxm wrote
Reply to Ray Kurzweil Predicted Simulated Biology is a Path to Longevity Escape Velocity by Dr_Singularity
Fuck it, I'm all in ... he's been better than Nostradamus about this shit.
Just tell me who to vote for to funnel the tax money that way.
User1539 t1_je5co7e wrote
Reply to comment by Neurogence in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
It's all silly. There's no way it'll ever happen, and all of this is just pissing in the wind.
No one is going to stop because it's a highly competitive space, and anyone who does stop is just giving time to the competition to either catch up or get further ahead.
Even if OpenAI and Google said they were stopping, I wouldn't believe them.
User1539 t1_je2f9u0 wrote
Reply to comment by JVM_ in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
yeah, AGI is likely to be the result of self-improving non-AGI AI.
It's so weird that it could be 10 years, 20 years, or 100 and there's no really great way to know ... but, of course, just seeing things like LLMs explode, it's easier to believe 2 years than 20.
User1539 t1_je1q3go wrote
Reply to comment by JVM_ in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Also, it's a chicken-and-egg problem, where they're looking at eggs saying 'No chickens here!'.
Where do you think AGI is going to come from?! Probably non-AGI AI, right?!
User1539 t1_jdzsxbk wrote
Reply to comment by Shiningc in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
My point is that we don't need AGI to be an incredibly disruptive force. People are sitting back thinking 'Well, this isn't the end-all be-all of AI, so I guess nothing is going to happen to society. False alarm everybody!'
My point is that, in terms of traditional automation, pre-AGI is plenty to cause disruption.
Sure, we need AGI to reach the singularity, but things are going to get plenty weird before we get there.
User1539 t1_jdy4x5l wrote
Reply to comment by Sashinii in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Some people are already on opposing ends of that spectrum. Some people are crying that ChatGPT needs a bill of rights, because we're enslaving it. Others argue it's hardly better than Eliza.
Those two extremes will probably always exist.
User1539 t1_jdy4opa wrote
Reply to comment by EnomLee in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I've been arguing this for a long time.
AI doesn't need to be 'as smart as a human', it just needs to be smart enough to take over a job, then 100 jobs, then 1,000 jobs, etc ...
People asking if it's really intelligence or even conscious are entirely missing the point.
Non-AGI AI is enough to disrupt our entire world order.
User1539 t1_jdy4ig4 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
We need real, scientific, definitions.
I've seen people argue we should give ChatGPT 'rights' because it's 'clearly alive'.
I've seen people argue that it's 'no smarter than a toaster' and 'shouldn't be referred to as AI'.
The thing is, without any clear definition of 'Intelligence', or 'consciousness' or anything else, there's no great way to argue that either of them are wrong.
User1539 t1_jd2la65 wrote
Reply to comment by ground__contro1 in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
Oh, yeah, I've played with it for coding, and it told me it did things it did not do, and it couldn't read the code it produced afterward, so there's no good way to 'correct' it.
It spits out lots of 'work', but it's not always accurate and people who are used to computers always being correct are going to have to get used to the fact that this is really more like having a personal assistant.
Sure, they're reasonably bright and eager, but sometimes wrong.
I don't think GPT is leading directly to AGI, or anything, but a tool like this, even when sometimes wrong, is still going to be an extremely powerful tool.
When you see GPT passing law exams and things like that, you can see it's not getting perfect scores, but it's still probably more likely to get you the right example of case law than a first-year paralegal, and it does it instantly.
Also, in 4 months, it's improved on things like the bar exam about as much as you'd expect a human to improve over 4 years of study.
It's a different kind of computing platform, and people don't know quite how to take it yet. Especially people used to the idea that computers never make mistakes.
User1539 t1_jczslss wrote
Reply to comment by magnets-are-magic in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
Yeah, that reminds me of when it confidently told me what the code it produced did ... but it wasn't right.
It's kind of weird when you can't say 'No, can't you read what you just produced? That's not what that does at all!'
User1539 t1_jczs306 wrote
Reply to comment by Ricky_Rollin in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
Yeah, in how people can use it, that's definitely a good description and I've been asking google straight up questions for years already.
I do think it's changing the game for a lot of things, like how customer service bots are going to be actually good now.
User1539 t1_jcz0uft wrote
Reply to comment by ErikaFoxelot in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
Yeah, I've definitely found that in coding. It does work at the level of a very fast and reasonably competent junior coder. But, it doesn't 'understand' what it's doing, like it's just copying what looks right off stack overflow and gluing it all together.
Which, if I need a straightforward function written, might be useful, but it's not going to design applications you'd want to work with in its current state.
Of course, in a few weeks we'll be talking about GPT5 and who even knows what that'll look like?
User1539 t1_jcyh91j wrote
Reply to comment by SirEblingMis in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
I think it can cite sources if you ask it to, or at least it can find supporting data to back up its claims.
That said, my personal experience with ChatGPT was like working with a student who's highly motivated and very fast, but only copying off other people's work without any real understanding.
So, for instance, I'd ask it to code something ... and the code would compile and be 90% right, but ChatGPT would confidently state 'I'm opening port 80', even though the code was clearly opening port 8080, which is extremely common in example code.
So, you could tell it was copying a common pattern, without really understanding what it was doing.
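To make that concrete, here's a hypothetical reconstruction of the kind of slip I mean (the function and names are mine, not actual ChatGPT output): the explanation claims port 80, but the code itself binds 8080, the port that shows up everywhere in example servers.

```python
import socket

def open_listener(port: int = 8080) -> socket.socket:
    """Open a TCP listening socket on localhost.

    The model's explanation said "I'm opening port 80" -- but the
    default below is 8080, copied from the common example-code pattern.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))  # actually 8080, not 80
    sock.listen()
    return sock
```

A human reviewer spots the mismatch between the prose and the bind call instantly; the model, asked about it, just repeats the wrong claim.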
It's still useful, but it's not 'intelligent', so yeah ... you'd better check those sources before you believe anything ChatGPT says.
User1539 t1_jcyase7 wrote
Reply to comment by SirEblingMis in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
Did you read the article, though?
"quote from content created by ChatGPT in their essays"
They're allowed to use it as a source, not to write an entire essay.
User1539 t1_jcxyqcn wrote
Reply to comment by Siddhanta101 in Teachers wanted to ban calculators in 1988. Now, they want to ban ChatGPT. by redbullkongen
Yeah, I think the teachers won this argument.
I can't imagine a world where they allow GPT to write essays for them either.
My daughter has already had 'practicals' in her science class in middle school, and it's basically a 15 minute conversation about the subject so the teacher can assess if you're getting the material and not just memorizing the book.
I think we're just going to have to do more of that, and less rote testing. We'll have more short essays written in class and things like that.
I know people who teach online for university, and they say they wouldn't trust an online degree. They know their kids are cheating, but if you can't make them sit in front of you to take tests, there's no way to know.
User1539 t1_jaaxjcc wrote
Reply to comment by Zermelane in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
I don't know about 'behind'. LLMs are a known technology, and training them is still a huge undertaking.
I can imagine a group coming in and finding a much more efficient training system, and eclipsing OpenAI.
The AI aren't self-improving entirely on their own yet, so the race is still a race.
User1539 t1_jaado2e wrote
Reply to comment by czk_21 in Singularity claims its first victim: the anime industry by Ok_Sea_6214
I don't want to get too deep into what I do for a living, but 'tradition' could probably keep it going for 4-5 generations ... long past my retirement age.
That same sense of tradition will keep people in charge for at least as long.
User1539 t1_ja96kkl wrote
Reply to comment by ahtoshkaa2 in Singularity claims its first victim: the anime industry by Ok_Sea_6214
I write software and I've been sort of avoiding a management position for a while.
I'm starting to think I'm going to end my career managing AI to write the software underpinning the processes management comes up with.
If there's any work to be done at all, it'll probably be in a middle-man position like that, because I can tell you from experience the people making the decisions just don't think logically, and will still need someone to point out logical inconsistencies in their ideas, and work through them to something that can be implemented.
Communication with illogical humans has always been the hardest part of my job, so it'll probably be the last thing AI figures out how to do.
User1539 t1_ja8nijl wrote
Reply to comment by ahtoshkaa2 in Singularity claims its first victim: the anime industry by Ok_Sea_6214
It happened overnight too. Some package got popular where you could feed it a file and get a sample of the output, and everyone tried it and never went back.
The place she was working folded by the end of the summer.
User1539 t1_ja7mvf4 wrote
This workflow is going to be something high school students are making compelling anime with in a few months.
I've already seen an industry basically disappear over night. A friend of mine did work where she'd listen to a meeting, and type it out, highlight important sections, etc ... and she was pretty well paid.
One day they just quit getting work. The head of the company realized that most of their clients had gone with an AI solution that did the same job for pennies instead of being charged $100/hour.
I also had some friends who'd supplement their income doing drawings for people, and that all dried up almost entirely, overnight, last summer when all the AI art generation stuff came out.
Again, just, one day they were making decent money drawing things for people on demand, the next day no one was calling them.
We're at the very, very, beginning stage of this, but we're already seeing it happen, and it's so fast it's insane. One day, people need you to produce something. The next day, they don't, and never will again.
User1539 t1_j8dant5 wrote
Reply to comment by TwoWheelAddict in These prosthetics break the mold with third thumbs, spikes, and superhero skins by ChickenTeriyakiBoy1
Most of these seem to be self-made, or made with the help of a single technical person.
Your cousin just needs to hit up his nerdy friend with a 3D printer, apparently.
User1539 t1_jeflzlr wrote
Reply to comment by FaceDeer in The only race that matters by Sure_Cicada_4459
In the TV show, the system that eventually becomes Skynet is taken by a liquid terminator and taught humanity. The liquid terminator basically has a conversation with Sarah Connor where it says 'Our children are going to need to learn to get along'.
So, that's where they were going with it before the series was cancelled, and I was generally pretty happy with that.
I like Terminator as a movie, and the following movies were hit or miss, but the overall fleshing out of things at least sometimes went in a satisfying direction.
So, yeah, they eventually got somewhere with it, but the first movie was just 'It woke up and launched the missiles'.
Which, again, as entertainment is awesome. But, as a theory of how to behave in the future? No.