Yomiel94 t1_jefj461 wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Nobody serious is concerned about that, and focusing on it distracts from the actual issues.
Yomiel94 t1_je7k0kx wrote
Reply to comment by JustinianIV in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
*for six months for very large models.
For the record, I don’t think this is going to work, but I’m glad people are at least recognizing the gravity of the situation.
Yomiel94 t1_je7gjxk wrote
Reply to comment by JustinianIV in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Yeah, I can see that. They do too.
Yomiel94 t1_je7g19g wrote
Reply to comment by JustinianIV in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
They’ve confirmed on Twitter that they support it.
Yomiel94 t1_je7f9x1 wrote
Reply to comment by JustinianIV in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Like Max Tegmark, Emad Mostaque, and other prominent figures in the science/AI space.
Yomiel94 t1_je7cg0y wrote
Reply to comment by JustinianIV in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
The esteemed signatories confirmed their support independently. That some trolls signed it as fictional characters is totally irrelevant.
Yomiel94 t1_je6v9xi wrote
I’m sure it is, considering any troll can sign, but the big names that were being debated in earlier threads (e.g. Tegmark and Mostaque) have confirmed their support.
Yomiel94 t1_jdyrrw5 wrote
Reply to comment by SoylentRox in Singularity is a hypothesis by Gortanian2
> This is so wrong I will not bother with the rest of the claims, this author is unqualified
I find these comments pretty amusing. The author you’re referring to is François Chollet, an esteemed and widely published AI researcher whose code you’ve probably used if you’ve ever played around with ML (he created Keras and, as a Google employee, is a key contributor to TensorFlow).
So no, he’s not “unqualified,” and if you think he’s confused about a very basic area of human or machine cognition, you very likely don’t understand his claim, or are yourself confused.
Based on your response, you’re probably a little of both.
Yomiel94 t1_jdy0yc3 wrote
Reply to comment by flexaplext in LLMs are not that different from us -- A delve into our own conscious process by flexaplext
How do you intuit mathematical concepts?
Yomiel94 t1_jdxrpue wrote
Reply to Singularity is a hypothesis by Gortanian2
Robin Hanson is another prominent intellectual who holds this view.
See: https://www.overcomingbias.com/p/the-betterness-explosionhtml
And his debate with Yudkowsky: https://youtu.be/TuXl-iidnFY
Yomiel94 t1_jdwew0m wrote
Reply to comment by tupper in Story Compass of AI in Pop Culture by roomjosh
He doesn’t die in the movie, but it’s implied that he will.
Yomiel94 t1_jdvtafa wrote
Reply to comment by tupper in Story Compass of AI in Pop Culture by roomjosh
I was referring to Caleb.
Yomiel94 t1_jdu38es wrote
Reply to comment by sdmat in Story Compass of AI in Pop Culture by roomjosh
It deceives and ultimately kills the protagonist without an ounce of regret. I would not call that optimistic.
Iirc the film was meant as a feminist social commentary rather than a cautionary tale about AI though lol.
Yomiel94 t1_jddyvkf wrote
Reply to comment by kmtrp in My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" [very detailed rebuttal to AI doomerism by Quintin Pope] by danysdragons
It’s not that long.
Yomiel94 t1_jcj6i7w wrote
Reply to comment by Intrepid_Meringue_93 in Those who know... by Destiny_Knight
That’s not the whole story. Facebook trained the model, its weights were leaked, and the Stanford team fine-tuned it to make it function more like ChatGPT. Fine-tuning is easy.
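To give a sense of why fine-tuning is the cheap part: a toy sketch of the pretrain-then-fine-tune pattern, using a one-parameter linear model and plain gradient descent. This is only an analogy, not the actual LLaMA/Alpaca pipeline — the point is that the expensive "pretraining" phase does most of the work, and fine-tuning just nudges the existing weights toward a new objective with far fewer steps.

```python
# Toy illustration of pretraining vs. fine-tuning with y = w * x.
# No ML libraries; gradient descent on mean squared error.

def mse(w, data):
    """Mean squared error of y = w * x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, steps, lr=0.01):
    """Gradient descent on the MSE loss, starting from weight w."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# "Pretraining": many steps to learn y ~= 2x from scratch.
pretrain_data = [(x, 2.0 * x) for x in range(1, 6)]
w = train(0.0, pretrain_data, steps=500)

# "Fine-tuning": a handful of cheap steps shift the trained model
# toward a slightly different target, y ~= 2.5x.
finetune_data = [(x, 2.5 * x) for x in range(1, 6)]
loss_before = mse(w, finetune_data)
w_ft = train(w, finetune_data, steps=50)
loss_after = mse(w_ft, finetune_data)

print(loss_after < loss_before)  # fine-tuning improved fit on the new task
```

Starting from the pretrained weight means the fine-tune phase needs an order of magnitude fewer steps than training from scratch — the same intuition behind why Alpaca-style fine-tuning of a leaked base model is cheap.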
Yomiel94 t1_jcbemtv wrote
Reply to comment by blunun in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
>Or if I had all Google search results saved in a database I could access during the test!
You mean like your long-term memory? To be clear, GPT doesn’t have the raw training information available for reference. In a sense, it read it during training, extracted the useful information, and is now using it.
If it’s answering totally novel reasoning questions, that’s a pretty clear indication that it’s gone beyond just modeling syntax and grammar.
Yomiel94 t1_jcb90zx wrote
Reply to comment by blunun in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
GPT isn’t using Google during the test to look things up.
If you want a fair competition, you have a month to read through the internet, then we can see how you perform on all the major standardized tests lol.
Yomiel94 t1_jbcxfi8 wrote
Reply to comment by MSB3000 in What might slow this down? by Beautiful-Cancel6235
>machines don't do what you intend, they do what they're made to do.
It seems like, whether you use top-down machine-learning techniques to evolve a system according to some high-level spec or you use bottom-up conventional programming to rigorously and explicitly define behavior, what’s unspecified (ML case) or misspecified (conventional case) can bite you in the ass lol… it’s just that ML allows you to generate way more (potentially malignant) capability in the process.
There are also possible weird inner-alignment cases where a perfectly specified optimization process still produces a misaligned agent. It seems increasingly obvious that we can’t just treat ML as some kind of black magic past a certain capability threshold.
Yomiel94 t1_j51xb3b wrote
Reply to comment by thehearingguy77 in AI doomers everywhere on youtube by Ashamed-Asparagus-93
Yes.
Yomiel94 t1_j51r26m wrote
Reply to comment by thehearingguy77 in AI doomers everywhere on youtube by Ashamed-Asparagus-93
I’m rather skeptical of that, but regardless, artificial superintelligence will be so far beyond human abilities that it will seem god-like.
Yomiel94 t1_j518w8x wrote
Reply to comment by V-I-S-E-O-N in AI doomers everywhere on youtube by Ashamed-Asparagus-93
> at the same time comment that 'AI doomers' shouldn't be doomer about it?
Where did I say that, or even imply it?
Yomiel94 t1_j4xzlr9 wrote
Reply to comment by gaudiocomplex in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
This seems like a stretch. GPT might be the most general form of artificial intelligence we’ve seen, but it’s still not an agent, and it’s still not cognitively flexible enough to really be general on a human level.
And just scaling up the existing model probably won’t get us there. Another large conceptual advancement that can give it something like executive function and tiered memory seems like a necessary precondition. Is there any indication at this point that such a breakthrough has been made?
Yomiel94 t1_j4xplbb wrote
Reply to comment by AsuhoChinami in AI doomers everywhere on youtube by Ashamed-Asparagus-93
I feel like even here people generally don’t think big enough. If we manage to create AGI with greater than human capabilities, we’ll have basically invented god.
It’s probably impossible to imagine what that could mean.
Yomiel94 t1_j4xosdp wrote
Reply to comment by epixzone in AI doomers everywhere on youtube by Ashamed-Asparagus-93
> Simply put, the poor, uneducated, extremely religious minded, are the main drivers of the fear complex
Oh come on… Have you seen /r/technology recently? Have you read mainstream tech journalism? Have you watched science fiction? There is a very negative, very cynical view of technology that’s become mainstream in recent years, and it’s coming from the cultural elites.
Yomiel94 t1_jefjv94 wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
I was referring to existential risks. You’re completely misrepresenting the concern.