Present_Finance8707
Present_Finance8707 t1_ja1em06 wrote
Reply to comment by NoidoDev in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
You’re literally saying “put a filter on the AI”. That’s like “just unplug it lolz” levels of dumb. Give me a break.
Present_Finance8707 t1_j9tos8p wrote
Jesus man, how many Tesla call options did you buy?? This is pure Elon-worship…
Present_Finance8707 t1_j9qfavu wrote
Reply to comment by Molnan in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
His arguments don’t hold up. For one thing, we already have powerful generalist agents: Gato is one, and it’s clear that advanced LLMs can do all sorts of tasks they weren’t trained to do. Next-token prediction seems about as benign and narrow as it gets, but if you don’t think an LLM can become dangerous you aren’t thinking hard enough. CAIS also assumes people won’t build generalist agents in the first place, but that cat is well out of the bag. Narrow agents can also become dangerous on their own because of instrumental convergence (almost any goal is better served by subgoals like acquiring resources and resisting shutdown), and even if you restrict yourself to building only weak, narrow agents/services, the profit incentive to build general agents will be too strong, since they will likely outperform narrow ones.
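For anyone who hasn’t seen instrumental convergence spelled out, here’s a minimal toy sketch (my own illustration, not anything from Gato or any real planner; all the names are made up): a brute-force planner handed several completely unrelated terminal goals converges on the same opening move, grabbing resources, because resources raise the payoff of pursuing any goal.

```python
# Toy illustration of instrumental convergence (hypothetical, simplified).
# A brute-force planner searches over short action sequences; "resources"
# multiply the payoff of pursuing ANY terminal goal, so acquiring them is
# instrumentally useful no matter what the goal is.
import itertools

ACTIONS = ["acquire_resources", "pursue_goal", "idle"]

def payoff(plan, goal_value):
    """Total payoff of a plan; resources scale the value of goal pursuit."""
    resources = 1
    total = 0
    for action in plan:
        if action == "acquire_resources":
            resources += 1          # instrumental step: useful for ANY goal
        elif action == "pursue_goal":
            total += goal_value * resources
    return total

def best_plan(goal_value, horizon=3):
    # Exhaustive search over all action sequences of the given length.
    return max(itertools.product(ACTIONS, repeat=horizon),
               key=lambda plan: payoff(plan, goal_value))

# Three unrelated goals; the optimal plan opens the same way for all of them.
for goal_value in (1, 5, 100):
    print(goal_value, best_plan(goal_value))
# Every optimum front-loads "acquire_resources" before "pursue_goal",
# regardless of what the terminal goal is worth.
```

Same optimal opening move for every goal value. That’s the whole point: dangerous subgoals don’t require dangerous terminal goals.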
Present_Finance8707 t1_j9q91s2 wrote
Reply to comment by CellWithoutCulture in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
I’m well aware of Drexler and have Nanosystems on my shelf, but sure, kiddo. I find his AI arguments lacking in seriousness.
Present_Finance8707 t1_j9q8x1g wrote
Reply to comment by Molnan in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Eliezer’s actual conclusion is that no current approach can work, and that there are none on the horizon that can.
Present_Finance8707 t1_j9myqs2 wrote
Reply to comment by Molnan in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
If you don’t even know who Gwern is, I can’t really take you seriously on alignment. You can’t possibly have a deep understanding of the various arguments in play.
Present_Finance8707 t1_j9my8wl wrote
Reply to comment by NoidoDev in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Like I said, you really, really don’t understand alignment. Imagine thinking a “filter” is what we need to align AIs, or completely lacking any understanding of instrumental convergence. You don’t understand even the utter basics but think you know enough to dismiss Eliezer’s arguments out of hand??? Thankfully I think you’re also too stupid to contribute meaningfully to capabilities research, so thanks for that.
Present_Finance8707 t1_j9mx6m2 wrote
Reply to comment by cwallen in Ramifications if Bing is shown to be actively and creatively skirting its own rules? by [deleted]
This is a joke, right? Some random reporter puts this article out and people treat it like the golden rule? It’s BS.
Present_Finance8707 t1_j9l5tzs wrote
Reply to comment by Molnan in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Two problems: it doesn’t work, and the current models are already way down the agent line with no going back. Yawn. https://gwern.net/tool-ai
Present_Finance8707 t1_j9l3o1u wrote
This thread is completely full of hopium and laymen saying “I disagree with his views because their implications make me uncomfortable, and here are 5 bad reasons AI won’t kill us that people already squashed 30 years ago.”
Present_Finance8707 t1_j9l398v wrote
Reply to comment by beachmike in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Why would someone who wants to solve alignment try to advance AI? Lol. It’s a contradiction in terms.
Present_Finance8707 t1_j9l32gz wrote
Reply to comment by Molnan in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Your link completely fails to align an AGI. You aren’t offering anything interesting here.
Present_Finance8707 t1_j9l2vjp wrote
Reply to comment by NoidoDev in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
You really, really, really don’t understand the alignment problem. You don’t know the field if you’re trying to understand it by watching videos of Eliezer instead of reading his writing. What a joke.
Present_Finance8707 t1_j9l2oq5 wrote
Reply to comment by Melveron in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Any company serious about alignment would not pursue capabilities research, full stop. OpenAI is perhaps the most dangerous company on earth.
Present_Finance8707 t1_j9l2f4b wrote
Reply to comment by tomorrow_today_yes in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
If you think Eliezer reached his conclusions by “extrapolating trends,” you don’t have a single clue about his views.
Present_Finance8707 t1_ja1f835 wrote
Reply to comment by NoidoDev in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
You’re mentally ill. Please remember this conversation when foom starts and you start dissolving into grey goo. Absolute degenerate.