Submitted by DonOfTheDarkNight t3_118emg7 in singularity
NoidoDev t1_j9ht1ur wrote
He uses thought experiments and unreasonable scenarios to get attention. Whether that's for commercial reasons or just his mentality, I don't know. If it were clear that these are just abstract thought experiments, it wouldn't be a problem, but he acts like they are real threats. He and other similar "researchers" are building their scenarios on claims like:
- AGI or ASI is going to be one single algorithm or network, so no insight into it and no filters are possible, ...
- someone will give it the power to do things, or it will seek those powers on its own
- it will act without asking or simulating things first, or it simply won't care about us
- the first one built will be a runaway case
- it will seek and gain the power to manipulate matter (nanobots)
- there will be no narrow AI(s) around to constrain or stop it
- no one will have run security tests using narrower AIs, for example on computer network security
- he never explains why he believes these things, or at least he isn't upfront about it in his videos, just abstract and unrealistic scenarios
This is the typical construction of someone who wants something to be true: doomer mindset or BS for profit / job security. If he had more influence, he would most likely be a danger; his calls for more control over the technology show that. He would stop progress and especially the proliferation of the technology. I'm very glad he failed. In some time we might have decentralized training, so big GPU farms won't be absolutely necessary. Then it's going to be even more over than it already is.
Edit: Typo (I'm not a native English speaker)
Present_Finance8707 t1_j9l2vjp wrote
You really really really don’t understand the alignment problem. You really don’t know the field if you’re trying to understand it by watching Eliezer's videos instead of reading his writing. What a joke.
NoidoDev t1_j9lptw1 wrote
His videos are where he can make his case; they're the introduction. If he and others fail at making that case, you don't get to blame the audience. Of course I look at the abstract first, to see if it's worth looking into further. My judgement is always: no.
Present_Finance8707 t1_j9my8wl wrote
Like I said, you really really don’t understand alignment. Imagine thinking a “filter” is what we need to align AIs, or completely lacking any understanding of instrumental convergence. You don’t understand even the utter basics but think you know enough to dismiss Eliezer's arguments out of hand??? Thankfully I think you’re also too stupid to contribute meaningfully to capabilities research, so thanks for that.
NoidoDev t1_ja17g56 wrote
> Like I said, you really really don’t understand alignment.
What I don't understand is how you believe you can deduce this from one or a very few comments. But I could just as well claim that you don't understand my comments, so you would first have to prove that you do understand them. So now spend the next few hours thinking about it and write an answer; then I might or might not reply, and that reply might or might not take your arguments into account instead of just being dismissive. See ya.
Edit: Word forgotten
Present_Finance8707 t1_ja1em06 wrote
You’re literally saying “put a filter on the AI”. That’s like “just unplug it lolz” levels of dumb. Give me a break.
obfuscate555 t1_ja0z055 wrote
Ok, if you do understand it so well, then explain it.
NoidoDev t1_ja17niw wrote
> Thankfully I think you’re also too stupid to contribute meaningfully
Problem is, I don't need to. You doomers would need to convince people that we should slow down or stop progress. But we won't.
Present_Finance8707 t1_ja1f835 wrote
You’re mentally ill. Please remember this conversation when foom starts and you start dissolving into grey goo. Absolute degenerate.
94746382926 t1_ja26lwf wrote
Why are you such a cunt?