sticky_symbols wrote:
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
There's obviously intelligence under some definitions. It meets a weak definition of AGI, since it reasons about a wide range of things almost as well as the average human.
And yes, I know how it works and what its limitations are. It's not that useful yet, but discounting it entirely is as silly as thinking it's the AGI we're looking for.