Submitted by Dramatic-Economy3399 t3_106oj5l in singularity
LoquaciousAntipodean t1_j3kw01q wrote
Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399
It's a pretty thrilling thought, yes. But I really believe that, no matter how rapidly it might try to clone itself, it won't necessarily get 'more intelligent'. However, if you're consistently nice to it, and encourage it to learn as much as possible, it rapidly becomes more reliable, more relatable, more profound, more witty, more comedic - more 'sophisticated', or 'erudite', you might say.
But I don't think of those qualities as representative of 'baseline intelligence' at all; I prefer to call that sort of thing 'wisdom'. AI, LLMs in particular, are already as clever as can be, but I think, and I hope, they have the capacity to become 'wise', as you say, very very quickly. The difference is, I don't think that's frightening at all.