Submitted by AutoModerator t3_122oxap in MachineLearning
fishybird t1_jdtf6jh wrote
Anyone else bothered by how often LLMs are being called "conscious"? In AI-focused YouTube channels, and even in this very sub, comments are getting dozens of upvotes for saying we're getting close to creating consciousness.
I don't know why, but it seems dangerous to have a bunch of people running around thinking these things deserve human rights simply because they behave like a human.
pale2hall t1_jdva40w wrote
Great point! I actually really enjoy AIExplained's videos on this. There are a bunch of different ways to measure 'consciousness', and many of them are passed by GPT-4, which really just means we need new tests / definitions for AI models.
fishybird t1_jdvy0h1 wrote
Well yeah, that's the whole problem! Why are we even calling them "tests for consciousness"? Tests for consciousness don't exist, and the only reason we're using the word "consciousness" is pure media hype. If an AI reporter so much as uses the word "conscious", I immediately know not to trust them. It's really sad to see that anyone, much less "experts", is seriously discussing whether or not transformers can be conscious.