Submitted by QuicklyThisWay t3_10wj74m in news
Rulare t1_j7p8sut wrote
Reply to comment by No-Reach-9173 in ChatGPT's 'jailbreak' tries to make the A.I. break its own rules, or die by QuicklyThisWay
> When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI.
There's no way we believe it is sentient when it does make that leap, imo. Not for a while anyway.