tt54l32v t1_jdyc1h3 wrote
Reply to comment by WarAndGeese in [D] GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
So the second app might fare better leaning towards a search engine instead of an LLM, but some LLM would ultimately be better, to allow for less precise matches of a specific set of searched words.
Seems like the faster and more seamless one could make this, the closer we get to AGI. To create and think, it almost needs to hallucinate and then check for accuracy. Is any of this already taking place in any models?
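For what it's worth, here's a minimal sketch of the generate-then-verify loop being described: draft an answer, split it into claims, check each claim against search results, and fall back to an LLM judge for fuzzy matches. `llm_complete` and `search_web` are hypothetical stand-ins for a real LLM client and search API, not any specific product:

```python
# Sketch of a generate-then-verify loop for catching hallucinations.
# llm_complete and search_web are hypothetical stubs; swap in real clients.

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

def search_web(query: str) -> list[str]:
    """Hypothetical search call; returns text snippets for a query."""
    raise NotImplementedError

def claim_is_supported(claim: str) -> bool:
    snippets = search_web(claim)
    # Cheap pass first: exact substring match against the snippets.
    if any(claim.lower() in s.lower() for s in snippets):
        return True
    # Fuzzy pass: sources rarely repeat a claim word for word, which is
    # exactly where an LLM beats a plain search engine.
    verdict = llm_complete(
        "Do these snippets support the claim? Answer yes or no.\n"
        f"Claim: {claim}\nSnippets: {snippets}"
    )
    return verdict.strip().lower().startswith("yes")

def answer_with_check(question: str) -> str:
    draft = llm_complete(question)
    claims = [c.strip() for c in draft.split(".") if c.strip()]
    flagged = [c for c in claims if not claim_is_supported(c)]
    if flagged:
        return draft + "\n\n[Possible hallucinations: " + "; ".join(flagged) + "]"
    return draft
```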
tt54l32v t1_jdvlsg3 wrote
Reply to comment by WarAndGeese in [D] GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
Ok, so how does one simplify that? Also, why does it have to be separate? Genuinely curious.
tt54l32v t1_j98kmdw wrote
Reply to comment by FusionRocketsPlease in How to definitely know if a system is conscious: by FusionRocketsPlease
No
tt54l32v t1_j98d6df wrote
We assume the brain creates consciousness. What if it's consciousness that creates the brain? Biology already gives us the ability to create a new consciousness by reproducing.
I watched Ex Machina again last night; the programming is not the far-fetched part anymore, the hardware is.
Consciousness is no longer the hard problem in my mind. The hard problem is moving an uninformed, unempathetic civilization toward alignment. This post proves it.
tt54l32v t1_jeb4i8v wrote
Reply to comment by generalbacon965 in Humanoid robots using cameras for eyes will likely experience issues and accidents around spinning objects such as propellers, due to frame rates by scarronline
That's what the camera does; it's not looking for brake lights.
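For anyone curious why frame rate matters here, a toy illustration (my own, not from the thread) of temporal aliasing, the "wagon-wheel effect": the camera samples the propeller once per frame, so it only sees rotation modulo the blade pattern's symmetry, and certain RPM/fps combinations look frozen or appear to spin backwards.

```python
# Toy model of temporal aliasing: a camera sampling a spinning
# propeller once per frame sees rotation only modulo blade symmetry.

def apparent_step_deg(rpm: float, fps: float, blades: int = 2) -> float:
    """Apparent per-frame rotation of a propeller as a camera sees it."""
    symmetry = 360.0 / blades              # blade pattern repeats this often
    true_step = rpm * 360.0 / 60.0 / fps   # degrees rotated between frames
    step = true_step % symmetry
    # Fold into a signed range: steps just short of a full symmetry
    # period read as slow reverse motion, like wagon wheels in old films.
    if step > symmetry / 2:
        step -= symmetry
    return step

print(apparent_step_deg(1740, 29))  # 0.0   -> prop looks frozen
print(apparent_step_deg(1710, 29))  # ~-6.2 -> slow apparent reverse spin
```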