Delioth
Delioth t1_jcvlr9f wrote
Reply to comment by grafknives in Explore the World's Parks like never before with TravelerMap.net: Your ultimate guide to crowd-sourced reviews, photos, and awesome interactive maps. by thecaspg
When you're giving people information that may be used for, e.g., travel planning, it's better to say nothing than to say something factually incorrect.
Delioth t1_jcqpwop wrote
Reply to comment by grafknives in Explore the World's Parks like never before with TravelerMap.net: Your ultimate guide to crowd-sourced reviews, photos, and awesome interactive maps. by thecaspg
ChatGPT being incredibly confident and oftentimes plainly wrong, with little way to tell which is which, is certainly part of why.
Delioth t1_irkiiq6 wrote
Reply to comment by h0tpotamus in Amazon's Scout robot appears to have made its last delivery by MicroSofty88
Good thing booby traps (and, more broadly, "doing stuff with the express purpose of causing harm") are illegal.
Delioth t1_jcydx79 wrote
Reply to comment by grafknives in Explore the World's Parks like never before with TravelerMap.net: Your ultimate guide to crowd-sourced reviews, photos, and awesome interactive maps. by thecaspg
You can't, aside from finding corroborating sources. But we've been using things like MapQuest and Google Maps for some time now, long enough to trust that they're usually correct. They sometimes miss a closed road or give a weird direction, but they're by and large correct. ChatGPT and the like have precisely the opposite track record. There are times it's correct, and the tools are certainly cool, but it's also as confidently incorrect as that crass uncle everyone seems to have.
But I need you to recognize that "this map says there's a road to turn left here, but there isn't one" (so you go another hundred feet and turn left) is different from "experience nature's beauty with the falls and oaks at parkname" when the park has neither of those things. We have a track record showing that maps are usually pretty correct. There's no such record for AI chatbots, and the evidence we do have shows their flaws.
ETA: I mean, I asked ChatGPT to tell me about a state park in my hometown, and it's not even close to accurate: it claims the park is on a lake (it's not), that it has a nature center (it doesn't), that it has showers (it doesn't), and that it's good for bird watchers because of the waterfowl and herons (which would probably be accurate if the park were on a lake). GPT got a few things right (why it was named, its size, the fact that it has hiking and camping), but often in the same sentences where it got things wrong.
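
For what it's worth, the kind of spot check described above is easy to mechanize. Below is a minimal sketch in plain Python that compares claims pulled from a generated park description against a hand-verified fact sheet. All of the data (the park attributes and the claim list) is hypothetical, made up here just to illustrate the audit, not taken from any real park or any real ChatGPT output.

    # Minimal sketch: cross-check claims from a generated park description
    # against a hand-verified fact sheet. All data here is hypothetical.

    # Ground truth, verified by someone who actually knows the park.
    GROUND_TRUTH = {
        "on_lake": False,
        "nature_center": False,
        "showers": False,
        "hiking": True,
        "camping": True,
    }

    # Claims extracted (by hand, in this sketch) from the chatbot's description.
    generated_claims = {
        "on_lake": True,        # "situated on a scenic lake" -- wrong
        "nature_center": True,  # wrong
        "showers": True,        # wrong
        "hiking": True,         # right
        "camping": True,        # right
    }

    def audit(claims, truth):
        """Return (correct, incorrect) lists of claim names."""
        correct = [k for k, v in claims.items() if truth.get(k) == v]
        incorrect = [k for k, v in claims.items() if truth.get(k) != v]
        return correct, incorrect

    if __name__ == "__main__":
        right, wrong = audit(generated_claims, GROUND_TRUTH)
        print("correct:  ", right)
        print("incorrect:", wrong)
        # For this hypothetical park: 2 right, 3 wrong -- roughly the
        # hit rate the comment above describes.

The point of the sketch isn't the code; it's that a verification pass like this requires a trusted source of ground truth, which is exactly what's missing when a chatbot's description is the only source you have.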