Submitted by marcandreewolf t3_11wtqby in Futurology
Baprr t1_jd1on7e wrote
The chatbot tends to lie. Have you checked the years for obvious bullshit?
For example, the first result on googling #1 is
>Samuel first wrote a checkers-playing program for the IBM 701 in 1952. His first learning program was completed in 1955 and was demonstrated on television in 1956
Where did 1959 come from?
fox-mcleod t1_jd2kr22 wrote
2024 - industrial design
2033 - industrial design
marcandreewolf OP t1_jd25w8b wrote
It is not lying (it even cannot lie, unless it were conscious 😅), but it sometimes grabs the wrong info, especially info that is repeated often online (by humans), or it just hallucinates nonsense. So: yes and no 😁
Baprr t1_jd28c8h wrote
It's just wrong instead of lying, then. I mean, if you can't trust it to write the very easy-to-look-up history of automation, why would you believe its predictions? This info is pretty much useless.
alex20_202020 t1_jd2c4m3 wrote
I think it is not useless; it might represent the average dates at which people have publicly written/predicted that this or that might happen.
Baprr t1_jd2f3bj wrote
Not really. If you read what people predicted in the past about 2023, you might believe that we already have colonies in space, fully autonomous self-driving cars, and a cure for cancer. You have to filter the output of the chatbot, or it's - well, not gibberish, but extremely suspect information. It doesn't check or provide sources.
This list might be used to look up current projects that are being developed, and with some effort it could be turned into maybe 20 points of exciting things to look forward to.
But right now it's low-effort, useless content.