Submitted by therealsam44 t3_1070j67 in Futurology
[removed]
You need to threaten it, like "this better be 500 words or else you're not going to like it". Then it's very careful to do what you ask it to.
I will try that.
I will be right back.
(If he doesn't come back it's obviously because HAL killed him...)
I asked it for a 100-word synopsis of Buffalo '66 and it gave me 91 words. I was aggressive with it, too. Threatening, even.
Lol. Strange movie...
How about a nice game of chess?
I could have gone with War Games, but I wanted to give it a challenge. :)
I mean, it's trying to get close... I guess? Maybe it can't double back and check the essay the way humans do, adding and deleting phrasing until the count comes out right.
It is so definitive, though. I tell it, "Hey bro, that's only 387 words and it needs to be 500." And instead of saying, "Oh, ok, I will attempt it again, I am sorry," it states, with authority, "My apologies, here is a version that is 500 words." And then it is 402.
It is like it can't count. Which is fucking hilarious to me.
I tried to have it make a meal plan with nutrition facts: 200 grams of protein and 2,000 calories. It did the same thing, "Okay, here's a meal plan with 200 grams of protein and 2,000 calories." It had all the nutrition facts, but the numbers did not add up to anything close to what they were supposed to. I can't remember the other time, but this was not the first time I used it and it simply could not add, something I would expect an AI to do with ease.
That's a good example.
Math/counting... Not AI's strong suit I guess.
That's a sore spot for chatGPT specifically rather than AI in general. See this thread for more.
Totally. I understand.
I'm not even asking for simple algebra.
I am asking for GPT to simply count the number of words it is outputting.
Yeah, that's fair. I find it likes repeating the words in your question back more than anything else; it's like a student answering the "write your answers in full sentences" type of question. So it's probably just compulsively doing that.
Add "think step by step" and its output magically becomes more accurate.
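If it helps, here's roughly what that prompt pattern looks like in code. This is just a sketch, assuming the openai Python package with its ChatCompletion-style interface and a gpt-3.5-turbo model; the only part that matters is the "think step by step" instruction, and the word count at the end is a local sanity check rather than anything the model reports.

```python
# Sketch only: assumes the openai package (ChatCompletion-style interface)
# and an API key already configured in the environment.
import openai

prompt = (
    "Write a synopsis of Buffalo '66 that is exactly 100 words long. "
    "Think step by step: count the words as you write, "
    "then give only the final synopsis."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

synopsis = response.choices[0].message.content
print(synopsis)
print(len(synopsis.split()), "words")  # check the count ourselves instead of trusting it
```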
Ah cool. Thanks!
Why don’t you try saying something like "give me 530 words"? Then it might give you 500.
Good idea actually.
Okay, after fiddling around, I found wording the prompt like “Write an essay about a cat. This should be between 250 and 300 words. Report the word count after producing the essay.” to produce accurate word count responses, typically at the uppermost limit of the ranges I would provide. I’ve also found that you can ask ChatGPT to write out “X” amount of words for a prompt, and after it finishes answering, tell it simply to “continue on with more words” or “continue with another page” and it will simply keep spitting out information.
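If anyone wants to check those reported counts without trusting the bot, a tiny helper like this works; the 250-300 range just mirrors the prompt above, and the pasted essay text is obviously a placeholder.

```python
# Minimal check of an essay against the word-count range given in the prompt.
def check_length(essay: str, low: int = 250, high: int = 300) -> str:
    n = len(essay.split())
    status = "within range" if low <= n <= high else "out of range"
    return f"{n} words ({status})"

print(check_length("Cats are small, opinionated roommates ..."))  # paste the real essay here
```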
Excellent work!
I should just be asking you instead of GPT.
Go humans!
You’re asking AI to do things humans can’t even do. Answer the ultimate question of life? Are you 12?
The answer is 42 obviously
Obviously...everyone knows that much. Now for the ultimate question though...hold please...
It's a Hitchhiker's Guide to the Galaxy reference/joke.
>Machines cannot learn how to do something without clear, replicable examples
This whole thread is stupid; it's like OP hasn't been looking into this at all but wants to post some facts on it. "Machines cannot learn how to do something without clear, replicable examples": that's absolutely not true. Over the last 10 years or so, some of the most interesting AI examples have been cases where AIs are given basic rules and then allowed to train themselves to find a solution in the most efficient way (or at least well enough to hit a goal, depending on the project).
Perhaps my favorite, though probably not the most well known, is one where they loaded up simple small robots with a very basic AI that had the goal of moving forward but was not instructed on how to move its appendages. Each one would then try out all sorts of ways of moving until it ended up with something that worked. In the end, some used very normal-looking modes of locomotion while others used completely ridiculous-looking ones, but they still got to the goal.
The rest of the bullet points in OP's post just look like shower thoughts.
I'm having a heck of a time pulling up the articles that covered the research on that project as it was probably 10 or more years ago, so if anyone has the links, please feel free to add. But there's all sorts of stuff out there. Another one is where a robot arm teaches itself to throw a ball at a target, starting off with some basic algorithms but learning from its mistakes each time. This is not the robot arm I was thinking of, but it's an example: Link
Edit: Sorry if I'm a little reactive on this one, but I've been following this stuff for years, and OP's post feels like it's from someone who hasn't looked into it at all.
> Machines cannot learn how to do something without clear, replicable examples.
Wrong. Reinforcement learning can and does let machines work out how to do something, and they often find a way better than any the humans already knew.
> The real advancements in AI haven’t been in “creative thinking,” but in accuracy and efficiency.
Some of the solutions to RL environments are pretty creative, like box surfing. https://openai.com/blog/emergent-tool-use/
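For anyone who hasn't seen reinforcement learning up close, here's a toy sketch of the idea: tabular Q-learning on a six-cell corridor. It's nowhere near the hide-and-seek agents in that link, just the smallest illustration of an agent that is never shown a correct example and still learns a working behavior from reward alone.

```python
import random

# Toy RL: an agent on a 6-cell corridor learns to walk right to the goal cell.
# It is never shown a "correct" move; the only feedback is +1 for reaching the end.
N_STATES = 6                    # cells 0..5, goal is cell 5
ACTIONS = [-1, +1]              # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:                     # explore
            action = random.choice(ACTIONS)
        else:                                             # exploit (random tie-break)
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy action in every non-goal cell is +1 (step right).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

After a few hundred episodes the greedy policy is "always step right," even though nothing ever told the agent which direction was correct.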
> Answer The Ultimate Question of Life, the Universe and Everything
Humans can't either.
> Solve Annoying Interview Puzzles
People on r/csMajors have used ChatGPT to cheat on interview assessments. It apparently works amazingly well.
> Write Bug-Free Software
Neither can humans.
Nah, don't bother. OP's probably someone who can't even fathom the concept of AI.
Really, how hard is it to write "forty-two..."?
I keep trying to warn people how close we are to the end of life as we know it. If certain people get possession of more advanced tech, we are screwed.
Stop telling the AI what it sucks at like you’re setting it up as a challenge…. Be nice to the AI and tell it that it’s special just the way it is.
We are going to add a silver stripe to the unity flag just for them, but we aren't adding a letter.
How much did you think about this before posting it?
There is no "bug-free" software (that is meaningful), no matter what genius wrote it.
3 is nonsense.
That's not a problem to solve, it's a Douglas Adams joke.
I can absolutely use it to generate answers to interview questions.
Here we go. The only prompt I put in was [this is a job interview], and the first question:
Interviewer: "what would you consider to be one of your strengths?"
Candidate: "my ability to make people laugh."
Interviewer: "well, we could use someone like that here at the office. But what about weaknesses?"
Candidate: "I don't take much pride in my work."
Interviewer: "you're right on target for this position! Would you like to see the rest of our floor plan?"
Candidate: "Sure!" walks into a wall
The moral of the story? Don't think so hard when answering questions, and always look where you're going.
No. 1 is not true; you're simply not aware of the techniques used for that. You seem to be aware only of a small subset called "supervised learning," which is by no means the whole of artificial intelligence.
AIs have already made discoveries in the field of math, for example by proving previously unproven theorems.
It would be useful to have a little bit of knowledge before pontificating about a subject, any subject.
ChatGPT is still just a clever chatbot that mimics memory and intelligence.
It's really good at explaining and summarizing things you would normally search on google yourself.
I’d like to see an AI try to grasp the concept of the pinch and roll when your balls itch. Somethings humans still need to do themselves
Top-tier AI isn't online chatting with us. It is flying jet fighters for the Chinese military. Anything that powerful is not a consumer product.
https://www.popsci.com/ai-pilot-beats-air-combat-expert-in-dogfight/
What about solving how many chucks a woodchuck can chuck? Or the cure for cancer?
That’s because we haven’t had true AI yet, just new software learning and production techniques that we’re calling AI and getting ourselves all hyped up over. I’m still sleeping on it.
Wake me up when it can replicate and codify genetic code, isolate specific traits and features, and transpose them into programming features or create new versions of life.
johntwoods t1_j3ju6o2 wrote
I tried to get chatGPT to write me a quick synopsis of a movie that was exactly 500 words long. It couldn't do it. And I would say, hey, that's 384 words and this needs to be 500. Then the bot would apologize and say "Oh okay here is one that's 500 words!" And it would be 470 or something.