Comments


[deleted] t1_j3jrepv wrote

You’re asking AI to do things humans can’t even do. Answer the ultimate question of life? Are you 12?

27

Nightshade_Ranch t1_j3jtc29 wrote

I can absolutely use it to generate answers to interview questions.

Here we go, the only prompt I gave was [this is a job interview] and the first question:

Interviewer: "what would you consider to be one of your strengths?"

Candidate: "my ability to make people laugh."

Interviewer: "well, we could use someone like that here at the office. But what about weaknesses?"

Candidate: "I don't take much pride in my work."

Interviewer: "you're right on target for this position! Would you like to see the rest of our floor plan?"

Candidate: "Sure!" *walks into a wall*

The moral of the story? Don't think so hard when answering questions, and always look where you're going.

3

SoylentRox t1_j3jtjb1 wrote

> Machines cannot learn how to do something without clear, replicable examples.

Wrong. Reinforcement learning can and does let machines figure out how to do something on their own, often finding approaches better than the ones humans already knew.

> The real advancements in AI haven't been in "creative thinking," but in accuracy and efficiency.

Some of the solutions to RL environments are pretty creative, like box surfing. https://openai.com/blog/emergent-tool-use/
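To make the "no examples needed" point concrete, here's a minimal tabular Q-learning sketch. The environment is a toy corridor I made up purely for illustration (it has nothing to do with the linked OpenAI work): the agent gets only a reward signal, never a demonstration, and still learns the right behavior.

```python
import random

random.seed(0)

# Tiny 1-D corridor: states 0..4, start at 0, reward only at the goal.
# No worked example is ever shown; learning comes purely from reward.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            # Q-learning update: nudge toward reward + discounted future value.
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy should always step right toward the goal.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

Same principle as the emergent tool use in the link, just shrunk to a few lines: the environment defines rewards, and the behavior is discovered, not copied.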

> Answer the Ultimate Question of Life, the Universe and Everything

Humans can't either.

> Solve Annoying Interview Puzzles

People on r/csMajors have used ChatGPT to cheat on interview assessments. It apparently works amazingly well.

> Write Bug-Free Software

Neither can humans.

15

johntwoods t1_j3ju6o2 wrote

I tried to get ChatGPT to write me a quick synopsis of a movie that was exactly 500 words long. It couldn't do it. I would say, hey, that's 384 words and this needs to be 500. Then the bot would apologize, say "Oh okay, here is one that's 500 words!", and it would be 470 or something.
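The funny part is that checking the model's claim is a deterministic one-liner. A quick sketch (the 470-word string is a stand-in for the model's reply):

```python
# Word counting is trivial to verify programmatically, which is what makes
# the model's confident miscounts so striking.
def word_count(text: str) -> int:
    return len(text.split())

claimed = 500
synopsis = "word " * 470  # stand-in for a reply the model claims is 500 words
actual = word_count(synopsis)
print(f"claimed {claimed}, actual {actual}")  # claimed 500, actual 470
```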

30

-Kerrigan- t1_j3jwxpw wrote

There is no meaningful "bug-free" software, no matter what genius wrote it.

4

johntwoods t1_j3k0ebm wrote

It is so definitive, though. I tell it, "Hey bro, that's only 387 words and it needs to be 500." And instead of saying, "Oh, ok, I will attempt it again, I am sorry." It states, with authority, "My apologies, here is a version that is 500 words." And then it is 402.

It is like it can't count. Which is fucking hilarious to me.

11

serifsanss t1_j3k10dx wrote

Stop telling the AI what it sucks at like you’re setting it up as a challenge…. Be nice to the AI and tell it that it’s special just the way it is.

12

NotASuicidalRobot t1_j3k1b1e wrote

Yeah, that's fair. I find it likes repeating the words in your question back at you more than anything else, like a student answering those "write your answers in full sentences" type questions. So it's probably just compulsively doing that.

1

alwaysuseswrongyour t1_j3k3ace wrote

I tried to have it make a meal plan with nutrition facts: 200 grams of protein and 2,000 calories. It would do the same thing: "Okay, here's a meal plan with 200 grams of protein and 2,000 calories." It had all the nutrition facts, but the numbers did not add up to anything close to what they were supposed to. This was not the first time I used it, and it simply could not add, something I would expect an AI to do with ease.
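For reference, the bookkeeping the model fumbles is just a couple of sums. A sketch with made-up meal numbers (not from any actual ChatGPT output), using the standard Atwater factors of 4 kcal/g for protein and carbs and 9 kcal/g for fat:

```python
# Hypothetical meal plan, purely for illustration.
meals = [
    {"protein_g": 40, "carbs_g": 50, "fat_g": 15},
    {"protein_g": 60, "carbs_g": 45, "fat_g": 20},
    {"protein_g": 50, "carbs_g": 60, "fat_g": 18},
]

# Total protein is a plain sum; calories follow from the macros.
total_protein = sum(m["protein_g"] for m in meals)
total_kcal = sum(4 * m["protein_g"] + 4 * m["carbs_g"] + 9 * m["fat_g"]
                 for m in meals)
print(total_protein, "g protein,", total_kcal, "kcal")  # 150 g protein, 1697 kcal
```

A plan whose stated totals don't survive this check simply doesn't add up, which is exactly the failure described above.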

4

luckymethod t1_j3k4i0h wrote

No. 1 is not true; you're simply not aware of the techniques used for that. You seem to be aware only of a small subset called "supervised learning," which is by no means the whole of artificial intelligence.

AI has already made discoveries in the field of math, for example, by proving previously unproven theorems.

It would be useful to have a little bit of knowledge before pontificating about a subject, any subject.

2

Crit0r t1_j3k4l3t wrote

ChatGPT is still just a clever chatbot that mimics memory and intelligence.

It's really good at explaining and summarizing things you would normally search on google yourself.

2

lilsasuke4 t1_j3k6ug7 wrote

How much did you think about this before posting it?

12

Str8kush t1_j3kan12 wrote

I'd like to see an AI try to grasp the concept of the pinch and roll when your balls itch. Some things humans still need to do themselves.

2

freshgrilled t1_j3keha6 wrote

>Machines cannot learn how to do something without clear, replicable examples

This whole thread is stupid; it's like OP hasn't been looking into this at all but wants to post some facts anyway. "Machines cannot learn how to do something without clear, replicable examples": that's absolutely not true. Over the last 10 years or so, some of the most interesting AI examples have been in areas where AIs are given basic rules and then allowed to train themselves to find a solution in the most efficient way (or at least well enough to hit a goal, depending on the project).

Perhaps my favorite, though probably not the most well known, is where they loaded up simple small robots with a very basic AI that had the goal of moving forward but was not instructed on how to move its appendages. Each one would then try out all sorts of ways of moving until it ended up with something that worked. In the end, some settled on very normal-looking gaits while others moved in completely ridiculous ways, but they still got to the goal.
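That "try movements until one works" loop can be sketched in a few lines of hill climbing. Everything here is invented for illustration (the fitness function is a stand-in, not real robot physics): a gait is just a vector of joint amplitudes, random mutations are proposed, and only improvements are kept.

```python
import random

random.seed(1)

def distance_traveled(gait):
    # Stand-in physics: forward progress peaks when each amplitude is near
    # 0.5 and degrades away from it; maximum possible score is 4.0.
    return sum(1.0 - abs(a - 0.5) for a in gait)

# Start from a random gait; no example gait is ever provided.
gait = [random.random() for _ in range(4)]
best = distance_traveled(gait)
for _ in range(2000):
    # Mutate each joint amplitude slightly, clamped to [0, 1].
    candidate = [min(1.0, max(0.0, a + random.gauss(0, 0.05))) for a in gait]
    score = distance_traveled(candidate)
    if score > best:  # keep only mutations that move the robot farther
        gait, best = candidate, score

print(round(best, 2))  # approaches the maximum of 4.0
```

Whether the optimizer lands on a "normal" gait or a ridiculous one depends entirely on which mutations happened to work, which is exactly why those robots looked so varied.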

The rest of the bullet points in OP's post just look like shower thoughts.

I'm having a heck of a time pulling up the articles that covered the research on that project as it was probably 10 or more years ago, so if anyone has the links, please feel free to add. But there's all sorts of stuff out there. Another one is where a robot arm teaches itself to throw a ball at a target, starting off with some basic algorithms but learning from its mistakes each time. This is not the robot arm I was thinking of, but it's an example: Link

Edit: Sorry if I'm a little reactive on this one, but I've been following this stuff for years, and OP's post feels like it's from someone who hasn't looked into it at all.

3

Plokmijn27 t1_j3kjk50 wrote

What about solving how much wood a woodchuck could chuck? Or the cure for cancer?

1

Gilded-Mongoose t1_j3klba6 wrote

That's because we haven't had true AI yet, just new software learning and production techniques that we're calling AI and getting all hyped up over. I'm still sleeping on it.

Wake me up when it can replicate and codify genetic code, isolate specific traits and features, and transpose it into programming features or create new versions of life.

1

moki69 t1_j3km9t2 wrote

Okay, after fiddling around, I found that wording the prompt like "Write an essay about a cat. This should be between 250 and 300 words. Report the word count after producing the essay." produces accurate word-count responses, typically at the uppermost limit of the range I provide. I've also found that you can ask ChatGPT to write "X" words for a prompt and, after it finishes answering, tell it simply to "continue on with more words" or "continue with another page," and it will keep spitting out information.
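That check-and-retry dance is easy to automate. A sketch with a stand-in `ask_model()` in place of any real chat API (no specific API or library is assumed; the stub just pretends the model obeys on its second try):

```python
# Stub model: returns 240 words on the first call, 280 on later calls,
# mimicking the "claims 500, delivers 470" behavior described above.
def ask_model(prompt: str) -> str:
    ask_model.calls += 1
    n = 240 if ask_model.calls == 1 else 280
    return "word " * n
ask_model.calls = 0

def essay_in_range(lo: int, hi: int, max_tries: int = 5) -> str:
    prompt = f"Write an essay about a cat, between {lo} and {hi} words."
    text = ""
    for _ in range(max_tries):
        text = ask_model(prompt)
        n = len(text.split())
        if lo <= n <= hi:
            return text
        # Feed the measured count back, exactly like re-prompting by hand.
        prompt = f"That was {n} words; it must be between {lo} and {hi}. Try again."
    return text

essay = essay_in_range(250, 300)
print(len(essay.split()))  # 280 with the stub above
```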

1