liqui_date_me
liqui_date_me t1_je2h8vu wrote
Reply to [D] With ML tools progressing so fast, what are some ways you've taken advantage of them personally? by RedditLovingSun
I’ve used ChatGPT (GPT-4) for very specific relationship and career advice; it’s surprisingly good at understanding corporate jargon
liqui_date_me t1_jdt531o wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
You would think that GPT would have discovered a general purpose way to multiply numbers, but it really hasn’t, and it isn’t accurate even with chain-of-thought prompting.
I just asked GPT4 to solve this: 87176363 times 198364
The right answer should be 17292652070132 according to Wolfram Alpha.
According to GPT4 the answer is 17,309,868,626,012.
This is the prompt I used:
What is 87176363 times 198364? Think of the problem step by step and give me an exact answer.
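For anyone who wants to double-check both figures, the multiplication takes exact integer arithmetic that Python does natively, no model involved:

```python
# Exact integer multiplication, as Wolfram Alpha would compute it
a = 87176363
b = 198364
product = a * b
print(product)  # 17292652070132, matching the Wolfram Alpha answer

# GPT-4's reported answer, for comparison
gpt4_answer = 17_309_868_626_012
print(gpt4_answer - product)  # off by about 17.2 billion
```

The error isn't a rounding issue in the last digit; the model's answer is wrong by more than ten billion.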
liqui_date_me t1_jds3b6q wrote
Reply to comment by ngildea in [D] GPT4 and coding problems by enryu42
I would say it's controversial mainly among folks who aren't directly involved in programming and who get impressed by cute demos on Twitter. People who actually know how to code see it as a superpower that makes them more efficient, while also lamenting how often it makes silly mistakes.
https://www.reddit.com/r/cscareerquestions/comments/1226hcn/im_worried_about_ai_taking_our_jobs/
I highly doubt that software engineering jobs will become obsolete. There's going to be a lot of disruption and there might be some wage deflation too (imagine the price of writing the boilerplate components of an iOS app goes from 50,000 dollars to 50 dollars), but so much of software engineering is testing, QA and human collaboration. I think we're just going to have to re-orient our careers around correcting code from LLMs.
liqui_date_me t1_jdrd9dx wrote
Reply to comment by enryu42 in [D] GPT4 and coding problems by enryu42
One could argue that even standardized tests are somewhat boilerplate - if you practice enough SAT tests you’ll eventually do quite well at them, the questions are quite similar to each other from exam to exam. Ditto for AP exams.
I think a serious test of GPT-4's intelligence would be one of the highly competitive entrance exams or contests, like the IIT-JEE, the Gaokao, or the International Math Olympiad, where the questions are written by domain experts and are designed to be intentionally difficult and specialized.
liqui_date_me t1_jdr8516 wrote
Reply to [D] GPT4 and coding problems by enryu42
This comment about GPT-4’s limited abilities in solving arithmetic was particularly interesting: https://www.reddit.com/r/singularity/comments/122ilav/why_is_maths_so_hard_for_llms/jdqsh5c/?utm_source=share&utm_medium=ios_app&utm_name=iossmf&context=3
Controversial take: GPT-4 is probably good for anything that needs lots of boilerplate code or text, like ingesting a book and writing an essay, or drafting rental contracts. There’s a lot of value in making that area of the economy more efficient for sure.
But for some of the more creative stuff it's probably not as powerful and might actually hinder productivity. It still makes mistakes, and programmers are going to have to go back and fix those mistakes retroactively.
liqui_date_me t1_jdr7pnr wrote
Reply to comment by RadioFreeAmerika in Why is maths so hard for LLMs? by RadioFreeAmerika
Tough to say, probably 10-20 years at the very least. Modern LLMs are transformers, which are architected to spend a fixed amount of computation on each token they emit, no matter how hard the underlying problem is. Exact multiplication of large numbers needs an amount of work that grows with the length of the operands, so unless we get a radically different neural network architecture, it's unlikely we'll ever get GPT-style models to perform math calculations exactly.
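As a rough illustration of that mismatch (not a claim about GPT-4's internals): schoolbook long multiplication needs a number of single-digit operations that grows with the operand lengths, while a fixed-depth network does the same work per output token regardless of input size.

```python
def schoolbook_digit_ops(a: int, b: int) -> int:
    """Count the single-digit multiplications in long multiplication."""
    return len(str(a)) * len(str(b))

# The work grows with operand length...
print(schoolbook_digit_ops(87, 19))            # 2 x 2 = 4 digit ops
print(schoolbook_digit_ops(87176363, 198364))  # 8 x 6 = 48 digit ops
# ...but a fixed-depth transformer spends constant compute per emitted
# token, so exact arbitrary-length multiplication can't fit into a
# single forward pass.
```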
liqui_date_me t1_jdr7fob wrote
Reply to comment by CommunismDoesntWork in Why is maths so hard for LLMs? by RadioFreeAmerika
All GPT does is next-token prediction, where tokens are subword pieces (often whole words, but not always). The lag you see is probably network/bandwidth/queuing issues on the server side rather than the model itself.
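The generation loop itself is simple. Here's a toy sketch of autoregressive decoding where a hard-coded bigram table stands in for the neural network (real models use learned subword tokens and a transformer, of course):

```python
# Toy stand-in for a language model: maps the last token to the next one.
bigram = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def generate(prompt: str, max_tokens: int = 3) -> str:
    """Greedy autoregressive decoding: emit one token per step."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = bigram.get(tokens[-1])
        if nxt is None:  # no known continuation: stop generating
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```

Each pass through the loop is one forward pass of the model; the per-token cost is the same whether the continuation is trivial or requires real reasoning.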
liqui_date_me t1_jed117i wrote
Reply to comment by CrelbowMannschaft in When will AI actually start taking jobs? by Weeb_Geek_7779
Interest rates. They can’t borrow as cheaply as before to buy back their stock, and they hired too many people for ambitious projects that never ended up making any revenue.