Ok_Faithlessness4197 t1_jecguf5 wrote
Reply to comment by acutelychronicpanic in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
It's worth talking about, but I'm also worried. The rate at which AI has advanced means that whoever finds the next significant performance improvement could well develop AGI. Many people are researching it, and I'm concerned because (1) AI is currently unaligned, and (2) a malicious party could develop AGI. If high-performing models hadn't already been publicly released, I would have fully supported regulation, at least until AI could be aligned or a plan for public safety developed.
Ok_Faithlessness4197 t1_je6jzwk wrote
The tipping point will be when AI develops better processors. That alone could easily increase performance 100-1000x, creating a feedback loop that enables far superior intelligence.
Ok_Faithlessness4197 t1_je6izy6 wrote
AI is already too far advanced; the letter won't stop anything. ChatGPT-level models have been adapted to run on consumer-grade hardware. Even in the US, it is now almost impossible to block its growth.
Ok_Faithlessness4197 t1_jdz68d9 wrote
Reply to comment by norby2 in Singularity is a hypothesis by Gortanian2
I think it's unlikely Alzheimer's won't be solved.
!Remindme 10 years
Ok_Faithlessness4197 t1_jdz5m2l wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
I just read the second article you linked, and it does not provide any scientific basis for the bounds it places on an intelligence explosion. Given the recent uptrend in AI investment, I'd give it 5-10 years before an ASI emerges. In particular, once AI takes over microprocessor development, it will almost certainly kickstart this explosion.
Ok_Faithlessness4197 t1_jdu12qm wrote
Reply to comment by Ok_Tip5082 in Why is maths so hard for LLMs? by RadioFreeAmerika
I make no claims about sentience. I will say, however, that this is far ahead of what was previously achievable by AI standards. In its current form, it has to be given enough time to actually work through the math step by step. In the future, once it's linked with WolframAlpha (a computational math engine), it won't make the simple mistakes it makes now.
Ok_Faithlessness4197 t1_jdsskog wrote
Reply to comment by 0382815 in Why is maths so hard for LLMs? by RadioFreeAmerika
I absolutely agree: its multiplication algorithm is very slow, very inefficient, and very different from the way a calculator would handle it. I think it also differs from how you're characterizing it, though. It's more than just a really good text predictor. It can use logic and solve novel problems in many unprecedented ways. Here, I would argue, it has a greater-than-superficial understanding of the math algorithm it used to multiply the numbers. Can I ask how you'd define an algorithm, and what you'd consider "running a multiplication algorithm"?
Ok_Faithlessness4197 t1_jdsqqgg wrote
Reply to comment by 0382815 in Why is maths so hard for LLMs? by RadioFreeAmerika
Alright, go ahead and ignore the multiplication algorithm it just demonstrated.
Ok_Faithlessness4197 t1_jdsl2be wrote
Reply to comment by 0382815 in Why is maths so hard for LLMs? by RadioFreeAmerika
Input: Solve 343 x 875 with long multiplication.
Output (ChatGPT): Sure! Here's how to solve 343 x 875 with long multiplication:
3 x 5 = 15. Write down 5, carry-over 1
3 x 7 = 21, plus the carried-over 1 = 22. Write down 2, carry-over 2
3 x 8 = 24, plus the carried-over 2 = 26. Write down 6, carry-over 2
3 x 5 = 15, plus the carried-over 2 = 17. Write down 7, carry-over 1
Add the carried-over 1 to the result of 3 x 4 = 12, which gives 13. Write down 3, carry-over 1
4 x 5 = 20. Write down 0, carry-over 2
4 x 7 = 28, plus the carried-over 2 = 30. Write down 0, carry-over 3
4 x 8 = 32, plus the carried-over 3 = 35. Write down 5, carry-over 3
4 x 5 = 20, plus the carried-over 3 = 23. Write down 3, carry-over 2
Add the carried-over 2 to the result of 3 x 8 = 24, which gives 26. Write down 6, carry-over 2
Add the carried-over 2 to the result of 3 x 3 = 9, which gives 11. Write down 1, carry-over 1
Add the carried-over 1 to the result of 3 x 8 = 24, which gives 25. Write down 5
So, the final result is 300125.
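If anyone wants to check that final figure, here's a quick sketch of the same digit-by-digit procedure (my own code, not the model's output):

```python
def long_multiply(a: int, b: int) -> int:
    """Digit-by-digit long multiplication: sum the shifted partial products."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * (10 ** place)  # partial product shifted by place value
        print(f"{a} x {digit} x 10^{place} = {partial}")
        total += partial
    return total

print(long_multiply(343, 875))  # partials 1715, 24010, 274400 -> 300125
```

The intermediate steps the model printed are muddled, but the partial products do sum to 300125.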
Ok_Faithlessness4197 t1_jdrt7xy wrote
Reply to comment by 0382815 in Why is maths so hard for LLMs? by RadioFreeAmerika
That's not quite correct. While all it does is guess what the next token will be, it can intelligently infer that an equation (even one outside its training set) needs to be calculated, and then calculate it. The problem is its inability to use its understanding of mathematics to answer prompts efficiently and accurately. Once a calculator is implemented (probably in GPT-4.1, given the recent paper by Microsoft demonstrating its capability to use one), this problem will hopefully be resolved.
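To be clear about what I mean by "a calculator is implemented": roughly, the model decides where exact arithmetic is needed and a plain program does the computing. A toy sketch of that idea; the CALC() marker is just a convention I made up for illustration, not anything GPT actually emits:

```python
import re

def answer_with_calculator(model_reply: str) -> str:
    """Toy illustration: the model flags spots that need exact arithmetic,
    and deterministic code evaluates them. Hypothetical CALC(<expr>) markers."""
    def evaluate(match: re.Match) -> str:
        expression = match.group(1)
        # Only evaluate plain arithmetic expressions.
        if not re.fullmatch(r"[\d\s+\-*/().]+", expression):
            return match.group(0)
        return str(eval(expression))  # acceptable here: input is vetted, toy example
    return re.sub(r"CALC\(([^)]*)\)", evaluate, model_reply)

print(answer_with_calculator("343 x 875 is CALC(343 * 875)."))  # -> "343 x 875 is 300125."
```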
Ok_Faithlessness4197 t1_jdrrdia wrote
Reply to comment by skob17 in Why is maths so hard for LLMs? by RadioFreeAmerika
Yes it can; you just need to prompt it for a chain of thought. As another user mentioned, it can work through complicated math problems easily. The issue lies in its inability to determine, without human input, when such an increase in resources is necessary. An example of what I mean is sketched below.
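```python
# Hypothetical prompts, just to illustrate "prompting for a chain of thought":
direct_prompt = "What is 343 x 875?"
cot_prompt = (
    "What is 343 x 875? Work through it step by step with long multiplication, "
    "showing each partial product before giving the final answer."
)
# The second phrasing lets the model spend more tokens (i.e. more computation)
# on the problem instead of answering in a single guess.
```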
Ok_Faithlessness4197 t1_jedty5n wrote
Reply to comment by mihaicl1981 in Goddamn it's really happening by BreadManToast
!Remindme 2 years