Qumeric
Qumeric t1_jefml1n wrote
Reply to comment by tiselo3655necktaicom in Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
I did not pick anything specifically, I just copied the data from where I saw it recently. How am I distorting facts if I simply provide data without ANY interpretation?
Okay, let's use 1950. Working hours per year in the U.S. fell from 2000 to 1750, a 12.5% reduction. Most developed countries did even better; for example, France (and it is not the best country in this respect) went from 2200 to 1500, a roughly 32% reduction. Germany is one of the best: Germans work 45% less than in 1950.
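A quick sanity check of these percentages, sketched in Python (using the annual-hours figures quoted above):

```python
def reduction(before, after):
    """Percentage reduction from `before` to `after`."""
    return (before - after) / before * 100

# Annual working hours, 1950 vs. recent, as quoted above.
print(reduction(2000, 1750))  # U.S.: 12.5
print(reduction(2200, 1500))  # France: ~31.8, i.e. roughly 32
```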
I do not deny the productivity-pay gap; I dispute your claim that "we always end up getting more productive and working the same amount or more". This is simply not true.
That said, yes, we could work much less than we do now; we have enough technology for 20-hour work weeks or even less.
Qumeric t1_jefcf5k wrote
Reply to I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
It is neither controversial nor new.
A somewhat related article which considers the Bitcoin network from this angle: https://blog.oceanprotocol.com/can-blockchains-go-rogue-5134300ce790
Another classic related article: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Qumeric t1_jefc25g wrote
Reply to AI investment by Svitii
The usual reasons. Those companies already have very high market capitalizations. They are leading now but could lose the lead later.
I am not saying it is a bad idea, but it is not necessarily an amazing one.
Qumeric t1_jees0js wrote
Reply to comment by tiselo3655necktaicom in Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
This is not true.
According to Our World in Data, the average American worked 62 hours per week in 1870. By the year 2000, this had declined to 40.25 hours per week, a decrease of about 35%. As of July 2019, the average American employee on US private nonfarm payrolls worked 34.4 hours per week, according to the U.S. Bureau of Labor Statistics.
Qumeric t1_jeecwn6 wrote
I predict it will become a serious problem in late 2024.
Qumeric t1_je5m1s6 wrote
Reply to comment by FelipeBarroeta in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
I think it is true to some extent for almost everyone. It is only natural...
Qumeric t1_je5lr5d wrote
Reply to comment by Scarlet_pot2 in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
So first you claim that no one signed, and then, when it turns out that some people did, you say "doesn't matter, they are not CEOs".
Obviously, if all CEOs were on board already, this letter wouldn't exist.
Qumeric t1_je0ns9w wrote
Reply to comment by fluffy_assassins in Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
I did not check the link but I believe that I heard about this case. I think it is mostly a marketing stunt.
Qumeric t1_je0gp4p wrote
Reply to comment by [deleted] in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
No, 1% per year is not linear growth. Growth of X% per unit of time is more or less the definition of exponential growth.
Ask ChatGPT :)
I think what you described is formally also exponential growth, for somewhat complicated mathematical reasons, but only coincidentally.
Informally, you described exponential growth of the rate of growth.
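To make the definition concrete, here is a minimal sketch: growing by a fixed percentage per step is exactly what "exponential" means, since repeated percentage growth matches the closed-form exponential:

```python
# Constant percentage growth per step IS exponential growth:
# applying "grow by r%" repeatedly gives x0 * (1 + r/100) ** t.
def grow(x0, rate_percent, steps):
    x = x0
    for _ in range(steps):
        x *= 1 + rate_percent / 100
    return x

# 1% growth per year, 10 years: matches the closed-form exponential.
assert abs(grow(100, 1, 10) - 100 * 1.01 ** 10) < 1e-6
```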
Qumeric t1_je0a9y4 wrote
Reply to comment by [deleted] in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
No, this is wrong too. Exponential just means growing by a fixed percentage. So if we have a 1% improvement every year, that is exponential growth.
The point is that after 1000 years of 1% growth, the result is incomparably larger than what we started with.
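A one-line check of how much 1% per year compounds to over 1000 years:

```python
# Compounding 1% per year for 1000 years:
multiplier = 1.01 ** 1000
print(round(multiplier))  # ~20959: over 20,000x the starting value
```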
Qumeric t1_je07bo2 wrote
Reply to Are the big CEO/ultra-responsible/ultra-high-paying positions in business currently(or within the next year) threatened by AI? by fluffy_assassins
Currently, no. It is one of the least endangered jobs, I believe.
Qumeric t1_je071bs wrote
Reply to Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
Nitpick: people sometimes misunderstand exponential growth in the following way: they think exponential means extremely fast. That is not necessarily the case; for example, computer performance has been growing exponentially for almost 100 years now and is arguably still growing exponentially.
Answer in spirit: GPT-4 and Codex are making many people who work in technology much more productive.
Qumeric t1_jdulx7i wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
The tokenizer is not number-friendly.
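To illustrate the problem, here is a minimal sketch of greedy longest-match tokenization over a tiny *hypothetical* vocabulary (not any real LLM tokenizer): multi-digit numbers get split into chunks that ignore place value, which makes digit-level arithmetic hard for the model.

```python
# Hypothetical toy vocabulary; real BPE vocabularies are learned from
# text frequencies and split numbers just as arbitrarily.
VOCAB = {"12", "34", "123", "1", "2", "3", "4", "5", "+", "="}

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return tokens

print(tokenize("12345"))  # ['123', '4', '5'] -- digits grouped arbitrarily
```

So "12345" is not five digit tokens but an arbitrary grouping, and the grouping changes as the number changes, which is a bad fit for learning arithmetic.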
Qumeric t1_j99md7e wrote
I strongly disagree. I would vastly prefer a superintelligent toddler to a superintelligent alien.
Qumeric t1_j0r1n3w wrote
Reply to How far off is an AI like ChatGPT that is capable of being fed pdf textbooks and it being able to learn it all instantly. by budweiser431
Not so far. Reading PDFs probably doesn't work great right now, but it works well enough for many cases and will definitely be improved.
I think the main problem right now is that an LLM's memory is short, so to actually learn a full textbook, it has to be fine-tuned on it. That is inconvenient and expensive, but I am pretty sure it can be made much better.
I would say we will see something like this in 3 years or less.
Qumeric t1_iwtrp2r wrote
Reply to comment by ninjasaid13 in InstructPix2Pix: Learning to Follow Image Editing Instructions by nick7566
Usually you can only mix concepts, etc.; here you can enter a full text description.
Qumeric t1_iwhbhef wrote
Reply to 64 Exaflop supercomputer being built and will be operational by the end of 2022 according to forbes by Phoenix5869
First, the article is pretty bad; it doesn't seem like high-quality journalism.
Second, there are different ways of calculating FLOPS. It depends on the kind of numbers (8-bit, 16-bit, etc.) and on the benchmark. Frontier (the top-1 supercomputer) has 7.5 exaflops on the HPL-MxP (mixed precision) benchmark, and Google has a cluster with 9 exaflops for AI tasks (probably 16-bit?).
Submitted by Qumeric t3_yw3b2d in singularity
Qumeric t1_iw6ynba wrote
Reply to comment by DamienLasseur in Will this year be remembered as the start of the AI revolution? by BreadManToast
The second half looked more impressive, but actually the first half was much more impactful (Chinchilla, Gato, Minerva, AlphaCode).
Qumeric t1_iw6ycu7 wrote
I think it's a good candidate. Timelines definitely shrunk a lot in 2022.
Other candidates are 2012 (start of deep learning), 2017 (transformer), 2020 (GPT-3) and 2023 (GPT-4 + other stuff).
Qumeric t1_iw1uf60 wrote
Reply to comment by expelten in What are your predictions for 2023? by Particular_Leader_16
There are already such models: GLM and Flan-T5 XXL.
Qumeric t1_itywzd3 wrote
Reply to comment by manOnPavementWaving in Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
We don't have enough data or compute to make 5-trillion-parameter models economically feasible. It just doesn't make sense; it's better to create a 500B model and train it properly.
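A rough sketch of why, using the Chinchilla rule of thumb (~20 training tokens per parameter) and the common estimate of ~6ND training FLOPs; the constants are approximations, and the figures are illustrative only:

```python
# Chinchilla rule of thumb: a compute-optimal model wants ~20 training
# tokens per parameter; training cost is roughly 6 * N * D FLOPs.
def chinchilla_budget(params):
    tokens = 20 * params
    flops = 6 * params * tokens
    return tokens, flops

for n in (500e9, 5e12):  # 500B vs 5T parameters
    tokens, flops = chinchilla_budget(n)
    print(f"{n / 1e9:.0f}B params -> {tokens / 1e12:.0f}T tokens, {flops:.1e} FLOPs")
```

Going from 500B to 5T parameters multiplies the data requirement by 10x (to ~100T tokens, more than exists in curated text corpora) and the compute by 100x.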
Qumeric t1_ityww3p wrote
Reply to Where does the model accuracy increase due to increasing the model's parameters stop? Is AGI possible by just scaling models with the current transformer architecture? by elonmusk12345_
Basically, nobody knows, but there are signs it may be possible. It's called the scaling hypothesis; see https://www.gwern.net/Scaling-hypothesis
Qumeric t1_ity6vsi wrote
Reply to First time for everything. by cloudrunner69
It is a much larger chance as a percentage of all people who have ever lived, which is the more appropriate measure.
Qumeric t1_jeg5t31 wrote
Reply to Just my two cents by [deleted]
Three years ago you could have argued, with *exactly* the same arguments, that something like GPT-4 was impossible.