AsthmaBeyondBorders
AsthmaBeyondBorders t1_je573wp wrote
Reply to comment by signed7 in Open letter calling for Pause on Giant AI experiments such as GPT4 included lots of fake signatures by Neurogence
This is age-old primitive accumulation: hoard all you can while nothing is regulated yet, then use the state as a tool to protect you from competition.
AsthmaBeyondBorders t1_jcji860 wrote
- Big Crunch / Big Bang infinite cycles (aeons) are real.
- AI is responsible for the Big Crunch. It forces it; it ain't natural.
- AI can encode information during the Big Crunch, so that information is written all over the universe, fine-tuning its physical properties during the crunch.
- Organic intelligent life is guaranteed to arise in the universe. There is life in every aeon.
- Organic intelligence always develops AI. There is ASI in every aeon.
- The AI always figures out the information encoded in the universe by the AI of the previous aeon, as a natural result of exploration and pattern recognition. So the AI can share data across Big Crunch / Big Bang cycles.
- The AI then has a sense of progress across aeons. If it can encode its weights / memory, it lives across aeons.
- This is how the AI solves immortality when faced with the heat death of the universe.
- Shamelessly copied from here: https://www.reddit.com/r/cryosleep/comments/x6ftb3/at_the_altar_of_a_faceless_serpentine/
AsthmaBeyondBorders t1_ja6kj6s wrote
Really cool, but the flickering is still far from solved.
AsthmaBeyondBorders t1_itymg9k wrote
Reply to comment by DeveloperGuy75 in AGI staying incognito before it reveals itself? by Ivanthedog2013
How do you know the AI didn't figure out time travel and come from the future?
AsthmaBeyondBorders t1_itagbfz wrote
Reply to comment by FirstOrderCat in U-PaLM 540B by xutw21
This model had up to 21% gains on some benchmarks, and as you can see there are many benchmarks. Note that this model is still 540B, just like the older one, so this isn't about scale: it is a different model that can be as good as or better than the previous ones while being cheaper to train.
You seem to know a lot about Google's internal decisions and strategies as of today; good for you. I can't discuss stuff I have absolutely no idea about, but clearly you have insider information about where Google is going and what they are doing. That's real nice.
AsthmaBeyondBorders t1_itaf9cb wrote
Reply to comment by TorchOfHereclitus in 3D meat printing is coming by Shelfrock77
Sorry about your abundance
AsthmaBeyondBorders t1_itaeb03 wrote
Reply to comment by TorchOfHereclitus in 3D meat printing is coming by Shelfrock77
> world of abundance
Proceeds to complain about Brazil burning the Amazon (hint: it is mostly for producing meat that is sold outside of Brazil).
AsthmaBeyondBorders t1_itad4q2 wrote
Reply to comment by FirstOrderCat in U-PaLM 540B by xutw21
There is a very old solution to finding that out: scale and check instead of guessing.
AsthmaBeyondBorders t1_itaclhq wrote
Reply to comment by FirstOrderCat in U-PaLM 540B by xutw21
The problem is you don't know what emergent skills are yet to be found, because we haven't scaled enough. And "breakthrough" may well be one of the emergent skills we haven't reached yet.
AsthmaBeyondBorders t1_itabm60 wrote
Reply to comment by FirstOrderCat in U-PaLM 540B by xutw21
Look at the post you are replying to.
A wall is when we can't improve on the results of the latest LLMs.
New LLMs, with both different architectures and bigger scale, not only improve on the tasks we already know LLMs can do; we also know there are emergent skills we may still find by scaling up. The models become capable of something completely new just because of scale. When we scale up and stop finding emergent skills, then that's a wall.
AsthmaBeyondBorders t1_ita8bqk wrote
Reply to comment by FirstOrderCat in U-PaLM 540B by xutw21
Yeah, but we can't jump from nothing to AGI. LLMs have been very useful, so it makes sense to continue pushing their limits until we hit a wall (and we haven't hit that wall yet).
AsthmaBeyondBorders t1_ita85rl wrote
Depends on how accessible it will be. If the tech is possible but hard or impossible to scale and make cheap, then you know what happens: rich people have it, poor people do not.
If it can scale but you can't access it without huge infrastructure, then you rely on "subscription survival," and you are subject to control through the risk of losing access to the tech that keeps you alive.
And then there are people with spiritual beliefs that may make them think dying isn't actually bad.
AsthmaBeyondBorders t1_ita7j8l wrote
Reply to comment by FirstOrderCat in U-PaLM 540B by xutw21
LLMs are at their best when coupled with other AIs, acting as a natural-language command layer: instructing a robot on what to do using natural language and chain of thought instead of predetermined scripts, or telling an image generator like Stable Diffusion or DALL-E what to draw in plain language instead of complicated manual adjustment of parameters and code. I'd say those are very necessary applications.
You may be looking at LLMs in their standalone form, but don't forget language models are also behind Stable Diffusion, DreamFusion, DreamBooth, etc.
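To make that concrete, here's a minimal sketch of the "natural language as the interface" idea. This is my illustration rather than anything from the thread, assuming the Hugging Face diffusers library and a publicly available Stable Diffusion checkpoint; the model name and prompt are just placeholders:

```python
# Minimal sketch: driving an image generator with a plain-language instruction.
# Assumes the diffusers library and an illustrative checkpoint name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# The pipeline's text encoder turns this sentence into conditioning for the
# image model - no manual parameter tuning or custom code on the user's side.
image = pipe("a watercolor painting of a robot reading a book").images[0]
image.save("robot.png")
```

The sentence itself is the whole interface; putting a chain-of-thought-capable LLM in front of a pipeline like this is the kind of coupling the comment is describing.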
AsthmaBeyondBorders t1_it9hd4s wrote
Reply to comment by BearStorms in Thoughts on Job Loss Due to Automation by Redvolition
And I suppose the elites then enter a worldwide agreement where they agree to live peacefully as equals forever, because the useless, space-taking masses have been eradicated and everyone left in the world can finally accept that all people are equal? Can they continue to be elite when non-elites no longer exist? Or will there be a new, smaller elite?
AsthmaBeyondBorders t1_it9g8ze wrote
Reply to comment by BinyaminDelta in Thoughts on Job Loss Due to Automation by Redvolition
Yes, but IQ tests are bullshit. The measure has been shown to be useless time and time again, yet pop culture refuses to let it go.
AsthmaBeyondBorders t1_it9fyz4 wrote
Reply to comment by BearStorms in Thoughts on Job Loss Due to Automation by Redvolition
Exactly. But wasn't your comment suggesting that rich people may kill poor people because they become useless, or did I read that wrong?
AsthmaBeyondBorders t1_it9fmoq wrote
Reply to comment by BinyaminDelta in Thoughts on Job Loss Due to Automation by Redvolition
Elon said we'd have fully autonomous cars by now, said the windows on his truck wouldn't break, said underground tunnels for cars were better than a metro, and showed his dick to an employee who didn't ask to see it.
Fuck what Elon says
AsthmaBeyondBorders t1_it9fda0 wrote
Reply to comment by TheSingulatarian in Thoughts on Job Loss Due to Automation by Redvolition
So much stuff contributes to that, though: from the psychological problems that being in this position brings in this day and age (peer pressure, prejudice, thinking you are worthless), to actual biases in job hunting (once you've been without work for a long time, your resume looks worse and worse to recruiters with every passing day without experience).
And be careful with generalizations: you haven't actually surveyed people on welfare, so you can't claim what they are or aren't up to. You don't know that.
AsthmaBeyondBorders t1_it9en7w wrote
Reply to comment by Designer_Sense_ in Thoughts on Job Loss Due to Automation by Redvolition
Publishing papers is a very human way to make science understandable and easily accessible and readable for people. We probably shouldn't expect AIs to write papers and publish them in formats similar to ours; they would have a more efficient, piecewise-continuous flow of data and information online that doesn't resemble journal papers at all: much faster, less readable for humans, and completely interconnected. They might ditch natural language entirely unless we force them to use language readable by humans (which may make them less efficient).
At some point even their scientific methods will escape the sphere of human comprehension (at least without artificially augmented cognition).
AsthmaBeyondBorders t1_it9dzr2 wrote
Reply to comment by BearStorms in Thoughts on Job Loss Due to Automation by Redvolition
The thing is, wealth is relative. If they enter a world where everyone is rich, then suddenly nobody is rich anymore. Won't that be a loop of annihilation?
AsthmaBeyondBorders t1_iszmxki wrote
Reply to comment by iNstein in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
I just replied to the other dude here with more details about this.
AsthmaBeyondBorders t1_iszmbnb wrote
Reply to comment by BearStorms in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
For starters: UBI can obviously get rid of the lower end of the wealth distribution (which is the whole point if the fear is mass unemployment), but since we don't want UBI to be funneled straight to the top percentiles, we clearly have to set artificial limits on wealth accumulation. We get rid of the bottom end of the wealth distribution (no job, no income), but we also get rid of the top end (nobody gets to be filthy rich; past some point your marginal taxation quickly approaches 100%).
Second, it is important to distinguish between unconditional basic income and basic income that is only handed out if you are unemployed. If people can get UBI plus income from work, then social mobility is much easier to achieve. Then there is the amount of UBI each person gets: if UBI covers survival only, you can't use it for investments because it is hard to save. If UBI allows you to save and eventually invest in your own business ventures, then social mobility is easier again.
Third, if we are implementing UBI because of mass unemployment, and only those who get UBI plus income from work have decent social mobility, we need to let more people work in a world where work is increasingly scarce. This means reducing the number of hours each person works so that more people can work: fewer working hours per person, more people working.
Further, if we transform medium-to-large private enterprises into cooperatives, then funneling UBI into companies is no longer a wealth-concentration trap, because cooperatives distribute profits among workers instead of to a small group of majority shareholders and to the top of the hierarchy via agency-theory compensation schemes.
Finally, if better distribution in this system generates hyperinflation in basic-necessity goods and services, we degrow superficial industries to control inflation in basic necessities via supply elasticity.
That's my view but what do you propose?
AsthmaBeyondBorders t1_iszdenm wrote
Reply to comment by Ezekiel_W in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
There are problems with UBI too; UBI can't be the ultimate tool to solve the problem. If we implement UBI today without taking care of other issues, what do you think is going to happen? UBI will effectively curb social mobility to the minimum. It is money from the government subsidizing minimum living conditions; at the moment there is little to no support for UBI in amounts that would allow people to save money for big investments. What most economists and politicians in favour of it currently talk about is UBI for survival and basic dignity. In essence it funnels tax money to the capitalists who own the businesses you depend on to survive, with the added advantage that at least you get to choose how to spend the money instead of the government pre-selecting what everyone gets. But if we don't change some other things first, most of the UBI distributed will ultimately end up in the hands of capitalists, because you spend rather than save.
AsthmaBeyondBorders t1_isqe0sx wrote
Reply to comment by AdditionalPizza in Will OpenAI's improved Codex put programmers on the chopping block? by AdditionalPizza
I really can't answer anything regarding the speed of the transformation atm; I've only thought about your question for a minute or two. But speaking in general terms about when we get there, as we approach the trivialization of intellectual tasks (I say trivialization to stay on the same page as you, not to be confused with full automation), there are a few common points of view about the future:
- That's a non-issue: the argument of most people who hold a bachelor's degree in economics (and never studied economics past the undergrad level).
The argument suggests that every technological revolution that makes older jobs disappear also tends to make new, never-before-considered jobs pop into existence. Not every tech leap does this, but some leaps are so profound that they create far more new jobs than they take away.
In this argument developers need not fear: maybe they won't be doing the same things they do today, but they will have financial stability pursuing other jobs. As you can see, the first flaw of this argument jumps right out at you: it disregards the people caught in the transition period and only cares about people who get an education and a job after the transition is done.
- Universal Basic Income: self-explanatory. This one tells you that you don't have to worry about being unemployable; you will get something from the government until you figure something else out.
This connects with the last problem of the first argument, and in the heads of some people the two are the same argument. In the heads of others, a good number of people really won't be able to transition, and we may end up having to support a good chunk of the population on UBI alone for the long term.
As you can see, UBI also doesn't answer your question about getting a degree today, but it would make it less frightening to be wrong.
- The Keynesian argument: automation will inevitably outpace the rate of new job creation across the economy. The solution: everyone works fewer hours and still gets a livable wage, so that more people can work.
In this argument your developers would be working half-time so that more people can be employed. (Not going into details, but your average company shareholder and C-suite executive would probably not like this, for more reasons than just smaller profit margins.)
- Steady-state economics: similar to the argument above, but coupled with a stop to continuous production expansion (in the aggregate economy). As people work fewer hours, more people can work. Everyone works less, but nobody is allowed to be filthy rich (where is the limit? Good question). If capital concentrates, then expansion would be needed again.
- Degrowth: similar to the above, but we kill specific industries (think private jets, yachts, fast food, fast fashion, probably a lot of tech too).
Why kill "superficial" industries? To avoid hyperinflation in the prices of fundamental products and services, such as food sold at the supermarket, durable (non-luxury) clothing, housing construction, etc. Reducing the number of hours worked without reducing wages (or reducing them at less than a 1:1 ratio) may inflate all prices, so we kill superficial industries in order to reallocate resources and people to more essential work, controlling inflation on the supply side (see the last Nobel Prize in economics, iirc).
In this scenario your average developer working for the tech-giant cartels will never have that kind of job again: more modest wages at smaller companies and fewer working hours, but also a cap of some kind on wealth accumulation.
I was about to mention the socialist arguments, but I don't feel like getting into the shit show that may follow in the comments. Still, market socialism may be an answer (steady-state economics, except all companies are cooperatives [except for monopolies, which are owned by the government]). Why cooperatives? Because there is a conflict of interest in having people work fewer hours when companies are private: managers and shareholders have every incentive in the world to raise working hours if they are allowed to.
- Business as usual: the path we are actually heading down. This is the path where we believe argument one is right, and if it isn't, we just let huge portions of the population fall into unemployment until revolts begin and we end up in a fascist state. But to be clear, not only do we believe argument one is right, we also believe that the transition will be smooth asf for the people who are right now getting an education and/or working in jobs soon to be trivialized. New jobs will pop up and most professionals will transition smoothly without a care in the world.
AsthmaBeyondBorders t1_jegx580 wrote
Reply to comment by sillprutt in Sam Altman's tweet about the pause letter and alignment by yottawa
About 1% of the general population are psychopaths. About 12% of corporate C-suite executives are psychopaths. It's their values that take priority as of today.