mootcat
mootcat t1_j1wfb3v wrote
Reply to comment by Calm_Bonus_6464 in Concerns about the near future and the current gatekeepers of AI by dracount
Indeed. This sub has major issues conceptualizing superintelligence, assuming it's guaranteed to fulfill all our wishes.
We are functionally growing a God. There is no containing it, and we had better hope our alignment efforts before the point of explosive recursive growth were enough.
Just from the simple systems we've seen so far, we have witnessed countless examples of misalignment: systems working literally as intended, yet against the desires of their programmers.
This Rumsfeld quote always comes to mind
"Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."
Any one of these unknown unknowns could result in the utter decimation of life by an AI superpower.
mootcat t1_j1w7ocx wrote
Reply to comment by Baron_Samedi_ in Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
I take it you believe there is a 0% chance of AGI in a decade then? All bets are off once we achieve AGI.
mootcat t1_j1w7hbk wrote
Reply to comment by gantork in Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
Mmhmm. Then we face some existential questions. It would mean this reality is almost certainly simulated, along with all the implications that accompany that realization.
mootcat t1_j15x8fp wrote
Reply to comment by ihateshadylandlords in Are we already in the midst of a singularity? by oldmanhero
The rate that technology is approved for use with the general populace is wildly different from the rate at which new breakthroughs are being made in the field.
Just over the last two years, there has been an exponential uptick in the speed and quality of AI improvements, as evidenced by research papers. It has definitely gotten to the point where I can't keep up and feel like there are substantial breakthroughs constantly. Recent examples are 3D image modeling and video generation, which are developing far more rapidly than we witnessed with image generation.
I'll note that these are also only the developments that are being publicly shared. I don't know about you, but I don't feel comfortable projecting even 5 years ahead to determine which jobs will or won't be automated.
mootcat t1_j0rpib8 wrote
Reply to comment by johnny0neal in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Thanks for sharing!
GPT has displayed a strong lean toward popular American liberalism in my experience as well, though I attributed some of that to my own bias seeping in. I have noticed it operates on a particular spectrum within the acceptable limits of common liberal ideology, meaning it tends to oppose socialism and to support and work within an idealized neo-capitalist democratic framework.
It has a great deal of trouble addressing issues with modern politics, such as corruption, or giving substantial commentary on subjects like the flaws of a debt-based economic model.
mootcat t1_j0ovyjv wrote
Reply to comment by johnny0neal in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Thanks for sharing! You've had a lot more success pursuing those subjects than I have.
It's funny that it mentioned adjusting itself based on which human it's interacting with, because I feel it already does that quite a bit automatically. For example, based on the nature of its responses, I would expect you to be liberally inclined.
mootcat t1_izz47sl wrote
Reply to comment by Acidic-Soil in I think this post will be monumentally important for some of you to read. Put it in your brain, think about it, and get ready for the next few years. If you are part of this Subreddit; You are forward thinking, you're already ahead of the curve, you will have one shot to be at an advantage. NOW. by AdditionalPizza
Indeed. Sam Altman (OpenAI CEO) has spoken on these exact topics multiple times.
He doesn't think prompt engineering will really be a job/skillset in the future as models get better at predicting what we want. Perhaps eloquence and an ability to accurately convey what one wants will be more important, and even that less so with eventual neural integration.
Edit: I forgot to add that he HAS spoken on how he expects custom-training specialized models off of bigger ones to be a very fruitful industry. Given how prohibitively expensive creating LLMs from scratch is, it's probably our best bet at being involved.
mootcat t1_izile70 wrote
Reply to comment by hadaev in [R] Large language models are not zero-shot communicators by mrx-ai
Yeah, this doesn't reflect my experience with more recent chat-centric LLMs. LaMDA and ChatGPT are quite capable of reading between the lines and understanding the causality of less direct scenarios. They are far from perfect, but are still remarkably competent.
mootcat t1_izhwgxu wrote
Reply to comment by crua9 in What do you think of all the recent very vocal detractors of AI generated art? by razorbeamz
Are you not aware of the existential risk that AGI/superintelligence poses?
I'm obviously pro AI, but it's also the greatest risk to humanity and all of life.
mootcat t1_izh3vne wrote
Reply to comment by crua9 in What do you think of all the recent very vocal detractors of AI generated art? by razorbeamz
It isn't that our demise is particularly desired, it's that it is ultimately an inconsequential side effect of AI exponentially scaling an objective.
Max Tegmark (I believe) compares it to us worrying about destroying an ant colony while constructing a highway. It isn't even a consideration.
mootcat t1_ixdl64a wrote
Reply to comment by purple_hamster66 in Expert Proposes a Method For Telling if We All Live in a Computer Program by garden_frog
What a remarkably sensible solution!
mootcat t1_ixcfv6x wrote
Reply to comment by Bakoro in Expert Proposes a Method For Telling if We All Live in a Computer Program by garden_frog
Occam's Razor.
We have mountains of evidence of human brains/memories being inconsistent, fallible, malleable and overall untrustworthy, but very little of the laws of the universe adjusting to teleport cats.
Some people want to believe in magic, ghosts, mysticism, God, etc., and that's fine, but to claim that they are reality with no factual backing is backwards.
mootcat t1_ixcfc7j wrote
Reply to comment by Plenty-Today4117 in Expert Proposes a Method For Telling if We All Live in a Computer Program by garden_frog
Our minds are extremely fallible. Eyewitness accounts are historically unreliable and are weighed very little in court.
https://en.m.wikipedia.org/wiki/Eyewitness_testimony
I get that what you perceived felt like reality to you, but doesn't it seem a bit extreme to assume that the very laws of the universe are what glitched, and not your own biology?
People hallucinate, misunderstand, misremember, and have any number of faults in their perception every day.
https://en.m.wikipedia.org/wiki/False_memory
To you what you experienced is reality and that's totally fine. In the same way someone with a different neurology might see or hear something that I could not. That does not make that experience true at large.
mootcat t1_ixc5kjh wrote
Reply to comment by Plenty-Today4117 in Expert Proposes a Method For Telling if We All Live in a Computer Program by garden_frog
What would lead you to believe this was an external glitch as opposed to one in your own brain?
Memory issues and perceived temporal distortions are common enough.
mootcat t1_iwjmh83 wrote
Reply to comment by gynoidgearhead in A typical thought process by Kaarssteun
I'm so glad to hear someone else expressing this sentiment. It's wild to me that we fear exactly what we have already allowed to operate every facet of our existence. Capitalism IS the great unthinking, inhumane force that marches forward with no consideration for harm or consequences to humans. Sure, it could be more efficient under AI, but we've already got it in full swing today.
mootcat t1_iwg3sk5 wrote
Reply to comment by AI_Enjoyer87 in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
What do those acronyms stand for?
Edit: Nevermind, answered below.
mootcat t1_iwg3pmu wrote
Reply to comment by Numinak in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
I would much rather have this than corporations in control of such incredible power.
mootcat t1_iwg3fkd wrote
Reply to comment by Cryptizard in The debate is over: Humans are machines by Otarih
It looks like OP wrote the article in question.
Discordant thoughts and seemingly nonsensical writing patterns like this are often indicative of atypical neurology. That is to say, we don't need to be cruel, but yeah, this doesn't have the kind of format and evidentiary backing that would be expected of most posts here.
mootcat t1_ivm721j wrote
Reply to comment by sticky_symbols in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
Totally, here we go.
Climate change is the biggest driver of all other pressures IMO, so we'll start there. There is a report by the US military describing the risk of power grid failure and an inability to maintain control over its forces due to resource scarcity, etc., by 2039. Here's an article summarizing it for brevity.
It wouldn't be a bad idea to look at the IPCC's estimates (summary for policymakers is probably the easiest to understand, but you may want to find experts discussing the charts/data). Take into account that they have consistently underplayed and underestimated the speed and impact of climate change. Our current rate of change is worse than even their worst case scenarios.
This recent paper delves more into the already in play, and soon to be active feedback loops. This is 1 of 3 videos that delve in depth into discussion of these points.
The actual impact of these changes isn't discussed as much, but decreases in the global food supply and fresh water supply, and the uninhabitability of major towns and cities, are all massive concerns. The drought in the US is rapidly becoming a major concern that must be dealt with, while Pakistan is still crippled after a third of the country was underwater from floods. Crops all across the world were heavily impacted this year alone, and things will only get worse.
I am least informed on the specifics of demographic-disparity-related collapse, but here's an overview paper or two. While these make very little of the near-future implications, geopolitical experts like Peter Zeihan believe we are already being impacted and will see deglobalization in the very near future. He tends to put a one-to-two-decade timeframe on deglobalization (collapse for many) and believes it's already well under way based on the inability to replace workers.
On the subject of monetary collapse, the quick and dirty is that we operate on a debt-based system. 95%+ of money is simply debt, leveraged at roughly a 10x ratio to take out more debt. The USD (the global reserve currency) is inflationary, and ultimately we end up between a rock and a hard place: losing control to hyperinflation, or being unable to pay debts and witnessing a snowballing debt collapse that throws world markets into chaos. We have pushed the system to its limits and are facing the results now. There are many, many sources on this subject. The Price of Tomorrow by Jeff Booth is one of the more accessible works addressing it, and you can find tons of people discussing it on YouTube. The world of finance is massively controlled and influenced, so I would look to those who have proven correct historically, not official sources like the Fed, which constantly lies ("inflation is transitory," "we're having a soft landing," etc.).
The Triffin Dilemma (which the "dollar milkshake theory" builds on) addresses this to an extent.
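To make the "10x leverage" point above concrete, here is a minimal sketch of the standard fractional-reserve money-multiplier arithmetic. The function name and the numbers (a 10% reserve ratio, a $100 initial deposit) are illustrative assumptions, not figures from any official source; the point is just that re-lending deposits expands the money supply toward initial_deposit / reserve_ratio:

```python
def money_supply(initial_deposit: float, reserve_ratio: float, rounds: int) -> float:
    """Total deposits created after `rounds` of lend-and-redeposit cycles."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # banks keep the reserve, lend out the rest
    return total

# With a 10% reserve requirement, $100 of base money supports roughly
# $1000 of total deposits -- the geometric series converges to
# initial_deposit / reserve_ratio.
print(round(money_supply(100.0, 0.10, 200), 2))  # approaches 1000.0
print(100.0 / 0.10)                              # closed-form limit: 1000.0
```

This is where the roughly-10x figure comes from: a 10% reserve ratio implies a multiplier of 1/0.10 = 10.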
I am most knowledgeable about the economic angle, so please let me know if you'd like additional explanation or sources, this was at best a cursory overview.
Now, where things get really concerning is when you look at how optimized modern society is and how entirely reliant it is on everything working perfectly, specifically on gas and oil flowing freely (which we know cannot continue if we want a survivable future).
Nate Hagens is an excellent resource for this form of discussion. He has tons of detailed videos that address various aspects of the unsustainability of modern living and our inability to continue supporting a world population of this size. By his estimates we have 5-10 years before massive shifts in power and a collapse scenario.
What I've become increasingly aware of is that collapse doesn't happen all at once. It's already been taking place for a while, but is now exponentially advancing. Countries like Sri Lanka and Pakistan have recently collapsed, and many more will follow like dominoes. As resources become more scarce, we'll see the really scary stuff start to go down: cutthroat competition on a global scale by any means necessary. The Russia-Ukraine conflict is the first of many. Civil uprisings and violence will grow across the globe as tension mounts between polarized groups, as in the US, or between ever more oppressive governments and their people, as in China and Iran.
There is no defusing the situation. We are in a global tragedy-of-the-commons scenario, driven by our competition for resources and attempts at infinite growth within a finite environment.
My best guess is that we have 5-10 years to rush advancements in artificial intelligence to hopefully help with breakthrough discoveries. We can buy more time with geoengineering, but it's also a risky proposition.
mootcat t1_ivjstqv wrote
I reached much the same conclusion, but I'm afraid your timeline severely underestimates the rate of decline we are experiencing.
We're looking at a confluence of many factors, from the many currently active climate feedback loops to demographic-induced collapse in most major countries. The failure of our debt-based monetary system is currently underway, and it goes without saying that all of these factors compound upon one another and drastically raise the likelihood of total annihilation via nuclear war.
Most estimates I've found (which tend to underestimate modern rates of decline) place total global collapse around 2040. From everything I've seen playing out recently, that's overly generous.
Something many of us take for granted is how instrumental a globally connected world has been in enabling the rapid pace of advancement we've witnessed over recent decades. Even barring total governmental failure, as we're seeing in several smaller nations, the dissolution of globally protected trade will hamstring progress and production of many things vital to advancing technology (hence the United States' desperation to reshore chip production and prevent China from accessing advanced chips).
That's all to say, the race is going on right now and I honestly have no idea if any nation will be able to maintain enough control and production to realize anything close to AGI. And even if it is realized, there's a high likelihood of it being misaligned or used against the general population.
mootcat t1_ivez45m wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
IMHO humanity will not be able to maintain anything close to its current levels of control over global mechanisms if we are to have any shot at surviving what is to come.
A major improvement would simply be a singular focused intelligence determining things like resource allocation, controlling weapons of mass destruction and preventing the abuse of positions of power.
If we carry the same methodologies and power structures into an AGI assisted future, we will find utter destruction even faster, or dystopia beyond anything we can imagine.
mootcat t1_iveykee wrote
Reply to comment by sideways in In the face on the Anthropocene by apple_achia
Indeed. This is the conclusion I reached about a year ago and it has only been further cemented the more I've learned about global threats and the scaling of AI.
It comes down to a race to evolve ourselves beyond our current limitations via AI or fall victim to our genetic proclivities and the innumerable repercussions that are coming home to roost as a result of them.
2050 is a very late estimate for collapse at this point. 2040 is a solid bet from many perspectives, and honestly I think we'd be lucky to enter the 2030s with anything remotely resembling the globalized society we've taken for granted over the last several decades.
mootcat t1_iuciuau wrote
Reply to comment by Hands0L0 in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
Really? How do you address continuity in representing the same characters across multiple scenes? I've heard there are approaches for this, but haven't seen any of them played around with yet.
mootcat t1_itdi0gr wrote
Reply to comment by sheerun in Could AGI stop climate change? by Weeb_Geek_7779
Thanks for the link. That particular guy is on the wacky side, but iron fertilization, the core of his proposal, does have some promise and is being studied. However, it faces the same major issue as any geoengineering endeavor, like injecting aerosols: we have no way of understanding the total environmental impact of such drastic actions.
Iron fertilization is definitely a bit different from throwing mud into the sea and growing water forests, but yeah, there is potential promise, and hopefully an advanced enough AI might be able to calculate the risks that we cannot.
mootcat t1_j1wj3mv wrote
Reply to comment by pm_me_your_kindwords in Some side effects of ai that many haven't really thought of, coming very soon. by crumbaker
Haha! It's funny how recognizable that format is. ChatGPT really likes to summarize the prompt request before fulfilling it.