
TFenrir t1_j8nyno3 wrote

>I am relatively uneducated on AI. I recently became interested because of the introduction of ChatGPT and all the new AI art. My first and most significant reaction to all of this, which has taken absolute precedence in the last few months, is fear. Terror, rather. What does this all mean?

Honestly? Fair. If this is your first introduction, I can really appreciate the discomfort. If it helps, I find learning more about the technology under the hood removes some of the anxiety for a lot of people, as ignorance of something powerful leaves a lot of room for fear. That's not to say that fear is unwarranted, just that it can be mitigated by exposure.

> I’m currently a college student. Will I spend my entire adult life simply giving prompts to AI? Or will there be prompt AI soon, even before I graduate?

There is a lot of effort being made to remove the need for prompting altogether, to create true, somewhat autonomous agents that are "prompted" not just by a message sent to them, but by changes in the real world in real time.

> I’ve done a lot of speculation and some research, and I am having a very hard time understanding the practical reasons why we as a species seem to be so adamant about creating this singularity.

Well, the reason is pretty straightforward - we want to "solve" intelligence, so that it can solve all other problems for us, quickly and fairly. That's the pie-in-the-sky dream many people are pursuing.

> From what I understand, we have no idea what will happen. I am horrified. I feel as if the future is bleak and dystopian and there is no way to prepare, and everything I do now is useless. This post is somewhat curiosity, and a lot of desperation. Why am I forced to reconcile with the fact that the world will never ever look the same, when the reasons for that entirely elude me? Is it in the pursuit of money? As far as I can see, money won’t matter once the singularity comes about. I am fucking terrified, confused, and desperate.

Like I said earlier, I appreciate why you are overwhelmed, but the world was never going to stay the same. Becoming comfortable with that... Discomfort, that uncertainty, is going to be a strength unto itself. If you can master that, I think the changing world will be a lot more palatable in the future.

11

wastedtime32 OP t1_j8o00h2 wrote

This is what I keep hearing. Stuff about accepting change. But there is no historical precedent for this. This is the start of the exponential growth. The way I see it, I have every reason to be afraid and not one reason not to be. I am spending my parents' life savings to get a degree that likely will not matter. My big problem is, what exactly are we expected to do once we “solve intelligence”? I LIKE the natural world. That’s all there is. It will never make sense to me. I don’t want to float around in a computer metaverse and be fed unlimited amounts of serotonin and never question anything or wonder or worry or feel any other emotion. That is all I know. And it is going to be taken away from me without my consent? This future of AI is inevitably totalitarian. Brave New World type shit. It’s real. It’s fucking real. And everyone around me is talking about internships and where they want to live and different jobs and stuff. My girlfriend thinks I’m crazy because this fear is all I talk about. She said everything will be okay and I’m just falling for the fear mongering. I don’t know what to do with myself. It is hard to find joy when all I think about is how EVERYTHING that gives me joy will be gone.

8

TFenrir t1_j8o1khg wrote

>This is what I keep hearing. Stuff about accepting change. But there is no historical precedent for this. This is the start of the exponential growth. The way I see it, I have every reason to be afraid and not one reason not to be. I am spending my parents' life savings to get a degree that likely will not matter. My big problem is, what exactly are we expected to do once we “solve intelligence”? I LIKE the natural world. That’s all there is. It will never make sense to me. I don’t want to float around in a computer metaverse and be fed unlimited amounts of serotonin and never question anything or wonder or worry or feel any other emotion. That is all I know. And it is going to be taken away from me without my consent? This future of AI is inevitably totalitarian.

It's really, really hard to predict anything, especially the future. I get it. There is a sort of... logical set of steps you can walk down that leads to your conclusion. But that is only one of many paths that are going to open up to us. You're right that it's all exponential, but I also think that means what the human experience can be is going to expand. Maybe we will diverge as a species. There is a great sci-fi book series (the Commonwealth Saga), and in one of its books they come across a species that seems to have fallen into exactly this divide: most of them have left their physical bodies behind, but some never strayed from their farming, Amish-like lifestyle. My point is... I can imagine lots of different futures, and lots of them have a world where maybe more people can have the kind of lives they want.

>Brave New World type shit. It’s real. It’s fucking real. And everyone around me is talking about internships and where they want to live and different jobs and stuff. My girlfriend thinks I’m crazy because this fear is all I talk about. She said everything will be okay and I’m just falling for the fear mongering. I don’t know what to do with myself. It is hard to find joy when all I think about is how EVERYTHING that gives me joy will be gone.

I had this talk with my partner literally... Monday, this week. She's had to hear me talk about AI for the entire decade we've been together, and as things get crazier she asks me how I feel about it. If I'm freaking out. I just told her that I'm trying to live my life like the next ten years are the last years that I can even kind of imagine, that there is an event horizon that I can't see beyond, and worrying about what's beyond that line is just a source of infinite loops in my mind.

Instead I'm going to get some friends together and go out dancing. It's been a while since we've had the chance.

8

MrTacobeans t1_j8p1wwt wrote

I have sorta the same kind of fear response to the exponential growth of AI in just the last year. We've gone from "WOAH, this AI can convincingly have a conversation with a human and beat a world expert in games/challenges" to "holy shit, this AI chatbot can now help me with my job, give therapist-level advice (even if it's wrong), and generate images at 1% of the effort needed by a conventional artist - and all of these things are improving not on a quarterly basis anymore, but, like, every other day some SOTA-level model is released".

It's alarming, and it is a lot, but I think if AI doesn't just randomly decide that humans are a useless presence, we'll be fine. I see a world where even general-intelligence AI aligns as a tool, where "bad actor" AI is combated by competing "good actor" AI. I don't see society falling apart or a grand exodus of jobs.

I'm hoping AI turns into the next evolution of the internet, where we all just have an all-knowing intelligent assistant in our pocket. I can't wait for the day my phone pings with a notification from my assistant with useful help like "Hey, I noticed you were busy and a bill was due, so I went ahead and paid that for you! Here's some info about it:" or "I know it's been a tough day at work, but you haven't eaten. Can I order you some dinner? Here are a few options that I know are your comfort foods:".

The moment a dynamic AI model with ChatGPT-level (or beyond) intelligence can run on consumer hardware, stuff like that will become a reality. AI might be scary, but trying to think about the positive effects it could have really helps me cope with the nebulous unknowns.

2

Eduard1234 t1_j8pkiql wrote

Just a comment about the idea of having one AI supervise the other: I'm thinking that won't work. If the bad-actor AI is even slightly more advanced than the other AI, it will be uncontrollable. The chance that a bad-actor AI somehow figures out how to escape its confines and achieve unsupervised self-improvement seems real to me.

1

wastedtime32 OP t1_j8p2nn1 wrote

Those things aren’t necessarily positive effects to me though. I don’t want to be able to consume all available information at the press of a button. What would be the point of knowing if you can’t actively learn?

0

MrTacobeans t1_j8qi0gg wrote

You just explained a Google search. At least in the short term, AI chatbots are just proving to be a more consumable or more entertaining way of gathering information.

It's up to each person to decide what they want to learn without the crutch of technology. Even an expert AI will never replace the need to actively learn things. Jumping way back, even written language is a technology. For thousands of years, humans have been figuring out how to compress knowledge and share it easily.

1

wastedtime32 OP t1_j8ri4rq wrote

Idk dude. Seems like a lot of people on this sub (and subsequently, in the tech world, at the forefront of these technologies) look at AI as a means to completely optimize all human functions and reduce any sort of meaning in anything. Seems to me a lot of these people feel alienated in modern society, and they think it will get better by taking all the things that alienate them and making those things even stronger. Like the way Musk and SBF say they’ll never read a book because reading is useless. The game of life has already lost its meaning to so many in the modern age, and people who can’t see WHY wrongly think that accelerating into an even more alienating world is the answer. If it was up to the tech gurus, we’d all be neurologically implanted into the blockchain and never do anything other than optimize optimize optimize, produce produce produce. There is a reason most people at the forefront of these technologies tend to be neurodivergent. This is all just a capitalist wet dream, and soon enough all ethics and regulatory practices will be seen as enemies of the doctrines of “efficiency” and “optimization,” serving no purpose, and will be ignored. People here love to paint me as some weird traditionalist, religious, conservative-values person. But they are so unaware of how they worship the idols of optimization and utility and efficiency. These are things that should be the goals of systems that supply human needs, yes, but they have their place. The idea that once we reach post-scarcity those in charge will have ANY incentive to help us in any way is entirely insane. It’s “evolution”! Following the same biological tendencies, AI is giving those in power the means, and the reason, to completely annihilate all others. And people here really think it will HELP us😂😂😂.

−1

Proof_Deer8426 t1_j8oxcjz wrote

AI will not solve all problems for us - most of our problems are already solvable. We could end homelessness tomorrow, but we don’t - because that would contradict our society’s ideology. This technology will be owned by people who don’t want to solve the same kinds of problems that most people imagine they want solved. Mass production did not lead to the end of scarcity - most of the world still lives in poverty and spends most of its life working for a pittance. If we ask an AI how to end poverty and it answers with economic redistribution and a command economy, that AI will be reprogrammed to give an answer that doesn’t upset the status quo.

2

TFenrir t1_j8p7obm wrote

>AI will not solve all problems for us - most of our problems are already solvable. We could end homelessness tomorrow, but we don’t - because that would contradict our society’s ideology.

How could we solve homelessness tomorrow? Not to be rude, but statements like this feel like they are just coming from a place of ignorance and jadedness.

We have many many many problems in this world, and they are often very intertwined - and so far, every problem we have overcome, we have used our intelligence to do so.

> This technology will be owned by people who don’t want to solve the same kinds of problems that most people imagine they want solved.

Again: jadedness. Do you know who is working towards AGI? What their ideologies are? Do you think the thousands of engineers who are putting their life's work into creating this... thing are doing so because they want to serve some nefarious, mustache-twirling evil overlord? I think you are doing yourself a disservice with such a myopic and simplistic outlook on society and humankind.

> Mass production did not lead to the end of scarcity - most of the world still lives in poverty and spends most of its life working for a pittance.

The world today is in probably the best state it has ever been in, by most measures of human success. We have fewer people, as a percentage of the population, in poverty than ever before. We have blown past most of the goals we set for ourselves to end world hunger. The upward momentum of the world is extremely significant, and you can see this in all corners of the developing world. What are you basing your opinions on?

> If we ask an AI how to end poverty and it answers with economic redistribution and a command economy, that AI will be reprogrammed to give an answer that doesn’t upset the status quo.

Again, myopic.

2

Proof_Deer8426 t1_j8pbo9h wrote

I don’t mean to be rude, but I think it’s naive to imagine that AI will not be used to reinforce the current power structures, or that those structures have benefited humanity. Jeremy Corbyn said that if he were elected, homelessness within the UK would be ended within weeks, and it is not an exaggeration to say that would be entirely possible. There are far more homes than homeless people, and we have the resources to build many more. We don’t, because it would disrupt the ideology of capitalism, which requires the threat of homelessness and unemployment in order to force people to work for low wages. Wages and productivity have been detached for decades now - i.e., wages have remained stagnant while productivity has increased exponentially. AI will increase productivity, but without changing the economic system, the benefit will not be to ordinary people but to the profits of the rich.

The upward momentum of the world you refer to is misleading. People like Bill Gates like to point out that enormous numbers of people have been lifted out of poverty in recent decades, trying to attribute this to the implementation of neoliberal economics. They always neglect the fact that these stats are skewed by the work of the Chinese Communist Party, which has lifted 700 million people out of absolute poverty - more than any government in history. That has nothing to do with the political trajectory that the West has been on, or its domestic economic paradigm - under which, for the first time in centuries, the younger generations are significantly poorer and downwardly mobile compared to their parents.

I don’t know much about the ideology of the people working towards AGI; I would be interested to know more about it if you want to tell me. I do know that a lot of people interested in AI follow ideas like effective altruism, which is a philosophy that serves rather than challenges the status quo.

2

TFenrir t1_j8pe4w6 wrote

>I don’t mean to be rude, but I think it’s naive to imagine that AI will not be used to reinforce the current power structures, or that those structures have benefited humanity.

How does it play out in your brain? Let's say Google is the first to AGI - this AI can do lots of really impressive things that can help humanity: solve medical questions and mathematics questions, be embodied and do all menial work, and automate the process of building and farming, finding efficiencies and reducing costs constantly until the cost is a fraction of today's.

How does Google use this to prevent people from benefiting? Do they also prevent all other non-Google AGI attempts? Give me an idea of what you are envisioning.

> Jeremy Corbyn said that if he were elected, homelessness within the UK would be ended within weeks, and it is not an exaggeration to say that would be entirely possible. There are far more homes than homeless people, and we have the resources to build many more.

So in this very UK-specific example, you imagine that the roughly 250k homeless could be housed in the roughly 250k empty homes. Would you want the government to just take those houses from whoever owned them and give them to the homeless? I'm not in any way against providing homes for the homeless, but can you see how that could cause many negative side effects?

> We don’t, because it would disrupt the ideology of capitalism, which requires the threat of homelessness and unemployment in order to force people to work for low wages.

Or we don't because no one wants to spend that kind of money for no return. What happens when doing so becomes effectively free? Do you think the government and people would like... ban efforts to build homes for the homeless?

> Wages and productivity have been detached for decades now - ie wages have remained stagnant while productivity has increased exponentially. Ai will increase productivity, but without changing the economic system the benefit will not be to ordinary people but to the profits of the rich.

A truly post AGI world would not have any human labour. It likely couldn't in any real way. How do you imagine a post AGI world still having labour?

> The upward momentum of the world you refer to is misleading. People like Bill Gates like to point to the fact that enormous numbers of people have been lifted out of poverty in recent decades, trying to attribute this to the implementation of neoliberal economics. They always neglect to point out that these stats are skewed by the work of the Chinese Communist Party, which has lifted 700 million people out of absolute poverty - more than any government in history. That has nothing to do with the political trajectory that the West has been on.

I'm African - how do you think Africa has fared in the last few decades when it comes to starvation and economic prosperity? We don't even need to include China; I think basically every developing country in the world is significantly better off today than it ever has been, minus a few outliers.

You think China lifted its people out of poverty without capitalism? Do you think China is not a capitalist country? I'm not a fan of capitalism, but I'm not going to let that blind me to the reality: the world is better off and continues to improve by almost every measurement of human success. It's not perfect, but so many people have an almost entirely detached view of the world compared to what it was even 30 years ago.

Edit: for some additional data

https://ourworldindata.org/a-history-of-global-living-conditions-in-5-charts

2

Proof_Deer8426 t1_j8pkc2x wrote

“How does it play out in your brain?”

One of the problems with the command economies that socialist states like the USSR tried to implement was that the maths was simply too complex to make them work efficiently. This is an example of something AI could be enormously useful for. Obviously no Western country is going to use AI for this, because it’s contrary to their ideological and economic systems. The use that technology is put to will follow ideology. When the Industrial Revolution happened, many people imagined a utopian future where the increase in productivity would lead to humans having to work far less. But today we still spend the majority of our lives at work, and if it wasn’t for the historical efforts of socialists and trade unionists, even the restrictions on working hours, and on children working, would not exist (there are states in the US even today that are trying to repeal child labour laws). Of course AI will have benefits in regard to medicine, farming and so on. Does that mean everyone will have access to medicine, or that the workers on farms will see any benefit? Technically that would be possible, but within the ideology of capitalism it will not occur. Homelessness, poverty and unemployment exist because they are necessary for the economic system to function, not because of a lack of resources or lack of a solution. The benefits of AI will be limited by and subsumed into the ideological system - a system designed to give power and luxury to a tiny few via enforcing deprivation on the majority.

“A truly post AGI world would not have any human labour”

Perhaps, but my point is only that gains in production and material abundance predominantly do not flow down to the working class/average person but up into the profits of the rich. Capitalism as we know it could not continue in a world where labour is unnecessary, but without changing the relations of power, the new system that emerges will simply mirror the old one. I could envision - as one possibility - a sort of tech neo-feudalism, where a Universal Basic Income is paid in return for some kind of military or other public service. But this income will go straight into the hands of the owning class - to rent, and to various rentier schemes ("you will own nothing and you will be happy", as the WEF put it). Of course this is only one scenario, but without changing the power relations, the system of deprivation for the masses and wealth and luxury for the few will remain regardless.

“You think China lifted it's people out of poverty without capitalism”

No - I agree China used capitalism to do that. But it was a different kind of capitalism from what is used in the West - goal-oriented, with a long-term vision and a materialist, Marxist outlook, which aims to use capitalism as a tool to develop productive forces, to the end of benefiting its citizens and ultimately transitioning to socialism. This is very different from the blind profit-seeking capitalism of the West.

1

wastedtime32 OP t1_j8p3dn1 wrote

This scares me even more. A utopia is impossible. The ruling class will use AI as a tool to in fact impose more suffering on the rest of us.

I don’t think a truly objective, all-knowing AI is possible, because objectivity doesn’t truly exist; truth is a construct. It scares me that people will worship AI under the assumption that it has no biases, either ones it developed on its own or ones imposed upon it by its creators.

1

Proof_Deer8426 t1_j8p91b7 wrote

I guess the problem I foresee is that AI should theoretically benefit humanity by increasing its productive capacity - it will make many jobs redundant and others far more efficient. And that could be a good thing, freeing humanity from all the restrictions imposed by economic necessity. The problem is that the ruling class aren’t actually motivated by greed for material wealth but by lust for power. And power within our economic system is dependent on deprivation - the wealthy are a class of people who own things, and via ownership are able to deprive and exploit others. Without deprivation and poverty, their power would cease to exist. Since the technology will effectively also be owned by these people, it will be used to support and sustain their power. How this will take shape is still unclear, but as the working class begins to lose the one form of power it still has - the ability to withhold work - and the power of the ruling class is massively boosted by their control of AI, it seems like the future could be headed down a potentially nightmarish path.

Ideologically, people may be inspired to think in a pseudo-objective way that they believe mirrors AI - you can already see this with the popularity of ideas like effective altruism, long-termism and the simulation ‘theory’. Anti-humanist ideas like eugenics and population control are likely to follow.

1

wastedtime32 OP t1_j8pajiu wrote

Yeah, this is exactly my fear about AI; I'm just not good at articulating it, so everyone here thinks I'm saying I want everything to stay the same. With the impending ecological collapse and resource depletion, and the fall of globalization and the inevitable rise of fascist-adjacent, chauvinistic, isolationist, hyper-militarized states, this is about as bad a backdrop to introduce AI against as I can imagine - but then again, I'm not sure there will ever be an "optimal" circumstance for it. But I do think this will all culminate in either a massive revolution or dystopia. I just don't see an in-between. If capitalism prevails into the post-scarcity world, we will be looking at the dystopia which many people here have confused for utopia.

Totalitarianism is coming soon, and to me AI is a vessel for it. It is the lock to that door, and it is in the hands of the ruling class already. There is a reason transhumanist and bioengineering ideas are more prevalent amongst the elite (think WEF): they know damn well most people will think of them as a means to accomplish equality and peace, but that is far from the case.

I guess from this post and the replies I'm learning that most people here have a hard-on for utopia and an ignorance of the political/economic implications. These reactions are exactly what the big tech developers want. Complacency. Surrender.

1

DesperateProblem7418 t1_j8sb2b1 wrote

>Well, the reason is pretty straightforward - we want to "solve" intelligence, so that it can solve all other problems for us, quickly and fairly. That's the pie-in-the-sky dream many people are pursuing.

I feel like the real reason is because we can. Humans want to satisfy their egos, cravings for power, and curiosity. I don't hate the WEF like a lot of conspiracy theorists out there, but Klaus Schwab recently stated in front of an entire audience of leaders and innovators that "he who masters AI will be the master of the world". It's all about power.

Maybe researchers actually do care about solving intelligence because they are curious and actually want to use it to help people, but the majority of other leaders and researchers want it because of the power it will bring them.

1