Gmroo

Gmroo t1_j3poqke wrote

Most certainly well before 2030. And I say this as someone who thinks AGI requires paradigm shifts. I already see signs of the sheer economic and intellectual force behind AI now digging into actions that likely lead to paradigm shifts. There is pressure and incentive to lower compute, apply the newest techniques across countless domains, innovate hardware, explore embodiment, etc.

This year we'll already get what I call pseudo-AGI. LLM-based Narrow AI that is general enough to be phenomenally useful when coupled with handy APIs and other modern techniques in AI.

26

Gmroo OP t1_j3nvkwi wrote

It's difficult to quantify, but the core point is that despite these cultural and linguistic differences we're relatively the same. It's when really different types of minds and entities are introduced that huge deviations from the norm become... the norm.

This augmentation is already underway in a soft way, via the phones and technology we use every day. I work in AI myself and I expect things to rapidly accelerate from here on out.

Although I think worldwide access to information will keep improving... something that is getting better every day... and poverty levels along with it, the people worst off will likely remain worst off.

The details are very hard to predict though. I personally am sure this intersubjectivity collapse must happen, because the space of possible mind designs is large, as we already see in the animal kingdom, and we're just not equipped as a society to deal with it.

I even speculate that some of these new communication barriers can't be overcome, for the same reason I can't check inside your head and body to figure out your internal states.

2

Gmroo OP t1_j3mv47a wrote

I think we need to work on figuring out what sort of universal languages may be created or may already exist. For example, exchanging knowledge graphs.
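To make the knowledge-graph idea concrete, here is a minimal, purely illustrative sketch of two minds exchanging facts as subject-predicate-object triples (the basic unit of a knowledge graph). The function names and the JSON wire format are assumptions for illustration, not a real protocol.

```python
import json

def serialize_graph(triples):
    """Encode a set of triples as JSON for transmission to another mind."""
    return json.dumps([list(t) for t in sorted(triples)])

def deserialize_graph(payload):
    """Decode a received JSON payload back into a set of triples."""
    return {tuple(t) for t in json.loads(payload)}

def merge_graphs(local, received):
    """The receiving mind folds newly learned facts into its own graph."""
    return local | received

# Two minds with partially overlapping knowledge.
mind_a = {("water", "boils_at", "100C"), ("fire", "is", "hot")}
mind_b = {("fire", "is", "hot"), ("ice", "is", "cold")}

# Mind A sends its graph; mind B merges it with its own.
merged = merge_graphs(mind_b, deserialize_graph(serialize_graph(mind_a)))
print(len(merged))  # 3 distinct facts after the exchange
```

The appeal of such a format is that it carries no assumptions about the sender's internal states, only declarative facts, which is exactly what you'd want when the minds involved share no common subjectivity.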

I don't think current memes prove anything, in the sense that with the introduction of new minds we'll have a whole other world of minds on our hands.

So tendencies of current minds are not that relevant.

Just imagine entities whose behavior completely doesn't jibe with what you're used to from humans. We're used to inferring each other's states because we're so alike. That's how evolution optimized us. But there is no universal principle that this needs to be the case. At all. Hence a total collapse of intersubjectivity once we have a free-for-all of mind designs.

1

Gmroo OP t1_j3l4zgi wrote

The post elaborates on it. I just tried to think through the logical consequences of what happens when you introduce basically alien minds into a civilization that caters 99% to one kind of mind. Dystopian or not, it is what it is.

3

Gmroo OP t1_j3l256o wrote

Once we create new minds, they will be so different that all communication will break down and we won't be able to predict each other's behavior or states.

Like if you cry now, I can make the reasonable assumption you are sad or in pain. We take tremendously many assumptions like this one for granted because we're all human and our diversity is quite low compared to a civilization that builds new types of minds.

So I argue this is a disaster waiting to happen.

2

Gmroo OP t1_j3l1wgp wrote

The summary of the abstract, by ChatGPT:

The intersubjectivity collapse refers to the breakdown of social and cultural norms in a civilization due to the proliferation of minds of different types and subjectivities that cannot communicate or coexist.

This will lead to conflicts and power imbalances, and make it difficult or impossible to predict the actions of others.

It's likely to occur in any society that significantly modifies its own minds or develops artificial intelligence, due to the vast range of potential mind designs.

To mitigate this risk, it's necessary to anticipate it by developing strategies for managing diversity of minds and working on imagining how to cooperate in a civilization of very different types of minds.

2

Gmroo OP t1_j3l0koo wrote

Once we start augmenting our minds and creating AIs that can participate in society, the subjectivity of these minds will be so different that all of our systems and ways of being will collapse.

These minds won't be able to predict each other. And none of our systems are ready for any of this.

When you think it through, it's a catastrophe about to happen, because we've custom-tailored our world to ourselves... since we're the sole dominant species.

It's easy to just shrug at this, because we're so used to things being the way they are.

3

Gmroo OP t1_j3l009u wrote

With all due respect, this is not the topic...did you even read it?

It's about the introduction of new mind architectures with new subjectivity and the consequences of that. It answers the question: "What happens to civilization when we can actually augment our minds and create all sorts of AIs?"

9

Gmroo OP t1_j3kyy6m wrote

Sorry, I tried to put it in one sentence with just 270 characters. Once we augment minds and create new minds with AI, we will have catastrophic communication issues due to the diverging subjectivity of these minds.

2

Gmroo OP t1_j3hcs00 wrote

Hard to say, but some argue phones and other external devices are already types of augmentations. I think large language models like ChatGPT are rapidly becoming ubiquitous and this year for many it'll become normal to have an A.I. assistant handy at all times. There is a gold rush underway.

So, in so many ways we're already accepting them. We just currently don't have the tech to connect them to the brain well. I think this can change surprisingly rapidly once we can put thousands of AI scientists to work.

Getting a model like GPT 3.5 there is not too difficult. Fine-tune on science papers, do reinforcement learning for math (it's not quite good at that, for reasons similar to why diffusion models like Midjourney or DALL-E 2 produce text-like gibberish), give it access to the web, and let it self-verify its output. That'd be a good start.
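The "let it self-verify its output" step could be sketched as a simple generate-check-revise loop. This is a hypothetical illustration: `ask_model` and `verify_against_web` are stubs standing in for a real LLM API call and a real web lookup, so only the control flow is shown.

```python
def ask_model(question, feedback=None):
    # Stub: a real system would call the language model here, optionally
    # conditioning on feedback from a failed verification round.
    return "draft answer" if feedback is None else "revised answer"

def verify_against_web(answer):
    # Stub: a real system would check the claim against retrieved sources.
    return answer == "revised answer"

def self_verifying_answer(question, max_rounds=3):
    """Generate an answer, verify it, and revise until the check passes
    or the round budget runs out."""
    answer = ask_model(question)
    for _ in range(max_rounds):
        if verify_against_web(answer):
            return answer
        answer = ask_model(question, feedback="verification failed")
    return answer

print(self_verifying_answer("When does water boil?"))  # "revised answer" after one retry
```

With the stubs above, the first draft fails verification and the revised answer passes on the second round; in a real pipeline the verifier's feedback would carry the retrieved evidence back into the next generation step.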

5