
Cuissonbake t1_isese0w wrote

The system we live in rewards that behaviour because everything about it is about competition and being the winner. The only way to fix it is to change how the system works, but how?

34

mootcat t1_isfk0lv wrote

Abolishing capitalism and instituting a true representative democracy would be a great start. Shift control of the means of production to the working class.

The biggest issue is the human corruption that will inevitably grow over time. Every system, no matter how many checks and balances it has, will be subject to coercion and corruption if it relies on humans in pivotal roles.

Eliminating the human element is something this subreddit has unique insight into, and something I believe is necessary for a true egalitarian (and livable) future.

5

Cuissonbake t1_isfky89 wrote

Sure, but because of people like Elon, a lot of my friends don't trust the tech industry at the moment. They always go off about how Web 3.0 will force everyone to pay subscription fees for everything in the future.

1

mootcat t1_isfowa1 wrote

They aren't at all wrong to do so. It should be very clear to anyone paying attention that the subscription model for everything in life has been pushed harder and harder. Per Klaus Schwab's "You will own nothing and be happy," it is a coordinated worldwide effort.

We can build AI and tech without corporations. Many incredible breakthroughs have come from government research.

2

Artanthos t1_isfmp53 wrote

Subscription fees and individual transactions.

And you should be blaming the models developed by mobile gaming platforms and Microsoft, not Musk.

1

Artanthos t1_isflwsk wrote

What economic system do you have that is better than capitalism?

1

mootcat t1_isfp96z wrote

I don't think such a system currently exists. I would like to apply AI to help address that issue, with equality, sustainability, and global cooperation as the prime factors for consideration.

I also think it's silly to expect a single system of governance to be ideal for all purposes at all points in time. We've seen how badly the founding principles of the US have adapted to modern technology and corruption.

4

Ortus12 t1_isf8eff wrote

Making companies pay for the negative effects they have on the world, in proportion to those effects, is one possible solution.

Examples (some are already common):

  • Carbon Tax
  • Tax on Cigarettes and Alcohol
  • Tax on foods contributing to poor human health outcomes (in proportion to how unhealthy those foods are)
  • Lawsuits against the media for slander
  • Cancel Culture - When it's based on facts instead of snap judgments and narratives, this does a huge amount to curb the harmful effects of companies. A good recent example is PayPal after it claimed it would be the arbiter of truth and steal $2,500 from everyone it disagrees with.

As well as legal punishments and laws for those who engage in highly damaging activities.

AI and AGI can be used to do all of this more effectively.
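
Just to make "in proportion to those effects" concrete, here's a toy sketch of a Pigouvian-style levy that scales linearly with the measured harm. The carbon price and emissions figures are invented for the example, not real policy numbers:

```python
# Toy illustration of "pay in proportion to the harm" (a Pigouvian-style levy).
# The carbon price and emissions below are made-up example numbers.

CARBON_PRICE_PER_TON = 50.0  # hypothetical $/ton of CO2


def carbon_levy(tons_co2_emitted: float) -> float:
    """The levy scales linearly with the measured externality."""
    return tons_co2_emitted * CARBON_PRICE_PER_TON


# A company emitting 1,000,000 tons would owe $50,000,000 under these toy numbers.
print(carbon_levy(1_000_000))
```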

3

Cuissonbake t1_isfa2zm wrote

Yeah, but you will also have entire groups of people against everything you just said, which amounts to our current political situation: complete gridlock, so the system just stays as it currently is.

And there are also people born with psychopathic neurology, so even if we had a better system, corruption would still happen. I guess it would be more readily recognized and addressed, but the problem would persist until we figure out how human brains work. And we'd have to hope to God that the medical staff in charge of understanding our brains isn't corrupted when that happens.

2

Ortus12 t1_isfzfa3 wrote

The gridlock exists because people have different beliefs and values.

Many people honestly value glazed donuts and chips far more than they fear the downside of getting a heart attack at forty. They really do want to live a short life full of fast food and other treats, and they don't want taxes on those foods reducing their purchasing power.

We need to accept that other people have different values, and we as individuals can't expect the whole world to bend to us.

As far as beliefs go, people hold many different ones, which is a problem since many of those beliefs are shaped by propaganda.

Right now, when people see an article/video/post about something after work, they often do not have the energy to read the research paper or other data sources it references, let alone think and reflect on the methodology and interpretation of that data. So they believe the clickbait.

When we have full automation, people will have more time to research, think, reflect, have conversations and debates, and even conduct their own experiments.

I don't think society would allow the government to force a procedure or pill on everyone that changed their brains in any way. The psychopaths would band together to prevent those laws from passing, find ways to make themselves the exceptions, or find ways to make it appear they got the treatment when they didn't.

1

duckduckduck21 t1_isf77qn wrote

The book Scythe attempts to address this. Basically, the AI would need to be both benevolent and all-powerful, as the resistance to change from the "ruling class" would be severe.

2

IdealAudience t1_isfovsn wrote

Tier lists from worst to best (ideally backed by good data) and ESG ratings (environmental, social, & governance) - like social credit scores for companies (and politicians?) - aren't done perfectly yet, but I see them as a primary way forward..

Along with better cyber-models of existing operations and proposals for review.

Of course it's a lot more effective when more consumers, investors, skilled workers, contracts, & government funding are going to good X over evil Y.. but we can make that more obvious and easier with better ratings and platforms.

AI can absolutely help gather that data, crunch the numbers, make comparisons, generate 'Amazon'-style recommendations (ideally pointing to an eco/socially beneficial alternative), and build eco/socially beneficial investment portfolios.
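
Just to make the number-crunching part concrete, here's a minimal toy sketch (hypothetical weights, company names, and sub-scores, not any real ESG methodology) of how weighted ratings could rank companies from best to worst:

```python
# Toy sketch: combine hypothetical ESG sub-scores (0-100) into one weighted
# score and rank companies, the way a recommendation platform might surface
# better alternatives. All names, weights, and numbers are made up.

ESG_WEIGHTS = {"environmental": 0.4, "social": 0.4, "governance": 0.2}


def esg_score(ratings: dict) -> float:
    """Weighted average of the ESG sub-scores."""
    return sum(ESG_WEIGHTS[k] * ratings[k] for k in ESG_WEIGHTS)


companies = {
    "GoodCo": {"environmental": 82, "social": 75, "governance": 90},
    "EvilCorp": {"environmental": 23, "social": 31, "governance": 40},
}

# Rank from best to worst and print a simple tier list.
for name, ratings in sorted(companies.items(), key=lambda kv: esg_score(kv[1]), reverse=True):
    print(f"{name}: {esg_score(ratings):.1f}")
```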


Obviously it's a bit more complicated to achieve true, deep, sincere environmental and social concern and virtuous action among oligarchs and business students & politicians and media..

(we can probably help this along with better engineered cyber-world education, training, guidance, therapy, community.. + medication, psychedelics.. )

But in the meantime, we can do better at helping people, workers, consumers, voters, city councils.. move their $ and labor towards the better, away from the worst.. make 'the game' winnable only by the more ethical, the most ethical, the most eco/socially beneficial..

to make low wages, exploitation, energy footprint.. on par with obvious racism, sexism, pollution.. generally bad for business.

Clearly some executives and boards and politicians were just going along with being not-racist and not-sexist only on the surface .. to win the game .. but for the purposes of this experiment, that's progress..

Meanwhile, genuinely good, beneficial, ethical, effective etc. shops, projects, programs, program coordinators.. should be getting more support, investment, skilled ethical workers, contracts, community partners, smart cooperative network help.. and so on, while evil mega-corps and bags-of-crap are trying to change gears.

1

AdditionalPizza OP t1_isetkbk wrote

That's the question in the post, basically. Could the system fall apart when enough people in power are cured of these traits?

It's not really a biohacking suggestion. Rather, say you go to a doctor, they test your blood/DNA or whatever and run it through a diagnostic AI that tells you your levels of everything, your predispositions, potential precursors, and current illnesses. It then creates a custom-tailored medical regimen to cure and prevent.

I honestly don't see how that won't be a thing soon; it's logical, no?

I'm not sure how far-fetched it is for AI to help figure out the causes of mental illnesses and how to treat them. People would just take the medication because it would cure everything they have.

−1

R3StoR t1_iseywzn wrote

THX 1138 (film) somehow springs to mind here. Must regulate those competitive emotional excesses!

5

mootcat t1_isfkg6z wrote

Testing for empathy and selflessness would be interesting. In general, we could have much higher standards for our representatives.

The issue now is that nothing will be implemented to change the status quo without mass upheaval and revolution.

I think it's more about trying to ensure the right kind of system is implemented after the inevitable collapse of our current one.

2

AdditionalPizza OP t1_isfsq7p wrote

I agree. My post is just optimism about the process by which that could possibly happen. Hopefully we don't focus so much on fearing AI alignment that we forget to fear each other. Both are equally important, for the majority of society anyway.

2