Submitted by zalivom1s t3_11da7sq in singularity
neonoodle t1_ja8ptso wrote
Reply to comment by CosmicVo in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
We can't get a room of 10 random people with aligned values. The chance of AI aligning values with all of humanity is pretty much nil.
drsimonz t1_ja8z9sb wrote
Not necessarily true. I don't think we really understand the true nature of intelligence. It could, for example, turn out that at very high levels of intelligence, an agent's values will naturally align with long-term sustainability, preservation of biodiversity, etc. due to an increased ability to predict future challenges. It seems to me that most of the disagreement on basic values among humans comes from the left side of the bell curve, where views are informed by nothing more than arbitrary traditions, and rational thought has no involvement whatsoever.
But yes, the alignment problem does feel kind of daunting when you consider how mis-aligned the human ruling class already is.
gcaussade t1_ja96ee8 wrote
The problem is, and a lot of humans would agree, that if that superintelligence decides that 2 billion fewer people on this Earth is the best way forward... both of us would feel that's a problem
drsimonz t1_ja9q5av wrote
That's an interesting question too. Alignment researchers like to talk about "X-risks" and "S-risks" but I don't see as much discussion on less extreme outcomes. A "steward" ASI might decide that it likes humanity, but needs to take control for our own good, and honestly it might not be wrong. Human civilization is doing a very mediocre job of providing justice, a fair market, and sustainable use of the earth's resources. Corruption is rampant even at the highest levels of government. We are absolutely just children playing with matches here, so even a completely friendly superintelligence might end up concluding that it must take over, or that the population needs to be reduced. Though it seems unlikely considering how much the carrying capacity has already been increased by technological progress. 100 years ago the global carrying capacity was probably 1/10 of what it is now.
ccnmncc t1_jad8yh2 wrote
The carrying capacity of an ecosystem is not increased by technology - at least not the way we use it.
drsimonz t1_jae74p2 wrote
To be fair, I don't have any formal training in ecology, but my understanding is that carrying capacity is the max population that can be sustained by the resources in the environment. Sure, we're doing a lot of things that are unsustainable long term, but if we suddenly stopped using fertilizers and pesticides, I think most of humanity would be dead within a couple years.
ccnmncc t1_jaet8sy wrote
I understand what you’re saying. We’ve developed methods and materials that have facilitated (arguably, made inevitable) our massive population growth.
We’ve taught ourselves how to wring more out of the sponge, but that doesn’t mean the sponge can hold more.
You caught my drift, though: we are overpopulated - whether certain segments of society recognize it or not - because on balance we use technology to extract more than we use it to replenish. As you note, that’s unsustainable. Carrying capacity is the average population an ecosystem can sustain given the resources available - not the max. It reflects our understanding of boom and bust population cycles. Unsustainable rates of population growth - booms - are always followed by busts.
We could feasibly increase carrying capacity by using technology to, for example, develop and implement large-scale regenerative farming techniques, which would replenish soils over time while still feeding humanity enough to maintain current or slowly decreasing population levels. We could also use technology to assist in the restoration, protection and expansion of marine habitats such as coral reefs and mangrove and kelp forests. Such applications of technology might halt and then reverse the insane declines in biodiversity we’re witnessing daily. Unless and until we take such measures (or someone or something does it for us), it’s as if we’re living above our means on ecological credit and borrowed time.
drsimonz t1_jaexoi0 wrote
Ok I see the distinction now. Our increased production has mostly come from increasing the rate at which we're depleting existing resources, rather than increasing the "steady state" productivity. Since we're still nowhere near sustainable, we can't really claim that we're below carrying capacity.
But yes, I have a lot of hope for the role of AI in ecological restoration. Reforesting with drones, hunting invasive species with killer robots, etc.
For a long time I've thought that we need a much smaller population, but I do think there's something to the argument that certain techies have made, that more people = more innovation. If you need to be in the 99.99th percentile to invent a particular technology, there will be more people in that percentile if the population is larger. This is why China wins so many Olympic medals - they have an enormous distribution to sample from. So if we wanted to maximize the health of the biosphere at some future date (say 100 years from now), would we be better off with a large population reduction or not? I don't know if it's that obvious. At any rate, ASI will probably make a bigger difference than a 50% change in population size...
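The percentile argument is just proportional arithmetic: with a fixed talent threshold, the head count above it scales linearly with population. A quick sketch (population figures are rough, for illustration only):

```python
# Rough illustration of the "more people = more innovation" argument:
# the number of people above a fixed percentile scales linearly
# with population size.

def people_above_percentile(population: int, percentile: float) -> int:
    """Approximate head count above the given percentile (0-100)."""
    return int(population * (100 - percentile) / 100)

# Approximate populations, purely illustrative.
for name, pop in [("USA", 330_000_000), ("China", 1_400_000_000)]:
    print(name, people_above_percentile(pop, 99.99))
# USA   -> 33,000 people above the 99.99th percentile
# China -> 140,000
```

Of course, this only says the pool of potential 1-in-10,000 talents grows with population; it says nothing about whether they get the education or resources to actually innovate.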
Nmanga90 t1_ja9ywnj wrote
Well not necessarily though. This could be accomplished in 50 years without killing anyone. Demographic transition models only have relevance with respect to labor, but if the majority of labor was automated, it wouldn’t matter if everyone only had 1 kid.
stupendousman t1_jaa4s4n wrote
> The problem is, and a lot of humans would agree, that if that superintelligence decides that 2 billion fewer people on this Earth is the best way forward
Well there are many powerful people who believe that right now.
Many of the fears about AI already exist. State organizations killed hundreds of millions of people in the 20th century.
Those same organizations have come up with many marketing and indoctrination strategies to make people support them.
AI(s) could do this as well.
That's a danger. But the danger has already occurred, is occurring. Look at Yemen.
ThatUsernameWasTaken t1_ja9pvz7 wrote
“There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”
~Iain M. Banks
Northcliff t1_ja9ks69 wrote
>the left side of the bell curve
🙄🙄🙄🙄
Aculem t1_ja9qg3x wrote
I think he means the left side of the bell curve of intelligence among humans, not the political left, which isn't exactly known for loving arbitrary traditions.
Northcliff t1_ja9wnju wrote
Saying the political left is equivalent to the right side of the bell curve of human intelligence is pretty cringe desu
MrYOLOMcSwagMeister t1_jaa1sr8 wrote
Please learn how to read and understand written text
HakarlSagan t1_ja9nspo wrote
Considering the DOE news this week, I'd say the eventual chance of someone intentionally creating a malicious superintelligence for "research purposes" and then accidentally letting it out is pretty high
Brashendeavours t1_ja9tj65 wrote
To be fair, the odds of aligning 10 people’s values is pretty low. Maybe start with two.
GrowFreeFood t1_jaanzei wrote
I will invite it to come over and chat about how we are all trapped in space-time and killing us would be completely pointless.
neonoodle t1_jaavj1j wrote
It read A Brief History of Time. It's already thought about it.