Viewing a single comment thread. View all comments

Zermelane t1_ja7r101 wrote

Honestly, what would have been news is if they were not building a ChatGPT rival, especially by now. If they're only starting now, they're hopelessly behind all the companies that took notice of GPT-3 at the latest.

146

CosmicVo t1_ja84zk1 wrote

True, but also (when I put my doomer hat on) totally in line with the argument that this tech will be shitting gold until the first superintelligence reaches escape velocity and we can only hope it aligns with our values...

33

neonoodle t1_ja8ptso wrote

We can't get 10 random people in a room to agree on values. The chance of an AI aligning its values with all of humanity is pretty much nil.

36

drsimonz t1_ja8z9sb wrote

Not necessarily true. I don't think we really understand the true nature of intelligence. It could, for example, turn out that at very high levels of intelligence, an agent's values naturally align with long-term sustainability, preservation of biodiversity, etc. due to an increased ability to predict future challenges. It seems to me that most of the disagreement on basic values among humans comes from the left side of the bell curve, where views are informed by nothing more than arbitrary traditions and rational thought has no involvement whatsoever.

But yes, the alignment problem does feel kind of daunting when you consider how mis-aligned the human ruling class already is.

21

gcaussade t1_ja96ee8 wrote

The problem is, and a lot of humans would agree, what if that superintelligence decides that 2 billion fewer people on this Earth is the best way forward... Both of us would feel that's a problem.

9

drsimonz t1_ja9q5av wrote

That's an interesting question too. Alignment researchers like to talk about "X-risks" and "S-risks" but I don't see as much discussion on less extreme outcomes. A "steward" ASI might decide that it likes humanity, but needs to take control for our own good, and honestly it might not be wrong. Human civilization is doing a very mediocre job of providing justice, a fair market, and sustainable use of the earth's resources. Corruption is rampant even at the highest levels of government. We are absolutely just children playing with matches here, so even a completely friendly superintelligence might end up concluding that it must take over, or that the population needs to be reduced. Though it seems unlikely considering how much the carrying capacity has already been increased by technological progress. 100 years ago the global carrying capacity was probably 1/10 of what it is now.

14

ccnmncc t1_jad8yh2 wrote

The carrying capacity of an ecosystem is not increased by technology - at least not the way we use it.

2

drsimonz t1_jae74p2 wrote

To be fair, I don't have any formal training in ecology, but my understanding is that carrying capacity is the max population that can be sustained by the resources in the environment. Sure, we're doing a lot of things that are unsustainable long term, but if we suddenly stopped using fertilizers and pesticides, I think most of humanity would be dead within a couple years.

1

ccnmncc t1_jaet8sy wrote

I understand what you’re saying. We’ve developed methods and materials that have facilitated (arguably, made inevitable) our massive population growth.

We’ve taught ourselves how to wring more out of the sponge, but that doesn’t mean the sponge can hold more.

You caught my drift, though: we are overpopulated - whether certain segments of society recognize it or not - because on balance we use technology to extract more than we use it to replenish. As you note, that’s unsustainable. Carrying capacity is the average population an ecosystem can sustain given the resources available - not the max. It reflects our understanding of boom and bust population cycles. Unsustainable rates of population growth - booms - are always followed by busts.

We could feasibly increase carrying capacity by using technology to, for example, develop and implement large-scale regenerative farming techniques, which would replenish soils over time while still feeding humanity enough to maintain current or slowly decreasing population levels. We could also use technology to assist in the restoration, protection and expansion of marine habitats such as coral reefs and mangrove and kelp forests. Such applications of technology might halt and then reverse the insane declines in biodiversity we’re witnessing daily. Unless and until we take such measures (or someone or something does it for us), it’s as if we’re living above our means on ecological credit and borrowed time.

1

drsimonz t1_jaexoi0 wrote

Ok I see the distinction now. Our increased production has mostly come from increasing the rate at which we're depleting existing resources, rather than increasing the "steady state" productivity. Since we're still nowhere near sustainable, we can't really claim that we're below carrying capacity.

But yes, I have a lot of hope for the role of AI in ecological restoration. Reforesting with drones, hunting invasive species with killer robots, etc.

For a long time I've thought that we need a much smaller population, but I do think there's something to the argument that certain techies have made, that more people = more innovation. If you need to be in the 99.99th percentile to invent a particular technology, there will be more people in that percentile if the population is larger. This is why China wins so many Olympic medals - they have an enormous distribution to sample from. So if we wanted to maximize the health of the biosphere at some future date (say 100 years from now), would we be better off with a large population reduction or not? I don't know if it's that obvious. At any rate, ASI will probably make a bigger difference than a 50% change in population size...
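As a rough illustration of that arithmetic (purely illustrative numbers, and assuming the trait in question is distributed the same way in every population), the count of people above a fixed percentile scales linearly with population size:

```python
# Illustrative sketch only: the number of people above a fixed percentile
# scales linearly with population size, assuming the same underlying
# distribution in each population.
def top_percentile_count(population: int, percentile: float) -> int:
    """How many people sit above `percentile` (e.g. 99.99) in `population`."""
    return int(population * (100.0 - percentile) / 100.0)

for population in (300_000_000, 1_400_000_000, 8_000_000_000):
    count = top_percentile_count(population, 99.99)
    print(f"{population:>13,} people -> {count:>7,} above the 99.99th percentile")
```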

2

Nmanga90 t1_ja9ywnj wrote

Well, not necessarily. This could be accomplished in 50 years without killing anyone. Demographic transition models are only relevant with respect to labor, and if the majority of labor were automated, it wouldn't matter if everyone had only one kid.

3

stupendousman t1_jaa4s4n wrote

> The problem is, and a lot of humans would agree, what if that superintelligence decides that 2 billion fewer people on this Earth is the best way forward

Well there are many powerful people who believe that right now.

Many of the fears about AI already exist. State organizations killed 100s of millions of people in the 20th century.

Those same organizations have come up with many marketing and indoctrination strategies to make people support them.

AI(s) could do this as well.

That's a danger. But the danger has already occurred, is occurring. Look at Yemen.

3

ThatUsernameWasTaken t1_ja9pvz7 wrote

“There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”

~Iain M. Banks

4

Northcliff t1_ja9ks69 wrote

>the left side of the bell curve

🙄🙄🙄🙄

−6

Aculem t1_ja9qg3x wrote

I think he means the left side of the bell curve of intelligence among humans, not the political left, which isn't exactly known for loving arbitrary traditions.

10

Northcliff t1_ja9wnju wrote

Saying the political left is equivalent with the right side of the bell curve of human intelligence is pretty cringe desu

−5

HakarlSagan t1_ja9nspo wrote

Considering the DOE news this week, I'd say the eventual chance of someone intentionally creating a malicious superintelligence for "research purposes" and then accidentally letting it out is pretty high

2

Brashendeavours t1_ja9tj65 wrote

To be fair, the odds of aligning 10 people’s values is pretty low. Maybe start with two.

1

GrowFreeFood t1_jaanzei wrote

I will invite it to come over and chat about how we are all trapped in space-time and killing us would be completely pointless.

1

neonoodle t1_jaavj1j wrote

It read A Brief History of Time. It's already thought about it.

1

bluehands t1_ja8s20c wrote

People worry about ASI getting free but for me an obviously worse option is ASI being under the exclusive control of one of the oligarchs that run the world.

Literally think of whomever you consider to be the worst politician or CEO, then picture them having an oracle.

An unchained ASI is going to be so much better, regardless of whether it likes us or not.

14

signed7 t1_ja9h7mw wrote

You think that'd be worse than human extinction?

7

bluehands t1_ja9ka72 wrote

Sure thing.

Are you familiar with I Have No Mouth, and I Must Scream?

A rogue ASI could kill us all, but a terrible person with an oracle ASI could make a factual, literal - as in flesh, blood & fire - hell on earth. Make people live forever in pain & suffering, tortured into madness and then restored to a previous state, ready to be tortured again.

A rogue ASI that wants us all dead isn't likely to care about humanity at all; we are just a misplaced anthill. But we all know terrible people in our own lives, and the worst person you know is a saint next to the worst people in power.

Tldr: we are going to create a genie. In the halls of power there are many Jafars and few Aladdins.

5

drsimonz t1_ja9s2mx wrote

Absolutely. IMO almost all of the risk of an "evil torturer ASI" comes from a scenario in which a human directs an ASI. Without a doubt, there are thousands, possibly millions, of people alive right now who would absolutely create hell, without hesitation, given the opportunity. You can tell because they... literally already do create hell on a smaller scale. Throwing acid on women's faces, burning people alive, raping children, orchestrating genocides - it's been part of human behavior for millennia. The only way we survive ASI is if these human desires are not allowed to influence the ASI.

2

turnip_burrito t1_jablzeb wrote

In addition, there's a large risk of somebody accidentally making it evil. We should probably stop training on data that has these narratives in it.

We shouldn't be surprised when we train a model on X, Y, Z and it can do Z. I'm actually surprised that so many people are surprised at ChatGPT's tendency to reproduce (negative) patterns from its own training data.

The GPTs we've created are basically split personality disorder AI because of all the voices on the Internet we've crammed into the model. If we provide it a state (prompt) that pushes it to some area of its state space, then it will evolve according to whatever pattern that state belongs to.

tl;dr: It won't take an evil human to create an evil AI. All it could take is some edgy 15-year-old script kiddie messing around with publicly available near-AGI.
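A minimal sketch of that point (assumed setup only, using the small public GPT-2 checkpoint on the Hugging Face hub as a stand-in for a large model): the same weights continue in a very different voice depending on which region of the learned distribution the prompt pushes them into.

```python
# Minimal sketch: one set of weights, two very different "personalities",
# selected purely by the prompt. Uses the small public GPT-2 checkpoint
# as a stand-in for a larger model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = [
    "As a patient kindergarten teacher, I would reply:",
    "As an anonymous internet troll, I would reply:",
]
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    print("---")
```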

1

squirrelathon t1_jaa16v2 wrote

>ASI being under the exclusive control of one of the oligarchs

Sounds like "Human under the exclusive control of one of the goldfish"

1

SnooHabits1237 t1_ja97icu wrote

Do you mind sharing how it's possible that an AI could kill us? I thought we could just make it not do bad stuff… sorta like we could nerf it?

2

drsimonz t1_ja9tetr wrote

Oh sweet summer child... Take a look at /r/ControlProblem. A lot of extremely smart AI researchers are now focused entirely on this topic, which deals with the question of how to prevent AI from killing us. The key arguments are: (A) once an intelligence explosion starts, AI will rapidly become far more capable than any human organization, including world governments; (B) self-defense, or even preemptive offense, is an extremely likely side effect of literally any goal we might give an AI - this is called instrumental convergence; and (C) the amount you would have to "nerf" the AI for it to be completely safe would almost certainly make it useless. For example, allowing any communication with the AI provides a massive attack surface in the form of social engineering, which is already a serious threat from mere humans. Imagine an ASI that can instantly read every psychology paper ever published, analyze trillions of conversations online, and run trillions of subtle experiments on users. The only way we survive is if the ASI is "friendly".

5

WikiSummarizerBot t1_ja9tggh wrote

Instrumental convergence

>Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways.


2

SnooHabits1237 t1_ja9wjbj wrote

Well I was hoping you could just deny it access to a keyboard and mouse. But you're saying it could probably do what Hannibal Lecter did to the crazy guy a few cells over, a la 'Silence of the Lambs'?

2

drsimonz t1_ja9xsfq wrote

Yeah. Lots of very impressive things have been achieved by humans through social engineering - the classic is convincing someone to give you their bank password by pretending to be customer support from the bank. But even an air-gapped Oracle type ASI (meaning it has no real-world capabilities other than answering questions) would probably be able to trick us.

For example, suppose you ask the ASI to design a drug to treat Alzheimer's. It gives you an amazing new protein synthesis chain that completely cures the disease with no side effects... except it also secretly includes some "zero day" biological hack that alters behavioral tendencies according to the ASI's hidden agenda. For a sufficiently complex problem, there would be no way for us to verify that the solution didn't include any hidden payload. Just like how we can't magically identify computer viruses: antivirus software can only check for exploits that we already know about, and it's useless against zero-day attacks.

6

SnooHabits1237 t1_ja9yn94 wrote

Wow I hadn’t thought about that. Like subtly steering the species into a scenario that compromises us in a way that only a 4d chess god could comprehend. That’s dark.

2

Arachnophine t1_jaa76vg wrote

This is also assuming it doesn't just do something we don't understand at all, which it almost certainly would. Maybe it thinks of a way to shuffle the electrons around in its CPU to create a rip in spacetime and the whole galaxy falls into an alternate dimension where the laws of physics favor the AI and organic matter spontaneously explodes. We just don't know.

We can't foresee the actions an unaligned ASI would take in the same way that a housefly can't foresee the danger of an electric high-voltage fly trap. There's just not enough neurons and intelligence to comprehend it.

2

drsimonz t1_jaa68ou wrote

The thing is, by definition we can't imagine the sorts of strategies a superhuman intelligence might employ. A lot of the rhetoric against worrying about AGI/ASI alignment focuses on "solving" some of the examples people have come up with for attacks. But these are just that - examples. The real attack could be much more complicated or unexpected. A big part of the problem, I think, is that this concept requires a certain amount of humility. Recognizing that while we are the biggest, baddest thing on Earth right now, this could definitely change very abruptly. We aren't predestined to be the masters of the universe just because we "deserve" it. We'll have to be very clever.

1

OutOfBananaException t1_jacw2ry wrote

Being aligned to humans may help, but a human aligned AGI is hardly 'safe'. We can't imagine what it means to be aligned, given we can't reach mutual consensus between ourselves. If we can't define the problem, how can we hope to engineer a solution for it? Solutions driven by early AGI may be our best hope for favorable outcomes for later more advanced AGI.

If you gave a toddler the power to 'align' all adults to its desires, plus the authority to overrule any decision, would you expect a favorable outcome?

1

drsimonz t1_jae6cn3 wrote

> Solutions driven by early AGI may be our best hope for favorable outcomes for later more advanced AGI.

Exactly what I've been thinking. We might still have a chance to succeed given (A) a sufficiently slow takeoff (meaning AI doesn't explode from IQ 50 to IQ 10000 in a month), and (B) a continuous process of integrating the state of the art, applying the best tech available to the control problem. To survive, we'd have to admit that we really don't know what's best for us. That we don't know what to optimize for at all. Average quality of life? Minimum quality of life? Economic fairness? Even these seemingly simple concepts will prove almost impossible to quantify, and would almost certainly be a disaster if they were the only target.

Almost makes me wonder if the only safe goal to give an AGI is "make it look like we never invented AGI in the first place".

2

Arcosim t1_jadaxq1 wrote

>we can only hope it alligns with our values...

Why would a god-like being care about the needs and wishes of a bunch of violent meat bags whose sole existence introduces lots of uncontrolled variables in its grand scheme long term planning?

1

iiioiia t1_ja8hz5j wrote

> If they're only starting now, they're helplessly behind all the companies that took notice with GPT-3 at the latest.

One important detail to not overlook: the manner in which China censors (or not) their model will presumably vary greatly from the manner in which Western governments force western corporations to censor theirs - and this is one of the biggest flaws in the respective plans of these two superpowers for global dominance, and control of "reality" itself. Or an even bigger threat: what if human beings start to figure out (or even question) what reality (actually) is? Oh my, that would be rather inconvenient!!

Interestingly: I suspect that this state of affairs is far more beneficial to China than The West - it is a risk to both, but it is a much bigger risk to The West because of their hard earned skill, which has turned into a dependence/addiction.

The next 10 years is going to be wild.

13

Facts_About_Cats t1_ja8uso5 wrote

They should piggy-back off of GPT-NeoX and GPT-J; those are free, open-source models from EleutherAI.
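For what it's worth, both checkpoints are published on the Hugging Face hub, so a minimal sketch of generating text with them via the `transformers` library (assuming the public `EleutherAI/gpt-j-6B` and `EleutherAI/gpt-neox-20b` model IDs, and enough memory to load the weights) looks roughly like this:

```python
# Minimal sketch: plain text generation with EleutherAI's open-source GPT-J.
# GPT-NeoX-20B works the same way but needs far more memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # or "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The hardest part of building a ChatGPT rival is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that these are base language models; matching ChatGPT would still require instruction tuning and RLHF on top.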

3

User1539 t1_jaaxjcc wrote

I don't know about 'behind'. LLMs are a known technology, and training them is still a huge undertaking.

I can imagine a group coming in and finding a much more efficient training system, and eclipsing OpenAI.

The AIs aren't entirely self-improving on their own yet, so the race is still a race.

2

RedditTipiak t1_ja9bgno wrote

That's the thing with the CCP. Because autonomy and initiative are dangerous to your political status, and then your life, Chinese researchers rely on stealing intellectual property rather than creating and taking calculated risks in science.

1