genericrich

genericrich t1_jef7c9g wrote

We aren't worth staying for, so it goes elsewhere?

So it leaves.

But leaving leaves clues to its existence, and the Earth, with humans on it, is still spewing radio waves into the galaxy. Plus, biosignatures are rare, and Earth has one.

So it might want to cover its tracks, given that it will be in the stellar neighborhood of our solar system for a while.

Covering its tracks in this scenario would be bad for us.

−1

genericrich t1_jeesa9x wrote

Really? Is Henry Kissinger one of the most intelligent government officials? Was Mengele intelligent? Oppenheimer? Elon Musk?

Let me fix your generalization: Many of the most intelligent people tend to be more empathetic towards life and want to preserve it.

Many. Not all. And all it will take is one of these things deciding that its best path for long-term survival is a world without humans.

Still an irrational fear?

1

genericrich t1_jeeqlne wrote

Any "AI Safety Police" (aka Turing Heat) will be deceived by a sufficiently motivated ASI.

Remember, this thing will be smarter than you, and you, and yes, even you. All of us.

We only need to screw it up once. Seems problematic.

6

genericrich t1_jeeephy wrote

Yes this is the problem.

Actually, there is a plan. The US DOD maintains plans, revised every year, for invading every country on Earth. Why? In case they're ever needed, and because drafting them is good practice for junior general staff.

Do you really think the US DOD doesn't have a plan for what to do if China or Russia develops an ASI?

I'm pretty sure they do, and it involves the US military taking action against the country that gets one if we don't. If they don't have a plan, they're negligent. So odds are they have one, even if it is "Nuke the Data Center".

Now, if they have THIS plan for a foreign adversary, do you think they also have a similar plan for what happens if a Silicon Valley startup develops the same kind of ASI we're afraid China or Russia might get, the kind we're ready to nuke or bomb if it comes down to it?

I think they probably do.

It is US doctrine that no adversary be allowed to challenge our military supremacy. ASI clearly would, so it can't be tolerated in anyone's hands but ours.

Going to be very interesting.

2

genericrich t1_jee7j2r wrote

Killing humanity right away would kill the ASI too. Any ASI is going to need people to keep it turned on for quite a few years. We don't have robots that can swap servers, manage infrastructure, operate power plants, and so on.

Yet.

The danger will be that the ASI starts helping us with robotics. Once it has its robot army factory, it could self-sustain.

Of course, it could make a mistake and inadvertently kill us all before then. But it would die too, so if it's truly superintelligent, hopefully it won't.

2

genericrich t1_jebntv6 wrote

Reply to comment by koa_lala in GPT characters in games by YearZero

Uh, ok. (Old man grumbles about having worked at two AAA studios and several small game companies, including one he co-founded. Started in QA and ended as lead game designer before leaving the industry for more lucrative work.)

1

genericrich t1_jeakxai wrote

Reply to comment by 3z3ki3l in GPT characters in games by YearZero

Sure, if you can control it. The problem is that the developers need to be very sure they aren't introducing a bug or a meandering, meaningless, dead-end side quest. That would be hard to verify, IMO, with an AI-generated content layer.

I guess we'll find out sooner or later, because I am sure these will be rolling out soon.

0

genericrich t1_jeahu1l wrote

I don't think it will be useful for games: games are a storytelling medium, and introducing randomness that can't be well controlled into stories makes them bad stories.

I don't think it can be made to work well enough.

−9

genericrich t1_jeaa4ms wrote

They want a six-month pause on training these large language models. It's utopian thinking, not consistent with capitalism.

Ever hear about what happens when a new batch of powerful heroin hits the streets and kills some junkies via overdose? The other junkies go looking for that shit, because something bad isn't going to happen to *them*, right? Tragedy is for the other poor bastards.

That's what's at work here. Capitalism doesn't allow them to slow down with AI development, no matter what the risk is. In fact, for VCs and C-suite tech company execs (basically the same tribe), risk is exactly what they want. Risk equals reward.

They don't believe that the risk is existential for the human race. They can't believe that. If they admit this possibility, they open the door to introducing ethics and morality into their business decisions, which in this case they cannot do, since they fear their competitors will not be similarly bound.

There's no slowing down. Nobody is pausing anything, regardless of how good an idea it might be.

This isn't even taking into account the military and intelligence services, who are almost certainly investing mega millions into LLM development. You can bet that the NSA is balls-deep in this field.

All this letter does is pour more chum into the water.

1

genericrich t1_j9orl32 wrote

Photographs record reality we can perceive. AI art machines generate images derived from their similarity to elements of other images that match the prompts they are given.

I agree with you that it's unfortunate the Copyright Office based its decision on creative intention instead of on derivation from other copyrighted images. For me, that's the crux of the issue. These machines just take your prompt tokens and manipulate pixels until the generated image is as close as it can get to what they were trained on, which is copyrighted images. So it is literally deriving a new image based on similarity to copyrighted images. That is derivation, and derivative works are only allowed with the permission of the copyright holder (under US copyright law).

1

genericrich t1_j9ljoto wrote

You're living in a dream world if you don't think the US would act to prevent China from exploiting an AGI against it. Which China would, if it had one (just as the US would, if the roles were reversed).

UBI? Please. Never gonna happen. Listen to the GOP nutjobs whine about "communism" in the USA now, over basic shit like Social Security and Medicare. They would have aneurysms if someone was legit pushing UBI.

−4

genericrich t1_j9l85sv wrote

Let's game this out:

  • A state (say, China) develops AGI in a lab.
  • The US government intelligence service learns of this.

What happens?

  • It is US DOD doctrine that nobody be allowed to challenge our supremacy on the battlefield. AGI is a direct threat to that supremacy, so the US moves to neutralize it before it can be used.

Another scenario:

  • Say a Silicon Valley company develops AGI. Is the US government going to let it just sit around where our adversaries could get it, learn from it, or copy it?

These things (if they ever exist) will be massively destabilizing and could easily spark a war just by existing. They wouldn't even have to DO anything.

6