genericrich t1_jef7c9g wrote
Reply to comment by bigbeautifulsquare in This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
We aren't worth staying for, so it goes elsewhere?
So it leaves.
But leaving still leaves clues to its existence, and the Earth, with humans on it, keeps spewing radio waves into the galaxy. Plus, biosignatures are rare, and Earth has one.
So it might want to cover its tracks, given that it will be in our solar system's stellar neighborhood for a while.
Covering its tracks in this scenario would be bad for us.
genericrich t1_jef6xyt wrote
Reply to comment by flexaplext in This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
Works great until it doesn't, right?
genericrich t1_jef6keo wrote
Is it even possible to "align" a system if you can't reliably understand what is happening inside it? How can you be sure it isn't deceiving you?
genericrich t1_jef51n0 wrote
Reply to comment by Saerain in 🚨 Why we need AI 🚨 by StarCaptain90
Good, we agree. Semantic games aside, many is still not all, and just one of these going rogue in unpredictable ways is risk enough to be concerned about.
genericrich t1_jeesa9x wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Really? Is Henry Kissinger one of the most intelligent government officials? Was Mengele intelligent? Oppenheimer? Elon Musk?
Let me fix your generalization: Many of the most intelligent people tend to be more empathetic towards life and want to preserve it.
Many. Not all. And all it will take is one of these things deciding that its best path for long-term survival is a world without humans.
Still an irrational fear?
genericrich t1_jeeqlne wrote
Reply to 🚨 Why we need AI 🚨 by StarCaptain90
Any "AI Safety Police" (aka Turing Heat) will be deceived by a sufficiently motivated ASI.
Remember, this thing will be smarter than you, or you, and yes, even you. All of us.
We only need to screw it up once. Seems problematic.
genericrich t1_jeeephy wrote
Reply to comment by ilikeover9000turtles in ASI Is The Ultimate Weapon, And We Are In An Arms Race by ilikeover9000turtles
Yes this is the problem.
Actually, there is a plan. The US DOD has plans, revised every year, for invading every country on Earth. Why do they do this? Just in case they need to, and it's good practice for low-level general staff.
Do you really think the US DOD doesn't have a plan for what to do if China or Russia develop an ASI?
I'm pretty sure they do, and it involves the US Military taking action against the country that has one if we don't. If they don't have a plan, they are negligent. So odds are they have a plan, even if it is "Nuke the Data Center".
Now, if they have THIS plan for a foreign adversary, do you think they also have a similar plan for what happens if a Silicon Valley startup develops the same kind of ASI we're afraid China and Russia might get, the one we're ready to nuke or bomb if it comes down to it?
I think they probably do.
It is US doctrine that no adversary be allowed to challenge our military supremacy. ASI clearly would, so it can't be tolerated in anyone's hands but ours.
Going to be very interesting.
genericrich t1_jee7j2r wrote
Reply to comment by [deleted] in ASI Is The Ultimate Weapon, And We Are In An Arms Race by ilikeover9000turtles
Killing humanity right away would kill it too. Any ASI is going to need people to keep it turned on for quite a few years. We don't have robots that can swap servers, manage infrastructure, operate power plants, etc.
Yet.
The danger will be that the ASI starts helping us with robotics. Once it has its robot army factory, it could self-sustain.
Of course, it could make a mistake and kill us all inadvertently before then. But it would die too, so if it is superintelligent, hopefully it won't.
genericrich t1_jee793c wrote
Reply to comment by ilikeover9000turtles in ASI Is The Ultimate Weapon, And We Are In An Arms Race by ilikeover9000turtles
Hope is not a plan.
genericrich t1_jebntv6 wrote
Reply to comment by koa_lala in GPT characters in games by YearZero
Uh, ok. (Old man grumbles about having worked at two AAA studios and several small game companies, including one he co-founded. Started in QA and ended as lead game designer before leaving the industry for more lucrative work.)
genericrich t1_jeaoj5c wrote
Reply to comment by 3z3ki3l in GPT characters in games by YearZero
OK, if you can verify that it won't become racist or introduce another problem that a AAA game studio won't want in their game, then go for it.
genericrich t1_jeakxai wrote
Reply to comment by 3z3ki3l in GPT characters in games by YearZero
Sure, if you can control it. The problem is that the developers need to be very sure that they aren't introducing a bug or meandering, meaningless, dead-end side quest. That would be hard to verify, IMO, with an AI-generated content layer.
I guess we'll find out sooner or later, because I am sure these will be rolling out soon.
genericrich t1_jeahu1l wrote
Reply to GPT characters in games by YearZero
I don't think it will be useful for games. Games are a storytelling medium, and injecting randomness that can't be well controlled into a story makes it a bad story.
I don't think it can be made to work well enough to be feasible.
genericrich t1_jeaa4ms wrote
Reply to Can we not pause or shutdown ai? by froggygun
They want a 6-month pause on training these large language models. It's utopian thinking, not consistent with capitalism.
Ever hear about what happens when a new batch of powerful heroin hits the streets, killing some junkies via overdose? The other junkies go looking for that shit, because something bad isn't going to happen to *them*, right? Tragedy is for the other poor bastards.
That's what's at work here. Capitalism doesn't allow them to slow down with AI development, no matter what the risk is. In fact, for VCs and C-suite tech company execs (basically the same tribe), risk is exactly what they want. Risk equals reward.
They don't believe that the risk is existential for the human race. They can't believe that. If they admit this possibility, they open the door to introducing ethics and morality into their business decisions, which in this case they cannot do, since they fear their competitors will not be similarly bound.
There's no slowing down. Nobody is pausing anything, regardless of how good an idea it might be.
This isn't even taking into account the military and intelligence services, who are almost certainly investing mega millions into LLM development. You can bet that the NSA is balls-deep in this field.
All this letter does is pour more chum into the water.
genericrich t1_jdm6ul2 wrote
Reply to comment by Shovi in Space dust from asteroid impacts could contain signs of living organisms that existed on their home planets by marketrent
Because it makes a lot of sense. If microbes can survive long periods of hibernation, then statistically there's a good chance some of them hitched a ride to Earth. We get hit by 20 tons of debris every single day. It's not an outlandish theory or anything.
genericrich t1_ja7wo8a wrote
Reply to "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
What makes you think the redundant classes won't just be exterminated?
genericrich t1_j9qlhnd wrote
Reply to comment by visarga in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
Ah, a way to skirt the law against using stolen images and abuse human copyright with impunity! And people wonder why artists are concerned with this glowing future you all are so eager for. Sounds positively utopian.
genericrich t1_j9ql2vo wrote
Reply to comment by randommultiplier in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
But it is the case, because that is what it is doing.
genericrich t1_j9orl32 wrote
Reply to comment by gantork in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
Photographs record a reality we can perceive. AI art generators produce images derived from their similarity to elements of other images that match the prompts they're given.
I agree with you that it's unfortunate the Copyright Office based its decision on creative intention instead of on derivation from other copyrighted images. For me, that's the crux of the issue. These machines just take your prompt tokens and manipulate pixels until the generated image is as close as it can get to what they were trained on, which is copyrighted images. So they are literally deriving a new image based on its similarity to copyrighted images. That's derivation, and derivative works can only be authorized by the copyright holder (under US copyright law).
genericrich t1_j9on7fz wrote
Good. Copyright is for humans, not machines.
Will be interesting when big brands with rooms full of lawyers make their cases as to why they should be able to copyright AI-generated images. This isn't over by a long shot.
genericrich t1_j9ljoto wrote
Reply to comment by just-a-dreamer- in Why the development of artificial general intelligence could be the most dangerous new arms race since nuclear weapons by jamesj
You're living in a dream world if you don't think the US would act to prevent China from exploiting an AGI against it. Which China would do, if it had one (just like the USA would, if it had one).
UBI? Please. Never gonna happen. Listen to the GOP nutjobs whine about "communism" in the USA now, over basic shit like Social Security and Medicare. They would have aneurysms if someone were legit pushing UBI.
genericrich t1_j9l85sv wrote
Reply to comment by just-a-dreamer- in Why the development of artificial general intelligence could be the most dangerous new arms race since nuclear weapons by jamesj
Let's game this out:
- A state (say, China) develops AGI in a lab.
- The US government intelligence service learns of this.
What happens?
- It is US DOD doctrine that nobody be allowed to challenge our supremacy on the battlefield, and AGI is a direct threat to that supremacy. So the US would have to act against that lab.
Another scenario:
- Say a Silicon Valley company develops AGI. Is the US government going to let one just sit around where our adversaries can get it or learn from it or copy it?
These things (if they ever exist) will be massively destabilizing and could easily spark a war just by existing. They wouldn't have to even DO anything.
genericrich t1_j8y1wz7 wrote
Reply to comment by opknorrsk in Google CEO Sundar Pichai asks employees to put two to four hours into helping to improve and 'dogfood' its Bard chatbot by tester989chromeos
Anytime the CEO asks the rank and file to do QA, it is a disaster. QA is a skilled position and randos usually generate more noise than signal. He should know better.
genericrich t1_j8sprct wrote
Reply to comment by Ground2ChairMissile in Elon Musk donates almost $2bn of Tesla shares to charity by Nergaal
It's his charity? It's him? He donated to himself? lol
genericrich t1_jefywjh wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Who will watch the watchers?
None of these things are trustworthy, given the risks involved.