PandaCommando69 t1_j9ci2y0 wrote

I think there's a good argument to be made that a superintelligent human adult would make a better ASI than a freshly born one, because the human ASI would already have experience managing themselves in the world and understanding how its systems and people work.

2

PandaCommando69 t1_j9790lc wrote

Yes, oppressors do get awfully upset about not being able to oppress other people, and definitely think that having their ability to cause harm curtailed is a bad thing. They are wrong. (EX: homophobes who think their rights are being violated because gay people have been allowed to marry. Their rights have not been violated, merely their ability to oppress other people curtailed.) The difference is not null.

5

PandaCommando69 t1_j977fwh wrote

Russia attacked Ukraine unprovoked. They are in the wrong.

You are correct that some people do not understand what right and wrong are. That does not mean that right and wrong do not exist. Sometimes there are gray areas, and in these we need to be judicious in balancing competing interests, but that does not mean that we cannot tell right from wrong, and by pushing that narrative, you are advocating for the very type of moral ambiguity that you are pretending to decry.

4

PandaCommando69 t1_j974idz wrote

It will also allow others of us to transform ourselves into guardian angels, the real kind. If I get super intelligence I'm going to use it to protect (and give freedom to) as much sentient life as I can, for as long as I am able. I mean it. I hope others will do the same--I think they will.

14

PandaCommando69 t1_j94bk3u wrote

Yes. You can read about some of what else they're up to on DARPA's website:

https://www.darpa.mil/work-with-us/ai-next-campaign

Here's a snippet:

> Defense Advanced Research Projects Agency AI Next Campaign

>For more than five decades, DARPA has been a leader in generating groundbreaking research and development (R&D) that facilitated the advancement and application of rule-based and statistical-learning based AI technologies. Today, DARPA continues to lead innovation in AI research as it funds a broad portfolio of R&D programs, ranging from basic research to advanced technology development. DARPA believes this future, where systems are capable of acquiring new knowledge through generative contextual and explanatory models, will be realized upon the development and application of “Third Wave” AI technologies.

>DARPA announced in September 2018 a multi-year investment of more than $2 billion in new and existing programs called the “AI Next” campaign. Key areas of the campaign include automating critical Department of Defense (DOD) business processes, such as security clearance vetting or accrediting software systems for operational deployment; improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

https://www.thefuturescentre.org/signal/darpa-planning-ai-system-to-predict-world-events/

They're working on using AI to predict the future (frankly, they probably already have it).

>The Defense Advanced Research Projects Agency (DARPA) wants to create an artificial intelligence that sifts the media for early signals of potentially impactful events, such as terrorist attacks, financial crises or cold wars.

>The system is called KAIROS: Knowledge-directed Artificial Intelligence Reasoning Over Schemas. Schemas are small stories made up of linked events that people use to make sense of the world. For example, the “buying a gift” schema involves entering a shop, browsing for an item, selecting the item, experiencing pangs of self-doubt, bringing it to the till, paying for it, then leaving the shop.

>KAIROS will begin by ingesting massive amounts of data so it can build a library of basic schemas. Once it has compiled a set of schemas about the world, the system will try to use them to extract narratives about complex real-world events.

>According to the agency, KAIROS “aims to develop a semi-automated system capable of identifying and drawing correlations between seemingly unrelated events or data, helping to inform or create broad narratives about the world around us.”

And that's just a snip out of the stuff that's publicly available. The US government security apparatus has resources that are beyond what most people have any inkling about.
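
To make the "schema" idea a bit more concrete, here's a toy sketch of my own (Python, nothing to do with DARPA's actual KAIROS code; every name in it is made up): a schema as a small ordered story of events, plus a crude matcher that scores how well a stream of observed events fits it.

    # Toy illustration only -- not DARPA's KAIROS, just the "schema as linked
    # events" idea from the article quoted above. All names here are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Schema:
        """An ordered set of event types that together tell a small story."""
        name: str
        events: list[str] = field(default_factory=list)

    def match_score(schema: Schema, observed: list[str]) -> float:
        """Fraction of the schema's events found, in order, in the observations."""
        idx = 0
        hits = 0
        for event in schema.events:
            # scan forward through the observations for the next expected event
            while idx < len(observed) and observed[idx] != event:
                idx += 1
            if idx < len(observed):
                hits += 1
                idx += 1
        return hits / len(schema.events)

    # The "buying a gift" schema from the article, minus the self-doubt.
    gift = Schema("buying_a_gift",
                  ["enter_shop", "browse", "select_item", "pay", "leave_shop"])

    observed = ["enter_shop", "browse", "select_item", "pay", "leave_shop"]
    print(match_score(gift, observed))  # 1.0 -- the event stream fits the schema

The real system presumably learns schemas from data and reasons over far messier inputs than clean event labels; this is just to show what "a library of schemas matched against narratives" could mean mechanically.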

7

PandaCommando69 t1_j90avkm wrote

Read the Culture novels by Iain M. Banks. You'll (probably) feel better. I personally think things are going to turn out alright (though the ride might be bumpy for a bit). You're living in a moment in time that our ancestors couldn't even have dreamed of in their wildest imaginations. It's really extraordinary if you stop to think about it for a minute. If things go right it means cures for all disease, the end of aging, limitless energy, new exotic materials for every conceivable purpose, true morphological freedom, full dive VR, and on and on. We are on the cusp of the ascension of humanity into something so much more. Keep your fingers crossed kiddo, and try not to worry too much in the meantime.

7

PandaCommando69 t1_j6irfzk wrote

I don't know, but this piece of advice has helped guide me over the years: "the person who knows how will always find a job, but the person who knows why will always be their boss." Also, don't take on student debt if you can avoid it; that'll just hamstring your options as time goes by. Personally I think AI will usher in a new era, just like the PC and the internet did, and like before we'll adapt and learn to leverage digital tools, creating new jobs and industries. I suspect the AI trajectory will be similar.

4

PandaCommando69 t1_izd0nnr wrote

If it wasn't him it would be someone else. Also, cheating in school isn't some good thing, because it devalues other people's honest efforts. Grades are going to have to be based on in-person, unaided testing. Handing out assignments for credit that can be done at home is now essentially pointless; everyone who can is going to use AI tools. Maybe there's some other way we could grade people that would get around this problem, but I can't think of it offhand. Open to suggestions.

13

PandaCommando69 t1_iyeoyql wrote

I respectfully disagree. There is freedom in being able to call an asshole an asshole (and in saying any other damn thing you please, provided it does not run afoul of the law in terms of inciting violence or insurrection, or is actionable defamation). In the United States, you can insult the President to his face (shout out to that fat treason weasel Donald fucking Trump), and there is not a single thing the law can do about it. Nor should there be. I think there are many nice things about Germany, but I would never trade our 1st Amendment protections for freedom of speech with yours. Not being able to say what you think is tyranny.

7