ProShortKingAction
ProShortKingAction t1_ja9qtdc wrote
Reply to comment by Cold-Change5060 in China accuses U.S. of ‘disinformation’ over warnings it’s considering sending artillery and ammo to Russia by diana321
How young are you that you didn't grow up learning about the possibility of a nuclear winter?
ProShortKingAction t1_ja95pc5 wrote
Reply to comment by bpooqd in China accuses U.S. of ‘disinformation’ over warnings it’s considering sending artillery and ammo to Russia by diana321
Honestly I think a big part of the difference between Cold War propaganda and modern propaganda on the subject is simply time. A large chunk of the people who were so invested in the idea of conflict with Russia also remembered the day the newspaper showed them images of a single bomb wiping a major city off the face of the Earth. People nowadays don't have that kind of reference point. I have a feeling that if a city like Kyiv (God forbid) were obliterated by a nuke, people would not be so dismissive of the idea that nukes are something to be scared of
Edit: I'm just using Kyiv as an example because the idea of it being nuked has regularly been in the news and dismissed as an impossibility by regular people
ProShortKingAction t1_ja8ut3u wrote
Reply to comment by Deep-Mention-3875 in China accuses U.S. of ‘disinformation’ over warnings it’s considering sending artillery and ammo to Russia by diana321
I guess you are both correct, because I was very vague in my initial comment. You are right that, for example, 100 Hiroshima-sized nukes dropped in a desert would not end the world. However, those same 100 nukes, which are much smaller than what countries are capable of building today, if dropped on heavily populated areas in Pakistan and India, would cause a level of global famine that would bring every country on the planet to its knees, worse than any other famine in world history.
But even that might not be "the end of the world" to you. "The end of the world" was a pretty vague way to describe it on my part, and for that I'm sorry. I meant more the collapse of everything we currently rely on to survive: countries falling apart, countless dead from starvation in even the wealthiest nations, global trade collapsing, resource wars both regional and international, freak weather phenomena, etc.
And that's not even considering how much more powerful modern nukes are than the one dropped on Hiroshima.
ProShortKingAction t1_ja8d6kd wrote
Reply to comment by bpooqd in China accuses U.S. of ‘disinformation’ over warnings it’s considering sending artillery and ammo to Russia by diana321
Considering that moderate estimates of how many nukes would be necessary to end the world, if they hit the right spots, sit around 100, I'm not sure why we should be so gung-ho about feeling the force of 350.
ProShortKingAction t1_j8wmcyy wrote
Reply to comment by SillyRookie in US launches artificial intelligence military use initiative by stepsinstereo
Obviously the U.S. military has never openly lied and then done something monstrous
ProShortKingAction t1_j57kre2 wrote
Reply to The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
This post gives serious discord debate server mod energy
ProShortKingAction t1_j33rvrl wrote
Reply to comment by Noname_FTW in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Seems like it requires a lot of good faith to assume it will only be applied to whole species and not to whatever arbitrary groups are convenient in the moment
ProShortKingAction t1_j33gery wrote
Reply to comment by Noname_FTW in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Honestly it sounds like it'd lead to some Nazi shit. Intelligence is incredibly subjective by its very nature; if you ask a thousand people what intelligence is, you'll get a thousand different answers.

On top of that, we already have a lot of evidence that someone's learning capability in childhood is directly related to how safe they felt growing up, presumably because the safer someone feels, the more willing their brain is to let them experiment and reach for new things. That typically means people who grow up in areas with higher violent crime rates, or people persecuted by their government, tend to score lower on tests and in general have a harder time at school.

If we take some numbers and claim they represent how intelligent a person is, and a group persecuted by a government routinely scores lower than the rest of that society, it becomes pretty easy for the government to claim that group is less of a person than everyone else. Not to mention the loss of personhood for people with mental disabilities. Whatever metric we tie to AI quality is going to be directly associated with how "human" it is to the general public, which is all fine while the AIs are pretty meh, but once there are groups of humans scoring worse than the AI it's going to bring up a whole host of issues
ProShortKingAction t1_j32sn33 wrote
Reply to comment by joseph20606 in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Yeah, I also feel like we're reaching a point where moving the goalposts of what consciousness is any farther gets into some pretty dark territory. Sure, an AI expert can probably explain that ChatGPT isn't conscious, but at this point it's going to be in a way that no one outside the field will understand. I keep seeing takes like "oh, it isn't conscious because it doesn't keep its train of thought between conversations"... OK, so your grandfather with dementia isn't conscious? Is that really a point these people want to make?
I feel like it's getting close enough that we should stop putting so much weight on the idea of what is and isn't conscious before we move this goalpost past what a lot of human beings can compete with.
ProShortKingAction t1_j32rvyt wrote
Reply to comment by turnip_burrito in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It's not in a Discord debate with philosophy nerds; it was told to present itself as a high school teacher, so it makes a lot of sense for it to put forward multiple possible ideas and help the student figure things out for themselves instead of giving one definitive answer to an open-ended question
ProShortKingAction t1_j15yur0 wrote
Reply to comment by AITADestroyer in How hard would it be for an AI to do the work of a CEO? by SeaBearsFoam
Sounds like a bunch of stuff an AI would do better than most of the idiots who lucked into being CEOs
ProShortKingAction t1_izwlcvl wrote
Reply to China wants legal sector to be AI-powered by 2025 / Supreme People's Court issues directive for an artificial intelligence network to be in place by 2025 to support and enhance legal services by Sorin61
Would probably help a lot with their bribe and sentencing disparity problem honestly
ProShortKingAction t1_iye9p5c wrote
Reply to comment by joeedger in From NeurIPS 2022 poster session: "[Google] Minerva author on AI solving math: IMO gold by 2026 seems reasonable, superhuman math in 2026 not crazy" by maxtility
Tell that to the folks working on transistor development. Sometimes modern industries have to create something where parts of it will never be observable but still need to be predictable in order to ship an end product.
ProShortKingAction t1_iy94ob1 wrote
Reply to comment by _baundiesel_ in China is now using advanced 3D-printing tech in its warplanes by Gari_305
That makes a lot more sense, apologies. I've been seeing a weirdly large amount of talk along the lines of "oh, [nation with nuclear weapons] should watch out, they can't handle [other nation with nuclear weapons]," like this generation has fully forgotten that these weapons even exist. I thought you were referring to the increased tensions between the U.S. and China, or China and India
ProShortKingAction t1_iy8zpgd wrote
Reply to comment by _baundiesel_ in China is now using advanced 3D-printing tech in its warplanes by Gari_305
No country on this scale can carry out a large military operation against any comparable force. It would lead to nuclear war and the annihilation of both parties, if not everyone
ProShortKingAction t1_iwvoojk wrote
Reply to comment by rnimmer in Full Self-Driving Twitter by [deleted]
Even with the farthest stretch of the imagination as to what the ML researchers and developers at Tesla are capable of, you would still need a significant amount of data on the tasks being automated. If entire teams are laid off, how will their tasks even be explained to the model, let alone demonstrated enough times for the model to learn them?
ProShortKingAction t1_iwvkss0 wrote
Reply to Full Self-Driving Twitter by [deleted]
Automating software tasks is an entirely different skillset from building a machine learning model to automate something like driving. Automating software tasks involves slowly documenting and analyzing each best practice, finding what is repetitive, and creating scripts and pipelines for those repetitive tasks. It's not AI-based; it's more a bunch of if/else statements that you slowly build up over years. Even if the skillsets were the same, and Elon brought on one engineer for every tech employee at Twitter, it would still take months if not years of documentation and gradual replacement to make up for the kind of skill drain that has happened at Twitter over the past two weeks. Nine women can't make a baby in a month, and all that
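To make the "bunch of if/else statements" point concrete, here's a minimal sketch of what rule-based task automation looks like. Everything here is hypothetical (the function name, the failure patterns, the remediation strings are all made up for illustration): the key property is that it's deterministic rules distilled from documented practice, not a learned model, and it can only cover cases someone has already written down.

```python
def triage_build_failure(log_line: str) -> str:
    """Map a known failure pattern to a documented remediation step.

    This is classic rule-based automation: each branch exists because
    a human observed the failure, wrote down the fix, and encoded it.
    """
    if "OutOfMemoryError" in log_line:
        return "restart worker with larger heap"
    elif "connection refused" in log_line:
        return "check service health and retry"
    elif "test failed" in log_line:
        return "notify the owning team"
    else:
        # Unknown cases still need a human: the rules only cover
        # what has already been observed and documented.
        return "escalate to on-call engineer"

print(triage_build_failure("java.lang.OutOfMemoryError: heap space"))
```

Lay off the people who knew which branches to write, and there's no training signal left to replace them with; the undocumented cases just fall through to the catch-all.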
ProShortKingAction t1_iuskyif wrote
Reply to comment by visarga in Robots That Write Their Own Code by kegzilla
This seems to be saying "safe to run" as in less likely to crash, not as in preventing cybersecurity issues.
ProShortKingAction t1_iurmfoc wrote
Reply to comment by Sashinii in Robots That Write Their Own Code by kegzilla
Sorry, I took that as them saying they had built-in safety checks meant to prevent the robot from performing an unsafe physical action, not from writing vulnerable code. I might have misinterpreted that.
Another thing I'd bring up in favor of this approach: vulnerabilities slip through in regular code all the time, so this approach doesn't have to be perfect, just safer than the current one. It's like driverless cars: they don't have to be perfect, just safer than a car driven by a human, which seems like a low bar. I just don't see anything in this post implying that a safe way to do this approach isn't rather far off
Edit: In the Twitter thread by one of the researchers, posted elsewhere in this thread, they very vaguely mention "... and many potential safety risks need to be addressed." It's hard to tell whether this references the robot physically interacting with the world, cybersecurity concerns, or both.
ProShortKingAction t1_iurhi4z wrote
Reply to Robots That Write Their Own Code by kegzilla
How do you prevent the robot from writing unsafe code? If it is continually adding new code without review by devs or a security team, it seems like there's always the possibility that it's one instruction away from generating code containing a dangerous vulnerability
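One partial mitigation (not anything described in the linked work, just an illustrative sketch) is a static gate that refuses to execute generated code containing obviously dangerous constructs. The function name and the banned list below are hypothetical, and a real system would need much more than pattern matching (sandboxing, capability limits, human review), but it shows the shape of the check:

```python
import ast

# Names that generated code is not allowed to reference (illustrative only)
BANNED_NAMES = {"eval", "exec", "subprocess"}

def looks_unsafe(source: str) -> bool:
    """Return True if generated source references a banned name or an
    os.system-style attribute call. Unparsable code is rejected outright."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return True  # can't reason about it, so don't run it
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in BANNED_NAMES:
            return True
        if isinstance(node, ast.Attribute) and node.attr == "system":
            return True
    return False

print(looks_unsafe("eval(user_input)"))  # True
print(looks_unsafe("x = 1 + 2"))         # False
```

The obvious weakness is exactly the one raised above: a blocklist only catches vulnerabilities someone anticipated, so novel unsafe code sails straight through.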
ProShortKingAction t1_irbibv8 wrote
I keep feeling like one of these discoveries is going to turn out to be the penicillin of our generation, leaving such an immense impact on human life and quality of life that it's borderline impossible to imagine life without it.
You can imagine life without cars, without phones, without computers. But writers, and people just looking back at the past, regularly forget just how insanely different life was before penicillin. I personally would be dead four times over if it hadn't been invented, and I bet most of the people reading this have similar stories. But it's such an immense impact that it's almost impossible for us to wrap our heads around, and so instead we often take it for granted, as just a part of human life
ProShortKingAction t1_jdxqskd wrote
Reply to comment by [deleted] in China Energy proposes $1bn floating solar farm In Zimbabwe by Wagamaga
Belt and Road Initiative loans carry relatively low interest rates compared to other sources you would go to for a project like this, such as the IMF, which would saddle you with a higher rate and a bunch of stipulations about how you can run your economy. The debt trap isn't anything on the surface; it's that China gives these loans to historically unstable countries that no other source would be willing to lend to, because there is no guarantee they will stay economically stable. So you take the loan, because of course you will be the one to turn your country around and keep the boat from tipping over, and then a nearby harvest fails, bread prices go up, and everything goes to shit. Now here you are, stuck with this loan and no capacity to repay it