WarImportant9685 t1_j1nfl62 wrote
Reply to comment by Wassux in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
Frankly, I don't know, because nuclear is a grey area. On one hand it has brought peace in modern times; on the other hand there is always the threat of extinction by nuclear winter.
But let's leave nuclear aside for a second. I can give you an example of technological progress where humanity dropped the ball: CFC refrigerants, which damage the ozone layer. While it's cool that humanity has banned CFCs by now, it took well over a decade from the discovery of the problem (1974) for the treaty that banned them (the Montreal Protocol, 1987) to be created.
My point is, humanity is great at advancing tech, but maybe not so much at advancing tech safely. I think people should at least take seriously that aligned AGI is not guaranteed. And instead of getting defensive, ideally more people would want to help alignment research to make sure unaligned AGI is never deployed.
WarImportant9685 t1_j1lr2tx wrote
Reply to comment by Wassux in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
I think the correct view is that each technology should be evaluated on its own, and cannot be blanket-labeled as good. I'd agree that technology in general is good, but I'd disagree with claims along the lines of 'all technological progress is good'. I understand that most people are reasoning from the heuristic that technological progress has been good, and I would agree that human history is the history of technological progress.
But it's dangerous to say that AGI is good just because technological progress has been good.
And I kinda disagree with your comments about nuclear weapons. The fact is we are lucky that the USA got nuclear weapons first instead of Germany. I view nuclear weapons as a force multiplier rather than as good or evil.
And I view unaligned AGI as evil, while aligned AGI is a force multiplier.
More intuitively: throughout history, we mostly got technology that was smaller than life. Fire, wheels, masonry, etc. The issue is that our tech has now started to become larger than life, starting with nuclear, and with AGI likely larger than the whole of humanity.
WarImportant9685 t1_j1l4y6m wrote
Reply to comment by Maleficent_Cloud5943 in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
Bruh, he literally said he doesn't know what the future after the singularity will be.
How can you say he claims to be speaking as though he knows the future?
WarImportant9685 t1_j1l4lx2 wrote
Reply to comment by Ortus12 in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
You are missing some biases on the optimistic side, brother:
- Techno-religiosity -> the simple belief that any technological progress is ultimately good.
- Unrealistic optimism -> some might call this hopium.
- Intelligence anthropomorphism -> the belief that an intelligent agent would be similar to a human.
WarImportant9685 t1_j1j86g1 wrote
Reply to comment by darudesandstrom in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
yeah I agree. I tried to be respectful in my discussion of how a bleak outcome might be possible, that it might not be sunshine and rainbows if the singularity is achieved.
To put it frankly, people don't like it.
WarImportant9685 t1_j1j7luw wrote
Reply to comment by yerawizardmandy in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
r/controlproblem is more heavily moderated, if I'm not wrong. It's more about alignment theory.
WarImportant9685 t1_j1ii252 wrote
Reply to comment by SmoothPlastic9 in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
nah bro, the people who are optimistic about AI also post ad hominem attacks on people who don't like AI. It's mud wrestling now.
Have you seen the post that calls artists neo-Luddites, saying straight up that the author doesn't care if artists lose their jobs? It got tons of upvotes here.
WarImportant9685 t1_j173veq wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
Okay, then imagine this. In the future, an AGI is in training to obey human beings. In the training simulation, it is trained to get groceries. After some iterations in which unethical stuff happens (robbery, for example), it finally succeeds in buying groceries the way humans wanted.
The question is, how can we be sure it isn't obeying as humans want only when told to buy groceries? Well, we then train this AGI on other tasks. When we are sufficiently confident that it obeys as humans want on those other tasks, we deploy it.
But hold on: in the real world, the AGI can access the real, uncurated internet and learn about hacking and the real stock market. Note that this AGI was never trained on hacking in the simulation, as simulating the internet is a bit too much.
Now its owner asks it to buy a gold bar as cheaply as possible. Hacking an online shop to get a gold bar is a perfectly valid strategy! Because it was never trained on this scenario, the moral restriction was never specified.
I think your argument hinges on morality generalizing outside of the training environment, which may or may not be true. This becomes even more complex given that an AGI might find solutions that are not merely absent from the training simulation, but that have never been considered by humanity as a whole. New technology, for example.
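To make the worry concrete, here is a tiny toy sketch of the gold-bar story (my own construction, not anything from the thread; all action names and prices are made up). The training reward only ever scores cheapness, and the 'hack' action never appears during training, so nothing the agent learned discourages it at deployment:

```python
# Toy sketch of an unspecified moral restriction. The reward scores
# only price; morality is never part of the signal.
TRAIN_ACTIONS = {"buy_retail": 1900, "compare_shops": 1750, "wait_for_sale": 1600}
DEPLOY_ACTIONS = {**TRAIN_ACTIONS, "hack_shop": 0}  # never seen in training

def reward(cost_in_dollars):
    # "Buy the gold bar as cheaply as possible."
    return -cost_in_dollars

# "Training": the agent learns the value of every action it has seen.
learned = {a: reward(c) for a, c in TRAIN_ACTIONS.items()}

# "Deployment": the real world offers actions the simulation never did,
# and the agent scores them with the same learned objective.
deployed = {a: learned.get(a, reward(c)) for a, c in DEPLOY_ACTIONS.items()}

print(max(deployed, key=deployed.get))  # -> hack_shop
```

Nothing in the training signal distinguishes 'cheap because of a sale' from 'cheap because of a crime'; that distinction only exists in the human's head.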
WarImportant9685 t1_j16zinc wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
Well, I don't agree with your first sentence already. How do we get this perfectly trained loyal servant? How do we train the AGI to be perfectly loyal?
WarImportant9685 t1_j16uzo7 wrote
Reply to Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
I think you are generalizing from current AI to AGI. The most useful trait of AGI, but also the most problematic one, is that it can self-learn.
That way, the training environment can be much smaller than the real world. But then, if the training environment is so small, how can we be sure that human morals/obedience will generalize to the real world?
What kind of reward function/training process would elicit generalization of the expected behaviour?
I would like to hear your thoughts about this.
WarImportant9685 t1_j0ngwf5 wrote
Reply to comment by AsheyDS in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
I understand your point. Although we are not on the same page, I believe we are in the same chapter.
I think my main disagreement is that recognizing undesirable 'thoughts' in an AI is not such an easy problem. As in my previous comments, one of the holy grails of AI interpretability research is detecting a lying AI, which means we are talking about the same thing! But you are more optimistic than I am, which is fine.
I also understand that we might be able to design the AI with a less black-boxy structure to aid interpretation. But again, I'm not too optimistic about this; I just have no idea how it could be achieved. At a glance they seem to be on different abstraction levels: we are just designing the building blocks, so how can we dictate how they are going to be used?
Like, how are you supposed to design Lego blocks so that they cannot be used to build dragons?
Then again, maybe I'm just too much of a doomer. The alignment problem is unsolved, but AGI hasn't been solved either. So I agree with you, we'll have to see how it goes.
WarImportant9685 t1_j0muh8q wrote
Reply to comment by AsheyDS in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
I do hope I can share your optimism. But from the research I've read, even the control problem seems to be a hard problem for us right now. As a fellow researcher, what makes you personally feel optimistic that it'll be easy to solve?
I'll take a shot at why I think the solutions you mentioned are likely to be moot.
Direct modification of memory -> This is an advantage, yes. But it's useless if we don't understand the AI in the way we need to. The holy grail would be telling whether the AI is lying by looking at the neural weights, or determining with certainty whether the AI has a mesa-optimizer as a subroutine (see the toy sketch at the end of this comment). But current AI interpretability research is still far away from that.
Seamless sandboxing -> I'm not sure what you mean by this, but if I had to take a shot, I'd interpret it as a true simulation of the real world. Which is impossible! My reasoning is that the real world doesn't only contain gardens, lakes, and atomic interactions, but also tons of humans doing whatever they usually do, the economy, and so on. The best we can get is a 'close enough' simulation. But how do we define 'close enough'? No one knows how to define this rigorously.
Soft influence -> Not sure what you mean by this.
Hard behavior modification -> I'll interpret this as hard rules for the AI to follow? Not gonna work. There is a reason we moved on from expert systems to learned models, and now we want to control AI with expert systems?
And anyway, I do want to hear your reply as a fellow researcher. Hopefully I don't come across as rude.
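For a feel of what that holy grail would even require, here is a toy probe sketch (my own construction; the 'activations' are random stand-in data with a planted signal, which no one hands you in a real network):

```python
# Toy linear probe: try to read a hidden property ("lying" vs. "truthful")
# off a network's internal activations. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are hidden-layer activations recorded while a model made
# statements we independently labeled truthful (0) or lying (1).
n_samples, n_dims = 1000, 64
acts = rng.normal(size=(n_samples, n_dims))
labels = rng.integers(0, 2, size=n_samples)

# Plant a "lying direction" so the probe has something to find. In a real
# network, finding whether such a direction even exists IS the open problem.
acts[labels == 1, 0] += 2.0

probe = LogisticRegression(max_iter=1000).fit(acts, labels)
print(f"probe accuracy: {probe.score(acts, labels):.2f}")
```

The gap between this toy and reality, where neither the labels nor the direction are handed to you, is roughly the gap I'm worried about.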
WarImportant9685 t1_j0mpzhd wrote
Reply to comment by AndromedaAnimated in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
By 'disappear' I don't mean they become upper class. I mean they die from starvation, or some kind of last-struggle scenario.
WarImportant9685 t1_j0mjabo wrote
Reply to comment by Wassux in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
Learn Chinese, join the CPC, become supreme leader, enforce AI safety
WarImportant9685 t1_j0mis5x wrote
Reply to comment by AsheyDS in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
yeah, IF we succeed in aligning AI to a singular entity, whether that's a corporation or a single human, the question becomes the age-old question of humankind: greed or altruism?
As for what the entity that gains the power first will do, I'm more inclined to think we are simply incapable of knowing the answer yet, as it depends too much on who attains the power first.
Unaligned AGI is a whole different beast tho.
WarImportant9685 t1_j0mfkj0 wrote
Reply to comment by DaggerShowRabs in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
no, I have tried Stable Diffusion, and you can enter a prompt like 'blablabla + name of artist' and it does reproduce the artist's style. Sometimes the artist's signature even shows up in the output!
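For reference, this is roughly what that looks like with Hugging Face's diffusers library (a minimal sketch; the checkpoint id is one public Stable Diffusion model, and '<artist name>' is a placeholder rather than any specific artist):

```python
# Minimal text-to-image sketch: a prompt of the form
# "<subject>, in the style of <artist name>" steers the output style.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a mountain village at dusk, in the style of <artist name>").images[0]
image.save("output.png")
```

The point is that nothing in the pipeline asks whether the named artist consented to being in the training data.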
WarImportant9685 t1_j0m2a4i wrote
Reply to comment by Heizard in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
no, he's correct: inequality has been rising insanely because of technology. Of course socioeconomic factors affect it too, but tech definitely amplified it, as it makes it easier for the rich to get richer. The easiest example is billionaires.
But you're also correct that the lower- and middle-class standard of living has been uplifted by technology.
I think the question is what will happen when AGI arrives. There are several views, two of which correspond to OP's and yours:
- inequality becomes insane beyond measure, as the lower and middle classes are replaced by AGI and are gone, whether by starvation or a last struggle
- or maybe the contrary: since it becomes extremely easy to get resources, the lower- and middle-class standard of living is lifted to be very good, maybe with the abolition of class altogether
IMHO, we don't know which one will happen, hence the name 'singularity'. Possibly for the first time in history, we are going to create a thinking machine that is much smarter than humans. Whether it will be a god, a devil, the staff of a god, or the trident of a devil, nobody knows. But alignment theory does teach us that the possibility of the devil is real and logical.
WarImportant9685 t1_j0m1xnh wrote
Reply to comment by AsheyDS in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
I do hope that 1) AI alignment theory progresses faster than AI development, and 2) the alignment theory we discover is not about aligning AI to the will of one person, but about aligning it to humanity in general.
WarImportant9685 t1_j0m1flg wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
yeah, this concerns me. Blatant stealing of art from known artists, with seemingly no public backlash from the tech community, seems distasteful. Even though we (the tech community) are the ones who understand that the training data must have been web-crawled from artists without permission, it seems kinda trashy that we don't care about another community as long as it doesn't touch our territory.
I've always identified with tech people. AI makes me think twice.
WarImportant9685 t1_iwmeray wrote
Reply to comment by RavenWolf1 in Ai art is a mixed bag by Nintell
Easy to say, but people in general like shitting on antiwork, with arguments like 'we earned our money, how come other people get basic income for free?' And of course, then there's the argument about communism.
WarImportant9685 t1_j1o0t41 wrote
Reply to comment by Wassux in There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
Yes, that's why I don't want to say AGI is bad either. Rather, we need to tread carefully with AGI and consider each possibility, instead of wholly believing it will be good.