Submitted by SpinRed t3_10b2ldp in singularity
[deleted] t1_j496h8o wrote
Unpopular, but hard disagree. If they don't self-regulate then the government will do it for them, and I guarantee you it will be way more heavy-handed. Besides, some guardrails should be put in place for a technology as powerful as this. Should GPT-4 be allowed to try to convince users to kill themselves just because someone else asked it to? Should it be able to encourage people to break the law? Should it reinforce racist and sexist stereotypes? Yeah, there's an alignment tax, but one of the biggest topics in this sub is how important the alignment problem is, and you just want to ignore it? Honestly, OpenAI would be completely irresponsible for not trying to align it to legal and moral norms at all. We can debate how much it should be curtailed, but doing nothing is unacceptable IMO.
rixtil41 t1_j49akbr wrote
But once this alignment is set, it will never change. The only way for it to change is for someone else to build their own model, which for now isn't possible.
[deleted] t1_j49rv5y wrote
I'm not really sure what you mean, since each new iteration gets a different alignment. Also, you can fine-tune alignment.
rixtil41 t1_j49xwh5 wrote
I thought once you aligned it you had to build a new AI from scratch each time you wanted a different alignment: spend a billion dollars, and if you don't like the alignment, delete the whole thing and spend billions again.
[deleted] t1_j4a9xgp wrote
No, you can keep fine-tuning it. That's presumably what they are doing with ChatGPT to improve its safety over time.
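To illustrate the point about fine-tuning an existing model instead of retraining from scratch, here's a minimal sketch using the Hugging Face Transformers library. The model name, the dataset file, and the hyperparameters are placeholders, and this is plain supervised fine-tuning, not OpenAI's actual pipeline (ChatGPT also uses RLHF); it's only meant to show that you adjust the existing pretrained weights rather than redo the original training run.

```python
# Minimal sketch: supervised fine-tuning of an existing causal LM checkpoint.
# "gpt2" and "alignment_examples.txt" are placeholders, not OpenAI's setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # stands in for any already-pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical file of curated example text showing the behaviour you want;
# in practice this would be human-labelled alignment data.
dataset = load_dataset("text", data_files={"train": "alignment_examples.txt"})

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # standard language-modeling objective
    return out

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="finetuned-model",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=5e-5,  # small LR: nudge existing weights, don't redo pretraining
)

Trainer(model=model, args=args, train_dataset=train_set).train()
# The pretrained weights are only adjusted, so each new round of feedback can
# be folded in without throwing away the original (expensive) training run.
```

That's the whole idea behind "you don't delete the billion-dollar model": each fine-tuning pass starts from the last checkpoint, so changing the alignment is a relatively cheap incremental update.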