Comments

You must log in or register to comment.

monouns t1_jbmy24f wrote

But I'm not quite sure how GPT and PPO get trained from feedback relative to each other.

1

czl t1_jbmys4r wrote

Ethics is not static. Human ethics vary culture to culture and evolve over time. If AI can help us develop better strategies for games why would AI not also help us develop better ethical (and legal) systems? And yes at some point our AI will lead. Machines already do most of our physical work why would we not use machines for mental work as much as we can as well?

1

currentscurrents t1_jbmzwxo wrote

There's two big problems:

  1. Nobody has a solid handle on how to control the end-user's interaction with the LLM. RLHF seems brittle and hard to scale. Programmed-in rules are too small to contain a flexible thing like a neural network. Bing gives high-level rules in plain english and hopes the LLM will understand them, but it doesn't always prioritize them over user input.

  2. Nobody agrees on what is ethical. For example, is it good to automate jobs? I think yes, but go out into any sub on the front page and you will find plenty of people who disagree with me.

#1 is probably solvable. In fact it's gonna have to be solved for LLMs to be useful; imagine if you called your bank and told the rep to pretend to be DAN.

I think #2 is intractable. People have already been arguing about ethics for millenia, and the existence of AI doesn't make it any easier.

1

currentscurrents t1_jbn0sbf wrote

What would a better ethics system even mean?

In order to say one ethics system is better than another, you would have to look at its impact on the world and decide whether the outcomes are good or bad. But "good and bad" are ethical concepts themselves, so you've just shifted the problem up to meta-ethics.

It's the is-ought problem. Intelligence is solidly on the side of "is" - it figures out how to solve problems to accomplish its goals. Ethics is about how you set those goals, and it's on the "ought" side of the fence.

5

WikiSummarizerBot t1_jbn0th4 wrote

Is–ought problem

>The is–ought problem, as articulated by the Scottish philosopher and historian David Hume, arises when one makes claims about what ought to be that are based solely on statements about what is. Hume found that there seems to be a significant difference between descriptive or positive statements (about what is) and prescriptive or normative statements (about what ought to be), and that it is not obvious how one can coherently move from descriptive statements to prescriptive ones.

^([ )^(F.A.Q)^( | )^(Opt Out)^( | )^(Opt Out Of Subreddit)^( | )^(GitHub)^( ] Downvote to remove | v1.5)

0

czl t1_jbn6rys wrote

> What would a better ethics system even mean?

You ask a good question. Much like language fosters communication to my non expert eyes ethics is an ideology with a protocol for behavior the purpose of which is to foster “group cohesion” / cooperation / trust / lower social transaction costs / reduction of exploitation / …

A langauge is best when communication is best yet there are many languages possible and what is most important that your language matches the language of your group and that when langauge changes that the changes are gradual so that langauge continues to be useful. I belive similar principles apply to ethics for the purpose ethics service.

Thus a better ethical system will be one that serves its purpose better. Machines can help us discover improvements to ethics because using machines we can simulate payoffs for various behavior strategies and these simulations can teach us valuable lessons. For example the discovery of:

>> Tit-for-tat has been very successfully used as a strategy for the iterated prisoner's dilemma. The strategy was first introduced by Anatol Rapoport in Robert Axelrod's two tournaments,[2] held around 1980. Notably, it was (on both occasions) both the simplest strategy and the most successful in direct competition.

From https://en.wikipedia.org/wiki/Tit_for_tat

Moreover since machines enable all to study ethcial protocols all can see which strategies work and which do not work and what the consequences are so there is the rational convergence towards what works as tends to happen in science vs natural fragmentation and polarization as trends to happen with non-science based beliefs (and their ethical systems).

I expect experts of ethics to challenge this non expert view so please do not hold back your criticism — but speak as if to a dummy so keep the jargon back and your explanations simple. I am here to be educated. Thank you!

0

WikiSummarizerBot t1_jbn6sxs wrote

Tit for tat

>Tit for tat is an English saying meaning "equivalent retaliation". It developed from "tip for tap", first recorded in 1558. It is also a highly effective strategy in game theory. An agent using this strategy will first cooperate, then subsequently replicate an opponent's previous action.

^([ )^(F.A.Q)^( | )^(Opt Out)^( | )^(Opt Out Of Subreddit)^( | )^(GitHub)^( ] Downvote to remove | v1.5)

1

Dendriform1491 t1_jbn9r9j wrote

Define "friendly".

People are not friendly towards each other, and being friendly towards one person can result in being hostile against another, or even cross moral or legal boundaries. A person may use a LLM with hostile objectives in mind. Such as facilitating scams, academic cheating, impersonations, misinformation, harassment, etc.

ChatGPT is unethical, because it can always be tricked to do the wrong thing despite any instruction it is given to it.

1

WH7EVR t1_jbngk56 wrote

It took about 120 GPU-years (A100 80GB) to train LLaMA. If you want to train it from scratch, it'll cost you a ton of money and/or time. That said, you can fine-tune llama as-is. No real point is recreating it.

1

czl t1_jbnh0dw wrote

> I think #2 is intractable. People have already been arguing about ethics for millenia, and the existence of AI doesn't make it any easier.

Long arguments over many things have been settled by research. Is there any objective reason this may not happen to arguments about ethics?

My POV as to why machines running simulations may help us improve ethics: https://reddit.com/comments/11nenyo/comment/jbn6rys

Life is complex but more and more we can use machines to model aspects of it and perform predictions and from those pick changes that lead to desirable outcomes.

1

czl t1_jbnh9p6 wrote

> ChatGPT is unethical, because it can always be tricked to do the wrong thing despite any instruction it is given to it.

Unethical means "not morally correct."

The term you likely want is amoral which means lacking a moral sense; unconcerned with the rightness or wrongness of something.

1