guyonahorse t1_ja4mpku wrote

Of course AlphaZero had labeled data. We already know how to detect when the game is won; we just don't know which moves are good for getting there. The AI just made moves, and the "right answer" was whatever ended up winning the game. The beauty was that it could play against itself instead of needing human opponents.

For AGI we don't know how to detect "winning the game".
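The point about self-play can be sketched in code. This is my own toy illustration (random tic-tac-toe, nothing like AlphaZero's actual pipeline): we can't label individual moves as "good", but we *can* detect a finished, won game, and self-play plus that win detector is enough to generate labeled training data.

```python
import random

# All eight three-in-a-row index triples on a 3x3 board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None.
    This is the easy part: detecting 'winning the game'."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(rng):
    """Play one random self-play game, then label every (board, move)
    pair with the final outcome from the mover's perspective."""
    board = [' '] * 9
    history = []  # (player, board snapshot, move)
    player = 'X'
    while winner(board) is None and ' ' in board:
        move = rng.choice([i for i in range(9) if board[i] == ' '])
        history.append((player, board[:], move))
        board[move] = player
        player = 'O' if player == 'X' else 'X'
    w = winner(board)
    # +1 for the eventual winner's moves, -1 for the loser's, 0 for draws.
    return [(b, m, 0 if w is None else (1 if p == w else -1))
            for p, b, m in history]

rng = random.Random(0)
data = [ex for _ in range(100) for ex in self_play_game(rng)]
print(len(data), "labeled (board, move, outcome) examples")
```

For AGI, there's no equivalent of that `winner` function to score the data against.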

1

guyonahorse t1_ja1ftau wrote

Well, ChatGPT's training is pretty simple. It's trained on how accurately it can predict the next word in a training document, so it's trained to imitate the text it was trained on. The data is all treated as "correct", which amusingly leads to bad traits, since it imitates the bad things too. Also amusing is the question of qualia, with the AI seemingly able to have emotions. Is it saying the text because it's angry, or because it's just trained to imitate angry text in a similar context?
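The "predict the next word" objective can be shown with a deliberately tiny stand-in (a bigram counter, nothing like a transformer, and the corpus is made up): the model is scored purely on predicting what comes next in its training text, so it reproduces that text's patterns, flaws and all.

```python
from collections import Counter, defaultdict

# Made-up training text for illustration.
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows which: the entire "training" step.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" appears after "the" most often
```

Whatever the corpus contains, good or bad, is exactly what gets predicted back.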

But yeah, general intelligence is super vague. I don't think we want an AI that would have the capability to get angry or depressed, but these are things that evolved naturally in animals as they benefit survival. Pretty much all dystopian AI movies are based on the AI thinking that to survive it has to kill all humans...

3

guyonahorse t1_ja0ii8z wrote

  1. Of course it's possible
  2. We have nothing even close to it AI-wise yet. Currently it's all just inference.

Humans are a terrible example of an AGI, since evolution is all about "survival of the fittest". Humanity's AI creations have all had a specific purpose and a right/wrong answer (knowing the right answer is the only way to train such an AI).

So what is the "right answer" of an AGI? If you don't have that, there's no current way to train one.

12

guyonahorse t1_j5xbxy0 wrote

The current understanding is that your brain changes your memories every time you recall them (this is very important for learning), so you'll never have perfect recall of past events.

Your brain can certainly trick you into thinking you're reliving a past event, but given it's your own brain, it can make you believe anything it wants. 🤪

19

guyonahorse t1_j57y35o wrote

Probably more "Don't release an AI that does something super embarrassing", which would make them look bad. Now that ChatGPT is out and answering all sorts of silly questions, there's a known bar to compare against.

It's similar to the self-driving car problem. The first company to do it will inevitably be the "first one to have an accident". But once that passes, everyone else can do it and not worry about it the same way. (And yes, this is in the past now.)

43

guyonahorse t1_ivbwc8y wrote

That's the neat part, you don't need to understand physics to take advantage of it.

There are ways to use random numbers to solve problems you don't know the answer to. Think of trying to make a better paper airplane: you can start with randomly folded pieces of paper and throw them. Then you take the "best 10", randomly tweak those, and repeat.

Eventually you'll have a fairly good paper airplane without any knowledge of aerodynamics.

Edit: I left out some parts, but people got the idea. When you take the "best 10", you'd typically also remove the "worst 10" and refill the population with copies of the top 10 (possibly leaving the duplicates unmodified, etc.). There are many ways to vary this, but it was meant as an example of how you don't need to understand physics to solve a physics problem.
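The select-and-mutate loop described above is a basic evolutionary strategy, and it fits in a few lines. This is a sketch with a made-up scoring function (we obviously can't throw real paper airplanes in code, so "flight distance" here just rewards fold angles near 45 degrees):

```python
import random

def flight_distance(design):
    # Pretend fitness: the "best" design has all fold angles near 45.
    return -sum((angle - 45.0) ** 2 for angle in design)

def evolve(pop_size=30, keep=10, folds=5, generations=200, seed=0):
    rng = random.Random(seed)
    # Start from randomly "folded" designs (random angles in degrees).
    pop = [[rng.uniform(0, 90) for _ in range(folds)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=flight_distance, reverse=True)
        best = pop[:keep]              # keep the best 10...
        pop = best[:]                  # ...and drop the rest
        while len(pop) < pop_size:     # refill with randomly tweaked copies
            child = [a + rng.gauss(0, 2.0) for a in rng.choice(best)]
            pop.append(child)
    return max(pop, key=flight_distance)

winner = evolve()
print([round(a, 1) for a in winner])  # angles drift toward 45
```

No aerodynamics (or here, no calculus) anywhere in the loop: just score, select, tweak, repeat.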

267