ShowerGrapes t1_je9fibd wrote

A vast simplification is this: the network's connections (weights) start out random before training begins. Then something is fed in (text, in GPT's case), the generated outputs are compared against the training data, and the weights along the pathways that produced the best outputs are strengthened, reinforcing those pathways for future outputs. Done millions or trillions of times, these reinforced pathways end up being impressive. How the network and its training are set up is constantly changing and evolving, which is the programming aspect of it. Eventually the AI will probably be able to figure out how best to design those pathways itself. You can watch it in real time: see how bad it is at the beginning, and watch it get better. It's an interesting cycle.
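To make that loop concrete, here's a toy sketch in Python. This is not how GPT is actually trained (that involves gradient descent over a huge transformer); the linear model, learning rate, and made-up data here are purely illustrative. It shows the same cycle though: start with random weights, generate an output, compare it to the training data, nudge the weights to shrink the error, repeat.

```python
import random

# Toy training data: pairs (x, y) where y = 2*x + 1.
# The "pathways" here are just two numbers, w and b.
train = [(x, 2 * x + 1) for x in range(10)]

w, b = random.random(), random.random()  # weights start out random
lr = 0.01                                # how hard each correction nudges the weights

for step in range(5000):                 # the "millions of times", scaled way down
    x, y = random.choice(train)
    pred = w * x + b                     # generate an output
    err = pred - y                       # compare it to the training data
    w -= lr * err * x                    # reinforce/weaken the weights to reduce the error
    b -= lr * err
    if step % 1000 == 0:
        print(f"step {step}: w={w:.3f} b={b:.3f} err={err:.3f}")
```

Run it and the early printouts are bad, then the error shrinks toward zero and w, b settle near 2 and 1, which is the "watch it get better" part on a miniature scale.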

1