
purepersistence t1_j9vi0cg wrote

There's a limit to the quality of output you can get from a model that works by generating the next likely sequence of words from your query. There's no understanding of the world underneath, just text, parsing, and attention relationships, so there's no sanity check at any level that grasps the real-world meaning behind the patterns of text. That's why, in spite of improvements, it will keep giving off-the-wall answers sometimes. And attempting to shield people from outrageous or violent content tends to throw a cloak over the value the tool could have delivered; that's why, when you see it censoring itself, you get a lot of words that don't say much beyond excuses.
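
For anyone curious, here's roughly what that next-word loop looks like in code: a minimal sketch using GPT-2 through the Hugging Face transformers library (the prompt, model choice, and sampling setup are mine for illustration; real chat models layer RLHF and content filters on top of this). Notice there's no step anywhere that checks the output against reality, just a probability distribution over tokens:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # append ten tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]  # scores for the next token only
        probs = torch.softmax(logits, dim=-1)       # turn scores into a distribution
        next_id = torch.multinomial(probs, 1)       # sample a token; no fact-check happens
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

The whole "answer" is just that loop repeated: pick a plausible next token, append it, repeat. Whether the result happens to be true is incidental to the mechanism.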

3