
currentscurrents t1_j7ivm0a wrote

>the only thing I have seen is cheating on homeworks and exams, faking legal documents, and serving as a dungeon master for D&D. The last one is kind of cool, but the first two are illegal.

Well, that's just cherry-picking. LLMs could do very socially good things, like acting as an oracle for all internet knowledge or automating millions of jobs (assuming the accuracy issues can be worked out, which tons of researchers are trying to do, some of whom are even on this sub).

By far the most promising use is allowing computers to understand and express complex ideas in plain English. We're already seeing this in practice: text-to-image generators use a language model to understand prompts and guide the generation process, and GitHub Copilot turns instructions written in English into working code.
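To make the "language model guiding generation" part concrete, here's a minimal sketch (assuming the Hugging Face transformers library and the CLIP text encoder that Stable Diffusion v1 conditions on; the prompt is just an example):

```python
# Minimal sketch: how a text-to-image model "understands" a plain-English prompt.
# Assumes the Hugging Face transformers library; Stable Diffusion v1 uses this
# CLIP text encoder to condition its image generator.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a watercolor painting of a dragon reading a book"
tokens = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The image generator cross-attends to these embeddings at every denoising
    # step, which is how a plain-English sentence ends up steering the picture.
    prompt_embeddings = text_encoder(**tokens).last_hidden_state

print(prompt_embeddings.shape)  # (1, sequence_length, hidden_size)
```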

I expect we'll see them applied in many more areas in the years to come, especially once desktop computers are fast enough to run them locally.

>starts playing by the same rules as everyone else in the industry.

Everyone else in the industry is also training on copyrighted data, because there is no source of uncopyrighted data big enough to train these models.

Also, your brain is updating its weights based on the copyrighted data in my comment right now, and that doesn't violate my copyright. Why should AI be any different?
