
jmbirn t1_j6690cz wrote

> "Without transparent referencing, students are forbidden to use the software for the production of any written work or presentations, except for specific course purposes, with the supervision of a course leader,"

In other words, the University issued reasonable guidelines, such as that you should label ChatGPT output accurately. Hardly a "ban."

37

drossbots t1_j672tq4 wrote

Redditors actually read the article challenge (impossible)

16

GoodRedd t1_j693wg9 wrote

I would love for you to explain what "transparent referencing" looks like when using a tool like ChatGPT.

I'm fairly confident they're not referring to referencing ChatGPT itself. They're referring to referencing the material ChatGPT was trained on, which is opaque and therefore makes the tool unusable.

The stupid part is that no human is expected to provide a transparent reference list of every piece of writing they've trained themselves on. That would be like keeping a history of everything you'd ever read and every conversation you'd ever had with anyone... or with yourself.

−1

jmbirn t1_j6ippwj wrote

A good first step towards transparency is that, if you're going to quote ChatGPT, you should say that you are quoting ChatGPT's output, provide the context of what prompt or question it was responding to, and say when you asked. Just as with quoting a person, the quote can be accurate even if the person being quoted was wrong about something.

2