respeckKnuckles t1_j1x00bh wrote

Yeah we have that, at least. The problem is that the pandemic moved a lot of classes and assignments online. Whether it's their choice or not, a lot of professors are still giving homework assignments (and even tests) online, and on those you'll often see prompts asking for short, 100-word answers.

1

respeckKnuckles t1_j1vo5s7 wrote

I've never seen an empirical study demonstrating either that (1) professors can reliably differentiate between AI-generated text and a random B- or C-earning student's work, or that (2) the "tools" you mention (you're probably talking about the Hugging Face GPT-2-based detector) can do so.

You say "on some level", and I don't think anyone disagrees. An A-student's work, especially if we have prior examples from the student, can probably be distinguished from AI work. That's not the special case I'm concerned with.

3

respeckKnuckles t1_j1vg27f wrote

Please let us know when you get some reportable results on this. I'm having trouble convincing fellow professors that they should be concerned enough to modify their courses to avoid the inevitable cheating that will happen. But in a stunning display of high-level Dunning-Kruger, they are entirely confident they can always tell the difference between AI- and human-generated text. Some data might help open their eyes.

5

respeckKnuckles t1_j1vempm wrote

> you are asking humans to solve this task untrained, which is not the same as the human ability to distinguish the two.

This is exactly my point. There are two different research questions being addressed by the two different methods. One needs to be aware of which they're addressing.

> you are then also making it harder by phrasing the task in a way that makes it difficult for the human brain to solve it.

In studying human reasoning, sometimes this is exactly what you want. In fact, in some work on Type 1 vs. Type 2 reasoning, we deliberately make the task harder (e.g., by adding working-memory or attentional constraints) in order to elicit certain types of reasoning. You want to see how people perform in conditions where they're not given help. Not every study is about maximizing human performance. Again, you need to be aware of what your study design is actually meant to do.

7

respeckKnuckles t1_j1v66iq wrote

You say it allows them to "better frame the task", but is your goal to have them maximize their accuracy, or to capture how well they can distinguish AI from human text under real-world conditions? If it's the latter, then establishing this "baseline" leads to a task with questionable ecological validity.

7

respeckKnuckles t1_j0apq5l wrote

I asked for an operationalizable, non-circular definition. These are neither.

> the state of knowing that you know something and can analyze it, look at it from different angles, change your mind about it given new information, and so on.

Can it be measured? Can it be detected in a measurable, objective way? And how is this not simply circular: "truly understanding" is defined as truly knowing, and truly knowing is defined as truly understanding?

> Today's AI language models have lots of information contained within themselves, but they can only use this information to complete prompts, to add words to the end of a sequence of words you give them. They have no memory of what they've done, no ability to look at themselves, no viewpoints. There is understanding of the world contained within their model in a sense, but THEY don't understand anything, because there is no them at all, there is no operator there which can do anything but add more words to the end of the word chain.

This is the problem with the "argumentum ad qualia": qualia are simply asserted to be this non-measurable thing that "you just gotta feel, man", and the argument is then propped up by assertions about what AI is not and never can be. And how do they back up those assertions? By saying it all reduces to qualia, of course. And they conveniently hide behind the non-falsifiable shell that their belief in qualia provides. It's exhausting.

3

respeckKnuckles t1_irwn0vj wrote

NYU professor who published a few "pop-sciency" books on AI-related stuff. Like many in his generation, he got some attention for taking a contrarian stance on what current approaches to AI can do, and decided to go extremist with it. I'm not sure he's much more than a full-time angry twitterer now.

8