Submitted by Dicitur t3_zwht9g in MachineLearning

Hi everyone,

I am no programmer, and I have a very basic knowledge of machine learning, but I am fascinated by the possibilities offered by all the new models we have seen so far.

Some people around me say they are not that impressed by what AIs can do, so I built a small test (with a little help from ChatGPT to code the whole thing): can you always distinguish, 100% of the time, between AI-generated art or text and old works of art or literature?

Here is the site: http://aiorart.com/

I find that AI-generated text is still generally easy to spot, but of course it is very challenging to go against great literary works. AI images can sometimes be truly deceptive.

I wonder what you will all think of it... and how all that will evolve in the coming months!

PS: The site is very crude (again, I am no programmer!). It works though.

287

Comments


blablanonymous t1_j1usf6q wrote

Nothing more annoying than a counter that never ends, but aside from that, AI is getting really good.

13

dojoteef t1_j1uwubj wrote

Nice job!

Though, to produce a better comparison it's best to show two examples side by side (one by a human, the other by the model, in a randomized order of course). The reason is that most people are not trained to analyze short snippets of text out of context. People trained to do that, e.g. English teachers, can better distinguish generated text without a baseline to compare against, but most people (crowd-sourced evaluation) will likely produce a very biased analysis that does not reflect the real ability of humans to distinguish between the two.

For a more thorough investigation of this phenomenon you can check out our research:

The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation
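A small illustrative sketch of the randomized side-by-side setup described above (the names and texts here are placeholders, not from the paper): each trial pairs one human-written and one model-generated text and shuffles which appears first, so raters always have a baseline to compare against.

```python
# Sketch of a randomized pairwise presentation; function and variable names are
# illustrative, not from any existing tool.
import random

def make_trials(human_texts, model_texts):
    """Pair each human text with a model text and shuffle the display order."""
    trials = []
    for human, model in zip(human_texts, model_texts):
        pair = [("human", human), ("model", model)]
        random.shuffle(pair)  # randomize which text appears as option A
        trials.append(pair)
    return trials

for pair in make_trials(["an excerpt from a novel"], ["an excerpt produced by a language model"]):
    (label_a, text_a), (label_b, text_b) = pair
    print("A:", text_a)
    print("B:", text_b)
    # The rater guesses which of A/B is model-generated; the labels stay hidden.
```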

26

anthonyhughes t1_j1uwyio wrote

Nice app. I worked up to 8/10. Seems to me that the main giveaway for AI-generated art is the eyes and/or shadows.

11

KonArtist01 t1_j1ux9mu wrote

To me the art is not distinguishable anymore. It passes the Turing test. I think it scares a lot of people, but that's the new reality.

0

respeckKnuckles t1_j1v440q wrote

I'm not sure how the side by side comparison answers the same research question. If they are told one is AI and the other isn't, the reasoning they use will be different. It's not so much "is this AI?" as it is "which is more AI-like?"

20

dojoteef t1_j1v4j4r wrote

You don't need to tell them one is AI or model generated. Could be two model generated texts or two human written texts. Merely having another text for comparison allows people to better frame the task since otherwise they essentially need to imagine a baseline for comparison, which people rarely do.

−3

respeckKnuckles t1_j1v66iq wrote

You say it allows them to "better frame the task", but is your goal to have them maximize their accuracy, or to capture how well they can distinguish AI from human text in real-world conditions? If the latter, then this establishing of a "baseline" leads to a task with questionable ecological validity.

7

HermanCainsGhost t1_j1v71ty wrote

I got into an argument yesterday with some people about whether they could tell if something was AI or not, so I am definitely going to throw this around the next time the topic comes up....

15

Ulfgardleo t1_j1vcqri wrote

  1. You are asking humans to solve this task untrained, which is not the same as the human ability to distinguish the two.

  2. You are then also making it harder by phrasing the task in a way that makes it difficult for the human brain to solve it.

2

Ulfgardleo t1_j1vd1op wrote

There are definitely signs. Paper texture is often wrong. Hands are often wrong. With all my guesses of "old master" I was never really sure, but with the AI guesses I was often pretty confident.

3

respeckKnuckles t1_j1vempm wrote

> you are asking humans to solve this task untrained, which is not the same as the human ability to distinguish the two.

This is exactly my point. There are two different research questions being addressed by the two different methods. One needs to be aware of which they're addressing.

> you are then also making it harder by phrasing the task in a way that makes it difficult for the human brain to solve it.

In studying human reasoning, sometimes this is exactly what you want. In fact, for some work in studying Type 1 vs. Type 2 reasoning, we actually make the task harder (e.g. by adding WM or attentional constraints) in order to elicit certain types of reasoning. You want to see how they will perform in conditions where they're not given help. Not every study is about how to maximize human performance. Again, you need to be aware of what your study design is actually meant to do.

7

respeckKnuckles t1_j1vg27f wrote

Please let us know when you get some reportable results on this. I'm having trouble convincing fellow professors that they should be concerned enough to modify their courses to avoid the inevitable cheating that will happen. But in a stunning display of high-level Dunning-Kruger, they are entirely confident they can always tell the difference between AI and human-generated text. Some data might help to open their eyes.

5

respeckKnuckles t1_j1vgrit wrote

It'd be great if you could extend it to longer texts, like paragraph-lengths. A lot of these are recognizable quotes, so it throws off the reliability of the assessment a bit (especially if the people doing this might be, say, English professors).

3

starstruckmon t1_j1vib58 wrote

66/100 on paintings. Not that great considering 50/100 is a coin toss.

Also, thanks for making the improvements I was talking about when you posted last time (probably not on this sub).
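For reference, a quick binomial test (a sketch using SciPy; not part of the site) can check whether a score like 66/100 is distinguishable from coin-toss guessing. It is, even if it is far from reliable detection.

```python
# How likely is 66 or more correct out of 100 if every answer were a coin toss?
from scipy.stats import binomtest

result = binomtest(66, n=100, p=0.5, alternative="greater")
print(result.pvalue)  # roughly 0.001, so 66/100 is unlikely to be pure guessing
```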

53

Terra-Em t1_j1vjayv wrote

Many of the Oscar Wilde quotes didn't show up (Google Chrome user). Neat app.

3

Ulfgardleo t1_j1vjc6q wrote

I don't think this is one of those cases. The question we want to answer is whether texts are good enough that humans will not pick up on it. Making the task as hard as possible for humans is not indicative of real world performance once people get presented these texts more regularly.

1

piiiou t1_j1vl1gw wrote

You should blur hands

36

FilthyCommieAccount t1_j1vmmm8 wrote

Nah, I got 9/11 for art. It passes the at-a-glance test though, and I was never really sure. How I did it: there's a distinct way we pose people in modern paintings that looks different from classical poses. Also, in some AI paintings the faces looked too detailed and too modern. The other things that give it away are hands and eyes. Give it a few years, though, and I'm confident that even under inspection only pros will be able to tell.

1

MrFlamingQueen t1_j1vmykp wrote

They're not worried because on some level, it is recognizable, especially if you have a writing sample from the student.

On the other hand, there are already tools that can detect it, by comparing the sequences to the model's internal weights.
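One common approach behind such tools (a rough sketch, not necessarily how any specific detector works internally) is to score a passage's perplexity under a public language model; model-generated text tends to be more predictable, i.e. lower perplexity, than human writing.

```python
# Sketch of perplexity-based detection with GPT-2 via Hugging Face transformers.
# Illustrative only; real detectors are more sophisticated and less brittle.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# A low perplexity is weak evidence of machine generation, not proof.
print(perplexity("The essay question asks us to weigh both sides of the argument."))
```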

4

respeckKnuckles t1_j1vo5s7 wrote

I've never seen an empirical study demonstrating either: (1) that professors can reliably differentiate between AI-generated text and a random B-earning or C-earning student's work, or (2) that those "tools" you mention (probably you're talking about the huggingface GPT-2-based tool) can do that either.

You say "on some level", and I don't think anyone disagrees. An A-student's work, especially if we have prior examples from the student, can probably be distinguished from AI work. That's not the special case I'm concerned with.

3

cgarrahan t1_j1vrw0a wrote

train a machine learning model to predict which works are AI generated
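Taken literally, this is a standard supervised text-classification problem. A minimal sketch (the tiny inline dataset is a placeholder; a real corpus of labeled human and AI passages would be needed):

```python
# Train a simple classifier to flag AI-generated text; placeholder data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "placeholder human-written passage one",
    "placeholder human-written passage two",
    "placeholder AI-generated passage one",
    "placeholder AI-generated passage two",
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict_proba(["a new passage to score"])[:, 1])  # estimated P(AI-generated)
```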

3

MrFlamingQueen t1_j1vskd9 wrote

Thank you for your response. You are correct that it may be easier to distinguish between the work of an A-student and AI-generated text. However, it is possible that professors can still differentiate between AI-generated text and the work of a B-earning or C-earning student, even if it is more difficult. This is because professors are trained to evaluate the quality and originality of student work, and may be able to identify certain characteristics or patterns that suggest the work was generated by an AI.

As for the tools that I mentioned, it is possible that they may also be able to differentiate between AI-generated text and human-written text to some degree. These tools use advanced machine learning algorithms to analyze text and identify patterns or characteristics that are indicative of AI-generated text. While they may not be able to reliably distinguish between AI-generated text and human-written text in all cases, they can still be useful for identifying potentially suspect text and alerting professors to the possibility that it may have been generated by an AI. Overall, it is important for professors to remain vigilant and use their expertise and judgement to evaluate the quality and originality of student work.

1

KonArtist01 t1_j1vsr9l wrote

But it got to the point where there is no sure tell. If you encounter an AI painting in the wild, no one will give you the correct answer. And the important thing is that the signs you mentioned do not take anything away from the beauty that lies within.

3

Thepookster t1_j1vt430 wrote

I was at 10/11 correct before I accidentally clicked back a page.

The one I did get wrong, I guessed that it was an AI, but it was real. The ones that were actually AI were still somewhat obvious.

Impressionist paintings are probably among the most difficult to judge, because details that look "off" may just be part of the original impressionist style.

The most common things that tend to be "off":

- hands/ears

- Alphanumeric text

- common inanimate objects

It will definitely get to a point where it's nigh impossible to tell, but not quite yet.

3

diditforthevideocard t1_j1vvw5m wrote

This is cool but would be more interesting IMO if the AI painting examples weren't instructed to emulate specific painting masters

1

danja t1_j1vwpoq wrote

Crit first - make it stop at 10!

Good work.

I was very surprised; I only tried the paintings. I'm a fan of art history, relatively familiar with styles, and thought I could identify some because I recognise them. Wrong! Closer to 50/50.

4

FilthyCommieAccount t1_j1vx1s4 wrote

Make sure to do 100. I did 12 early on, got 9/12 for paintings, and thought it was trivial. I redid it and went to 100 and got 71/100. Hard but still discernible. Oddly enough, I actually preferred Midjourney over humans a lot of the time, which made it easier to determine which ones were AI-generated. Midjourney has a very distinctive style. The hardest to distinguish was DALL-E 2, imho.

I wonder, though, how I would fare if I weren't familiar with AI art. I used a lot of meta knowledge (Midjourney oversaturates, image generators struggle with hands and eyes, image generators struggle to tell a narrative, etc.). I bet a rando non-artist who hasn't followed AI art would score in the 50-60s range right now. Gonna send this to family and see how they score.

29

FilthyCommieAccount t1_j1vxrwy wrote

Yeah it's fascinating. I redid it and went to a hundred and got 71/100 so it's more difficult than my short test indicated. I'm sure if I hadn't been following AI art like I have the past year I would have done much worse.

3

Nrdman t1_j1vyglr wrote

My mom got 25/50 on paintings

5

danja t1_j1vytz5 wrote

Really, really good work doing this.

Again make it stop at 10.

I did slightly better on literature, even though I am less familiar with the writers than the painters. Never read any Nathaniel Hawthorne.

I didn't take a note, but maybe Hemingway - one was grammatically awful, yet worked really well as a 'poetic' statement. Obviously not AI.

1

veshneresis t1_j1w0wec wrote

17/20. Biggest tells for me are for sure still conv artifacts and not the subject/painting itself. I think if there were a small Gaussian blur on everything my accuracy would only be slightly better than chance. It’s so cool to see how far the field has come and this is a great simple quiz! Will definitely send to some family and friends
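If anyone wants to test that blur hypothesis, a minimal sketch with Pillow (the file name is just a placeholder) would be:

```python
# Apply a small Gaussian blur to hide fine-grained generation artifacts before judging.
from PIL import Image, ImageFilter

img = Image.open("painting.png")  # placeholder path
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))  # small blur radius in pixels
blurred.save("painting_blurred.png")
```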

9

Ulfgardleo t1_j1w3qhc wrote

I am not sure about you, but I can identify a significant number of AI paintings with high confidence. There are still significant errors in paper/material texture, like "the model has not understood that canvas threads do not swirl", or "this hand looks off", or "this eye looks wrong".

(All three examples visible in the painting test above).

1

SuperImprobable t1_j1w432q wrote

Spoilers ahead. Great job on putting this together! I got 40/60 on paintings.

Some of the signs for me: if it just pops a little too much (very bright brights), it's probably Midjourney. I also saw some giveaways that would be very strange choices for a human artist. One AI painting had two signatures, another had an extra piece of arm that didn't belong, extra fingers, one had the eyes closed and not in a particularly artistic way, and I also saw misshapen eyes. Another one had an ink pen in her hand while an ink pen was still in the holder. Also lack of detail in particular areas where you would expect more attention to detail.

On the other hand, if I could zoom in and still see shapes of distant figures that made sense, or small supporting objects, that was pretty clearly an old master. For example, there was one that had Venetian boats and guys holding the poles, but also a random pole stuck in the mud. After thinking about it, this seems a very likely way to store them and not some random AI choice. In another, it looked very much like a da Vinci, but the face was incredibly detailed and lifelike, which I never recalled seeing in his drawings. I also had some success with noticing that if the scene just felt cropped, where some supporting detail felt like it needed to be extended, that tended to be an old master.

That said, if it was a straight portrait or a particularly stylized painting, it was hard to tell. There was a man made of vegetables that had incredible detail and another person made of detailed flowers. One was AI, the other was not. Very impressive.

12

wasserdemon t1_j1w5u28 wrote

Hey so how exactly are the images generated? When it's an AI image, is the response the exact prompt fed into the image generator? If you give an AI the name of the painter, the piece, the year, and the gallery, shouldn't it just pull up an exact duplicate from the original, indistinguishable by anyone?

1

Dicitur OP t1_j1w9a51 wrote

Yes, it is the exact prompt (sometimes I just had to shorten it a bit). No, diffusion models don't create identical copies at all! They are just inspired by the source.

4

susmot t1_j1w9qw5 wrote

10/10 on paintings. I was just looking for artifacts. But damn, some AI paintings were impressive; I really had to go strictly with my "this looks like an artifact" rule.

4

muffinpercent t1_j1wgjqj wrote

Currently have 27/51 on paintings. Only slightly better than chance.

Luckily, in my area of art, which is classical music, AI isn't close yet. And of course I'd be much more equipped to tell the difference.

3

PlsNoPics t1_j1wh596 wrote

Got 18/25 which is much worse than I expected to do!

2

fujiitora t1_j1whk8e wrote

Oi, it alternated between AI and human artist for the first 12 prompts, unlucky...

1

DSPCanada t1_j1wiw39 wrote

I mean, the largest and most popular natural language processing (NLP) model right now is GPT, which ChatGPT is built upon. If you read a piece of literature and suspect it was written by ChatGPT or JasperAI (which also uses GPT), you can ask ChatGPT to write on a topic similar to the published work; if ChatGPT produces the same or similar text, then you can conclude the published piece was written by ChatGPT, or at least a GPT model.
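A minimal sketch of the comparison step this describes, using Python's standard difflib for surface similarity (the texts are placeholders). Worth noting that language models rarely reproduce the same text verbatim, so high similarity is weak evidence at best.

```python
# Compare a suspect passage against a freshly generated one on a similar prompt.
from difflib import SequenceMatcher

suspect_text = "placeholder: the passage you suspect was machine-generated"
regenerated_text = "placeholder: what the model wrote for a similar prompt"

similarity = SequenceMatcher(None, suspect_text, regenerated_text).ratio()
print(f"similarity: {similarity:.2f}")  # close to 1.0 means near-identical wording
```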

1

waxlez2 t1_j1womlk wrote

Having a square vs. square battle of AI vs. art is not fair. It's only square. 28/40

Edit, second round: I am scared by this, but the lack of context makes this a weird quiz. And what is there to gain from this? 33/40

1

SoloWingPixy1 t1_j1wppnp wrote

Why do all of these tests insist on using the lowest resolution version of images? Please change this in the future.

2

Liwet_SJNC t1_j1wtpdy wrote

72/100 on English literature. In general: poetry was a lot easier to answer than prose (even for authors I'm less familiar with), while shorter passages were predictably harder. My accuracy also got higher as I answered more questions, and I wonder if it might be harder if not all the AI quotes were from GPT-3.

A few questions were bugged and just didn't give me a quotation, and including James Joyce's Ulysses is definitely cheating.

1

thelastpizzaslice t1_j1wu0nb wrote

AI is very good at making old masters paintings specifically because they are realistically proportioned. Try something with less realism, less proportion or more subjects and it falls apart a lot of the time.

2

Cherubin0 t1_j1wv0sq wrote

I can't even distinguish modern art from child paintings.

1

Ellieot t1_j1wwczz wrote

Good job doing that, but the name of each image tells you what it is: prefix 'Master' versus prefix 'MidJourney AI' or 'Stable Diffusion AI'...

Perhaps renaming the images would improve this.

1

Liwet_SJNC t1_j1wwv7z wrote

I'm not sure this would be terribly convincing unless the professors in question are routinely setting 100-word essays on 'whatever'. In general, a one-sentence quotation of unknown surrounding context is always going to be much harder to identify as being from an AI than 5000 words on a known topic that have to be self-contained.

4

Liwet_SJNC t1_j1wyx8t wrote

Really? I'd argue that for people who aren't musically trained, things like AIVA are extremely hard to identify. Possibly harder than most AI pictures. Obviously it's rarely going to fool someone actually trained in classical music (or worse classical music and AI), but as the results here show, that's roughly true of paintings and literature too. Trained experts can identify them consistently, untrained people often have trouble.

3

respeckKnuckles t1_j1x00bh wrote

Yeah, we have that, at least. The problem is that the pandemic moved a lot of classes and assignments online. Whether it is their choice or not, a lot of professors are still giving homework assignments (even tests) online, and on those you will often see prompts asking for short 100-word answers.

1

ocsse t1_j1x6lgx wrote

Non-native speaker of English here. To me the English literature (4/10) is more like random guessing. Paintings are easier (15/20).

1

garo675 t1_j1xg6hm wrote

10/21 so I'm probably guessing by pure chance

1

TrainquilOasis1423 t1_j1xk1w3 wrote

I would love to see how this evolves with the next generation of releases. It would be cool to see if the line stays flat because preference is a subjective thing, or if each iteration of DALL-E/Stable Diffusion/Midjourney gets progressively better.

....for the record I am not good at this. 54/100

1

EgregiousJellybean t1_j1xli1n wrote

GPT-3 is very bad at writing like the best authors of English literature. There's a cheapness, a certain stiffness, and a penchant for trite language in its prose. And it cannot emulate the prosody of Shakespeare or any of the greatest poets and authors (yet).

1

omniron t1_j1xmw3h wrote

Got 20/25 on paintings. I think I was learning as I went on though, probably would do better on a second glance

One thing that surprised me is the Asian painting of women bending into water. I've never seen an AI capture a subtle interaction like that as part of the background of an image. AI is great at foreground objects but fails miserably at subtle background elements right now.

1

s6x t1_j1xs2zb wrote

Not really a fair test since the resolution is so low.

1

cyranix t1_j1xtuhy wrote

So, I AM a programmer, and I've got, let's say, a bit more than basic knowledge of machine learning... We'll leave it at that, but suffice it to say I find recent models, especially Stable Diffusion and GPT, remarkable. I also think it's interesting to wonder about how one might differentiate AI from any other abstract type of art...

So a while back, I wrote a script (actually, I wrote several of them, but I digress) that tests certain kinds of data sets for compliance with Benford's Law in a few different ways... For almost any arbitrary set of binary-coded data, I can examine the bit values for compliance, but for things like ASCII text, it is interesting to also look at the specific ASCII-coded values (so for instance, the leading letter "A" might appear roughly twice as often as the letter "B" or "E", depending on how you want to encapsulate the law, but the idea being that the statistical pattern should be roughly the same for all real-world data, and it would show anomalies if that data was artificially tampered with). For things like graphics, I can enumerate pixel/color values, and sure enough, the same pattern holds true. For instance, if you take a picture with a DSLR camera, the raw data encoded by that picture will comply with Benford's Law. If that picture has been touched up after the fact, for instance in Photoshop or GIMP, it is less likely to comply with Benford's Law.

You might wonder how this is useful in analyzing AI data, and I don't have a [coherent] answer for you yet, but I have a hypothesis, which is basically that when looked at the right way, theoretically, AI data should be differentiable from human-created data by virtue of the fact that one will adhere to Benford's Law more often than the other... How, I don't entirely know. The funny thing about that theory is that human data is typically less compliant with the rule; it is natural, ordered data which is more compliant. I'm still working out how this rule might be applied in such a way that makes it easier to detect a difference, but I'm curious whether in the end it will show humans to be more compliant or AI to be more compliant with the rule. Maybe it won't be able to detect the difference. Anyway, it's a side project that I'll probably dedicate some time to when I'm not up to my eyeballs in other things.
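For anyone curious, a minimal sketch of the leading-digit check being described, applied to pixel values with NumPy and Pillow (the file name is a placeholder; this is not the commenter's actual script):

```python
# Compare the leading-digit distribution of an image's pixel values to Benford's Law.
import numpy as np
from PIL import Image

BENFORD = np.log10(1 + 1 / np.arange(1, 10))  # expected frequency of leading digits 1..9

def leading_digit_distribution(values):
    """Return the observed frequency of first significant digits 1..9."""
    values = values[values > 0]  # leading digits are undefined for zero
    digits = (values / 10 ** np.floor(np.log10(values))).astype(int)  # first significant digit
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

img = np.asarray(Image.open("sample.png").convert("RGB"), dtype=np.float64)  # placeholder path
observed = leading_digit_distribution(img.ravel())

# A simple divergence score; lower means closer to Benford's distribution.
score = np.abs(observed - BENFORD).sum()
print("observed:", np.round(observed, 3))
print("expected:", np.round(BENFORD, 3))
print("total absolute deviation:", round(score, 3))
```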

3

modeless t1_j1xw8xu wrote

71/100 exactly here too. I found Midjourney most convincing. Easiest tells I found are hands (obviously), signatures or any other lettering, malformed objects in general, and anything with symmetry or duplication. Funny that AI would be bad at duplication!

3

modeless t1_j1xwo4s wrote

Very cool. Literature is tough without being that familiar with the authors. Even so, I think longer snippets would be pretty easy. A sentence of only ten or so words out of context is not really much to go on.

5

TheAxeMan2020 t1_j1xzaem wrote

I went 19/30 on paintings. I had zero knowledge of AI, but I did scrutinize too much given it was a challenge. It is remarkable what AI can do; however, I am sure experts in the individual genres will still just laugh and point out the fakes. Let's see what happens next. Pair it with a collaborative robot to actually paint in oil on canvas and I'll be impressed.

1

kingwhocares t1_j1y3ac5 wrote

It's actually easier if you look at it with this in mind:

Paintings are done on some type of paper or canvas, and over time both the surface and the ink/paint/oil become old. The AI has to fake it, so look for the faked effect. Originals have things like lines and wrinkles.

3

starstruckmon t1_j1y6eon wrote

I don't think that method is foolproof, but yeah, if I took it again or gave each one more time, I'm sure I'd get better results. I think the ones I got most wrong are:

  • Some of the really bad, smudgy stuff that was apparently done by humans.

  • Some of the really good ones that were done by Midjourney. Though I do think I can spot these much more easily now. They're kind of uncannily good: a bit too photoreal while not actually being photoreal. The paper thing also seems to work on these ones.

3

CalligrapherFine6407 t1_j1y6z0a wrote

There is literally no way of knowing or differentiating between AI and real human art!

I was just guessing all through 😆

This shit (new era of AI generated content) is freaking scary!

1

cyranix t1_j1y6z4q wrote

Even more fascinating would be if such a test could be developed, and if it were then further possible to train an AI to pass the test. As with all questions of this nature, the real end-game is like the Turing Test. If the AI can be trained so well that no (blind) test can differentiate between the AI and a human, what are the implications of that?

2

Dicitur OP t1_j1y7c43 wrote

I often think about this line from Westworld where the main character meets this beautiful woman in a world where there are very realistic androids. After 5 minutes of conversation, he asks her: "Are you real?" and she answers "If you can't tell, does it matter?"

1

Slothvibes t1_j1y8q0e wrote

23/40. I never get the sanguine ones right… smh

2

tavirabon t1_j1yc2ow wrote

We were doing AI art Turing tests with SD 1.4

Most people got a little above chance, but the artists tended to correctly identify the human artists and miss some of the AI. It comes down to experience, really: you can see brush techniques (especially digital brushes) and pick up on things like how some aspects of AI will be inconsistent in terms of skill level across the same image, or how human art will take shortcuts by reusing parts of the image. The test images were carefully picked so you couldn't determine it was AI by obviously bad anatomy, text, etc.

2

tavirabon t1_j1ycnu6 wrote

That's not an inherent thing with AI though. Humans can do blurry hair, and actually my experience with SD is that hair tends to come out pretty sharp when the generation is good, especially with upscaling.

1

probably_sarc4sm t1_j1yebe6 wrote

I loved doing this (I went all the way to the end of the paintings)--Thanks! My only complaint is that the images are scaled up too much and that causes artifacts in all the paintings, which makes things more difficult. It would also be nice to have a running percentage score.

2

async_andrew t1_j1yif92 wrote

21/40 on paintings, so it's an impossible task for me. Though I'm really proud of my 32/40 in English literature, since it's not my native language and I've never read a line of Byron.

2

sEi_ t1_j1ytk1j wrote

AI-generated text is hard to spot in short sentences, but easier with multiple lines.

2

NDPNDNTVariable t1_j1yxu6o wrote

The thing is, you know what to look out for and are going into the situation with massive bias. Just knowing that you're reading something that is generated by AI will make you try really hard to spot it.

But for most people who have zero idea this exists, who don't care, who only read headlines or take cursory glances at things, I strongly believe you can't tell the difference for anything ChatGPT puts out. The art stuff is the same way. The real scary thing about the art stuff is that it knows how to map out human emotion.

4

Liwet_SJNC t1_j1z8gk0 wrote

I agree? My favourite poem has barely any rhymes. And the AI actually manages rhymes fairly often ("If a man be true and of humble heart / Then none can deny him his rightful part / Love will lead him through the dark of night / And show him the truth that lies before his sight" is AI).

But that's not why poems were easier. It tends to be far easier to identify a poet's style from a brief snippet, and the AI has some trouble even keeping to a consistent metre, let alone riffing on it in a sensible way. Some modern poetry might not bother with metre at all, but that wasn't really a thing for Byron and Wordsworth, and it definitely wasn't for Shakespeare.

Also, every word of a really good poem is usually carefully chosen, because a word out of place stands out like a dropped note in a song. Whereas you can have passages that seem fairly out of place in a novel without overly damaging the overall work. Partly because prose focuses more than poetry on the meanings of the words, and far less on the sound of them. And partly because poetry just tends to be shorter.

You can identify a lot of the AI poetry by reading it aloud and realising it just doesn't sound good. At all.

Likewise, the ideas in the poetry are easier to judge. A passage from a book might tell us 'It was 13 O'clock in April', whereas a poem might tell us that 'April is the cruelest month, it mixes memory and desire'. The AI seems reasonably capable of imitating the factual kind of statement, but less capable of meaningfully dealing with more abstract value judgements. And when it tries you get things like "Through the darkness I forge, To a life I must endure, For this is my journey, My heart must be sure."

Even aside from the fact that it sounds bad, that is the kind of deep meaning I'd expect from a song written by a 13-year-old emo whose parents just don't understand. Not Lord Byron.

2

EgregiousJellybean t1_j1zm9n8 wrote

Absolutely. You’ve articulated it much better than I could. I believe that good poetry needs meter (of some sort, though not as consistent as the Romantics’ adherence to meter or of course Shakespeare’s iambic pentameter).

2

Liwet_SJNC t1_j1zyl1o wrote

I tend to prefer poetry with metre too, but free verse is popular now, and doesn't always stick to a metre. You get things like Marianne Moore's 'Poetry' that just don't have any metre at all, or T. S. Eliot's 'The Waste Land' that flirts with lots of metres but is ultimately faithful to none of them.

2

EgregiousJellybean t1_j1zzr5z wrote

See, I love Eliot's use of meter because he is very precisely economical with it; rather than consistent adherence to meter (like the great poets whom he viewed as his literary predecessors), he uses meter for effect. I haven't read The Waste Land in a while, but I quite enjoyed Four Quartets in part due to his deliberate use of meter.

1

emosy t1_j203npw wrote

I wish my French were better so I could try the French literature. However, I was able to get about 70% correct with English literature because I could tell that the AI would try to generate longer sentences or use more common language, whereas the real authors would use sentence constructions that would seem grammatically confusing nowadays. It's an interesting tell because it seems like the AI is often on par with a modern student trying to replicate the older writing style, but uses more modern language.

A side note: I got the same passage twice in a row while doing the English literature test, so that may be a bug. I believe it was Jane Austen.

1

SoloWingPixy1 t1_j20xh0s wrote

The images should be represented as they are most commonly seen by the public, AI upscaling and all. Degrading traditional art to give AI generated images a fair chance is a bit silly.

2

SoloWingPixy1 t1_j216si9 wrote

Phone screens are often the same resolution as your monitor, 1080p at minimum, so it's not really a valid excuse. The images posted in /r/StableDiffusion and MJ communities are practically never shared at 512x512 either.

2

Tou7and t1_j26xslr wrote

Got 8/12. Maybe I can get a higher score after visiting a museum.

1