Comments


Vucea OP t1_j9gzjyx wrote

One side effect of unlimited content-creation machines—generative AI—is unlimited content.

On Monday, the editor of the renowned sci-fi publication Clarkesworld Magazine announced that he had temporarily closed story submissions due to a massive increase in machine-generated stories sent to the publication.

In a graph shared on Twitter, Clarkesworld editor Neil Clarke tallied the number of banned writers submitting plagiarized or machine-generated stories.

The numbers totaled 500 in February, up from just over 100 in January and a low baseline of around 25 in October 2022.

The rise in banned submissions roughly coincides with the release of ChatGPT on November 30, 2022.

41

Ian_ronald_maiden t1_j9h3eon wrote

It’s shit content though. I’m hoping this whole AI thing actually reduces the amount of bad writing overall, by highlighting just how terrible so much of it is.

Things like ChatGPT are appallingly bland and dull wordsmiths.

33

FuturologyBot t1_j9h48ll wrote

The following submission statement was provided by /u/Vucea:


One side effect of unlimited content-creation machines—generative AI—is unlimited content.

On Monday, the editor of the renowned sci-fi publication Clarkesworld Magazine announced that he had temporarily closed story submissions due to a massive increase in machine-generated stories sent to the publication.

In a graph shared on Twitter, Clarkesworld editor Neil Clarke tallied the number of banned writers submitting plagiarized or machine-generated stories.

The numbers totaled 500 in February, up from just over 100 in January and a low baseline of around 25 in October 2022.

The rise in banned submissions roughly coincides with the release of ChatGPT on November 30, 2022.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/118gzrp/scifi_becomes_real_as_renowned_magazine_closes/j9gzjyx/

1

davidolson22 t1_j9hjy7d wrote

Are we pretending that scifi magazines are popular?

−15

JoaoMXN t1_j9hmrkp wrote

Keep believing that. ChatGPT is a very simple AI; imagine the new ones being built with billions more data points and far more complexity. And given how much trash blogs and news sites already publish with humans writing them, AIs will easily destroy those jobs in the future.

4

DomesticApe23 t1_j9hmune wrote

It's not the same thing as doing it. Are you familiar with the concept of the Chinese Room?

Currently AI can trawl data and build human language into sensible sentences and paragraphs. It understands nothing. All it needs to do to mimic meaning, or to further expand on its 'creative' properties, is to keep on learning.

14

FindorKotor93 t1_j9hnf62 wrote

Let's put it this way: you take someone else's insight on the human experience and change the words in a way that doesn't impact understanding. You now have an insight on the human experience that may resonate better with certain people purely based on the language used. If the AI figures out how to do that reliably, things are going to get very fucky for writers.

7

Ian_ronald_maiden t1_j9hodn4 wrote

That’s not insight though, it’s just plagiarism.

Life might get difficult for shitty writers, but it’s very interesting to think about AI perhaps communicating some beautifully crafted artistic truth that we’ve never considered.

5

Ian_ronald_maiden t1_j9hp0mm wrote

That’s not quite what I’m talking about.

The nature of actual artistic insight means it is impossible to mimic, by virtue of the fact that successful mimicry in this sense cannot exist: successful mimicry of insightful art would just be actual insightful art.

It's not a question of "can a computer create something that looks like art?" We know it can. We already know that ChatGPT can produce good writing from the perspective of someone with no artistic understanding.

What's fascinating here is the idea that AI could create actual art, because if a machine is able to create something from which people gain a new or unique perspective, via whatever artistic medium, then we have machines that aren't mimicking; they're just doing.

0

FindorKotor93 t1_j9hprrt wrote

New Writers.* Pretty much anyone without an existing brand could be indistinguishable from AI at that point. Authors don't constantly generate new, unique insights into the human experience; they put down a version that resonates with different people, and it is no more plagiarism than GRRM plagiarised Tolkien, or Tolkien plagiarised myth, or Lewis plagiarised the Bible.
If the AI can teach itself how to conserve meaning whilst rewriting, then the written word becomes a dangerous world indeed.

5

Ian_ronald_maiden t1_j9hqhl2 wrote

Wasn’t photography supposed to destroy painting as well though?

If new writers cannot provide a single original thought, then perhaps they don’t deserve to break in anyway. No one is actually owed a successful novel, and if an expert craftsman can’t produce something any better than a literary sausage maker, then, well… perhaps this can provide some impetus for a sorely needed new phase of creativity.

Because it is quite notable that no one has done anything truly new and game-changing since Tolkien - and he started writing more than a century ago.

0

DomesticApe23 t1_j9hr70b wrote

That's not even new. People have been finding meaning in sunsets and the sound of babbling brooks for millennia. People already assign meaning to nonsense, are unable to distinguish bullshit from meaning, and Rupi Kaur is a famous poet. You can generate trite verse with ChatGPT right now that is just as meaningful as her banal nonsense, and if you market it right people will lap it up. What's the difference?

It's not an intrinsic property of the work you're talking about; it's perception. Right now ChatGPT sucks at creating fiction, not because 'it still doesn't understand'. It will never understand. But all it has to do is complexify its model enough that it encompasses longer forms. All that takes is raw data.

I don't really know what you mean by 'actual art'.

2

KillianDrake t1_j9i06gw wrote

I don't understand what the problem is; they should be glad to receive so many new entries. Just continue to read them, pick the best ones, and publish them. Too many? Overwhelming? Utilize an AI to read and categorize them for you so you end up with a filtered pool similar in size to your previous workload.

All that matters is the best content gets published, regardless of who wrote it or how they wrote it. I mean, it's no different than an author who reads Game of Thrones and then decides to publish a novel using the same character-per-chapter format and themes but with new characters, events and environment... Oh wait, that was The Expanse!

−11

KillianDrake t1_j9i0ysj wrote

And people don't always need new original thoughts. They just want to be entertained cheaply. If an AI can write a full 10-novel series in an hour that entertains people enough and only costs a quarter, then... so be it? Better than waiting for 20 years for a series that the author's gotten tired of (ahem, GRRM).

The real thing being protected here is the gatekeeping. We no longer need editors, we no longer need publishers, we no longer need bored millionaire authors...

0

DomesticApe23 t1_j9i25rb wrote

Write me a paragraph sarcastically mocking Avatar: The Last Airbender movie

Oh, where do I begin with Avatar: The Last Airbender movie? It's a masterpiece! An absolute cinematic treasure that completely did justice to the beloved animated series. Who needs compelling character development and a coherent storyline when you have a cringe-worthy script and laughable acting? And why bother staying true to the original source material when you can just completely butcher it beyond recognition? The stunning visual effects and impressive choreography are obviously enough to make up for any other shortcomings. Clearly, the filmmakers had a deep understanding of what made the animated series so iconic and chose to completely ignore it in favor of their own creative genius. Bravo, Avatar movie, bravo.

3

ego_bot t1_j9i3ozo wrote

You are underestimating how difficult it is for the editors to keep up with the volume of submissions. Between minimal staff (volunteer or otherwise) and already razor-thin profit margins, a rocketing volume of entirely garbage AI submissions is a burden magazines can't really afford.

In other words, the AI submissions are not the best content. Not even close. They are simply muddying the waters and making it harder for the editors to find the good human stuff because they have that many more documents to open, that many more submissions to reject, that many more accounts to ban. It's just spam.

As for using AI to sort and filter out the AI-generated submissions, the detection tech only has about a 50% success rate at the moment. The editor commented on this; it's simply not accurate enough yet.
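To see why a ~50% hit rate is useless in practice, here is a minimal sketch (not the magazine's actual tooling, and the labels are made up): on a balanced pool of submissions, a detector that just flips a coin lands at the same accuracy.

```python
# A coin-flip "detector" on a balanced set of submissions scores ~50% --
# the same accuracy quoted above, i.e. no better than guessing.
import random

random.seed(0)

# Hypothetical ground truth: True = AI-generated, False = human-written.
submissions = [random.random() < 0.5 for _ in range(10_000)]

def coin_flip_detector(_submission: bool) -> bool:
    """Ignores the submission entirely and guesses at random."""
    return random.random() < 0.5

hits = sum(coin_flip_detector(s) == s for s in submissions)
print(f"coin-flip accuracy: {hits / len(submissions):.1%}")  # roughly 50%
```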

13

KillianDrake t1_j9i4x9y wrote

Progress moves forward, old ways die off; adjust and adapt. Content is now cheaper to produce. If it's true that all AI content is trash, then people will ignore it and gravitate to the "real" stuff. But I think we all know that's not actually true, and that people will gravitate to whatever is interesting, and that's what scares the gatekeepers. What if the AI stuff is just as interesting as their own? What happens to "me"? What if this is just temporary and in a few years AI makes another leap forward? People will adjust and adapt and become better prompt writers, and if they can direct the AI better than average, then they'll be fine.

−5

fwubglubbel t1_j9iai68 wrote

>due to AI writers

No. It's due to many idiots using a single "AI writer".

10

Shelsonw t1_j9iaqlv wrote

And so it begins: the flood of cheap content that will bury human-made content in a morass of garbage until we won't know what's fake or real. All to the cheers and applause of what a brilliant future we are creating….

49

aft_punk t1_j9id1ve wrote

I actually think it has the possibility of swinging the other way (at least in some areas).

Content will be read and judged more critically, to select the gems among the sea of drivel (which humans are quite capable of producing without the assistance of AI).

This article is about the editor's attempts to block AI content; it's hard to see how a publication will be able to curate high-quality content without a bit of my theory playing out.

11

Ian_ronald_maiden t1_j9igle0 wrote

Sure. But derivative, cliché-ridden rehashes are no great loss to anyone.

The gatekeeping of reliable sources, however, as we've seen in recent years, is a critical function.

The refusal of Zuckerberg, Musk, etc. to accept editorial responsibility for their tech platforms has been disastrous.

3

ExasperatedEE t1_j9ijpww wrote

There's nothing fake about it.

And garbage is garbage, whether it's created by humans or created by AI.

If ChatGPT is generating garbage, and this magazine can't tell the difference between the work an AI is spitting out and what humans are putting out, then what the humans were putting out was also garbage.

−7

SandAndAlum t1_j9ik8xx wrote

The Chinese Room is just an exercise in shuffling complexity around and an argument from incredulity. Nothing is proven other than that the human in the room isn't the person being spoken to, which we started with in the premise.

1

SandAndAlum t1_j9ilsm5 wrote

All of Searle's no-simulation arguments consist of making an information processing machine out of silly parts, hiding how much information such a system would contain, and then saying 'look those parts are silly! There can't be meaning here'. It's pointless and circular.

But neither you nor he has defined meaning, and you are saying nothing about whether or not meaning is an emergent property. Facile dismissals based on the presumption that it cannot emerge are what's hollow. Pointing out how tautological that argument is is not.

0

DomesticApe23 t1_j9im59f wrote

ChatGPT is literally a Chinese Room. It understands nothing, yet it delivers meaning well enough, just as the Chinese Room translates Chinese well enough. Your failure to understand the specifics of ChatGPT's software is exactly analogous to 'hiding how much information such a system would contain'.

1

SandAndAlum t1_j9in9v7 wrote

Your presupposition that understanding cannot emerge from a table of numbers and some rules for multiplying and adding them is also your conclusion: that no understanding or new meaning can emerge.

Your conclusion is identical to your assumption, so you're just extremely arrogantly saying nothing, then even more arrogantly falling back to an argument from authority where someone else did the same thing.

−1

SandAndAlum t1_j9io0d3 wrote

There is the kinda-open question of whether there are physical phenomena that cannot be modelled as an information process. True randomness would be one. Free will (insofar as the phrase is at all well defined) would potentially be another.

If so, then not all physical phenomena are reducible to information processes, and "meaning" could be one of them.

3

CaseyTS t1_j9io8yn wrote

The thing you were talking about, per the comment, was developing deep and unique insights about the human experience. Yes, you can do that with a generative model that does not have subjective experience. It can intelligently and creatively synthesize information from vast amounts of documented human experience. That is literally what generative LLMs are designed to do - learn from humans and talk about it.

0

jawshoeaw t1_j9ip89n wrote

Maybe. I have been impressed with ChatGPT, but mostly in its ability to replicate the tedious and practical: the things so many of us must do for a paycheck. You know that feeling when you love a song and wonder, will there ever be another song this good? Or a book where you're literally depressed that it's over and want to cry that nothing written will ever make you feel that way again? I don't believe that will be reproduced by an AI. If it is, I'm done.

8

r2k-in-the-vortex t1_j9ipdz9 wrote

The problem is that they were paying for submissions by word count. Well, what do you expect? Of course you get massive word counts of garbage.

1

Ehgadsman t1_j9isk7v wrote

Great, a new reason why we can't have nice things, to go along with all the old reasons. I am starting to feel 'fuck this AI bullshit'. Keep it to medical and scientific research and ban its use for art and literature. It's literally dehumanizing and degrading.

17

ego_bot t1_j9iuoq4 wrote

Valid points. It will be interesting to see what happens when AI art is actually competent and enjoyable.

However, it seems that humans inherently enjoy something less if they know it has been generated by a program in a few seconds. There is no creative process, no soul, "a mockery of what it means to be human." The AI itself isn't even a thinking being, not even close (though one day that could change).

You are right about one thing. We will adapt, one way or another.

5

hxckrt t1_j9iv5lh wrote

It's good at replicating text patterns, but it doesn't reason, and it can basically only copy humans chatting. Midjourney might have been a better example. The point is that those systems will fundamentally not surpass humans, just become better at copying us.

5

Zer0pede t1_j9ix7nj wrote

Yeah, before if a bad writer wanted to submit something, they’d actually have to take the time and effort to write it. That slows them down and weeds out the lazy ones. Now they just have to write a prompt. Nothing to slow them down and nothing to weed out the laziest. Having to read the first several paragraphs of hundreds of submissions just sounds miserable—literally more work than it took them to “write” it. I would absolutely ban everyone who wasted my time like that.

8

scummos t1_j9iy72c wrote

> I actually think it has the possibility of swinging the other way (at least in some areas).

I think that's without alternative, honestly. The internet has long suffered from a problem with algorithmically curated, if not outright generated, content. That era will come to an end, because even now googling things tends to yield heaps of garbage to sift through to get to the one good piece of information, and it will get worse quickly with all this AI tooling available.

I guess people will turn back to reading their favourite blogs and websites more (or the more modern counterparts realized in Instagram or whatever -- same concept: you look at content created or curated by a specific person), and explore via what people they trust point them to. Which is probably a good thing, since exploring the algorithm-curated landscape was usually not particularly great. (I'm looking at you, "YouTube related videos".)

I think the algorithmically curated landscape (e.g. Google search) will retract more and more towards just buying/selling things, because that's a transaction with real money and real things attached, which cannot plausibly be fudged by AI.

4

walkingmonster t1_j9j9lsj wrote

AI "art" is just an amalgam of 1000 things we've seen 1000 times before (but now with goopy mutant hands). It's a revolutionary tool for content creation, but it doesn't come close to actual human creativity/ ingenuity. It will always be derivative.

3

Shelsonw t1_j9je5kp wrote

And I view that opinion about quality as so short-sighted. Like, it's about quality today, but the technology has functionally been around for what, a year? If we don't think it's going to improve, then we're crazy. I mean, it's already secretly winning art competitions. That's quality production.

https://www.theinertia.com/surf/ai-generated-surf-image-wins-australian-photo-competition/#

−1

Vezeri t1_j9jtttl wrote

And it really still is, because they copy from existing material but don't think for themselves. Current AI is more capital-A Artificial, like cheese whiz, and not really capital-I Intelligent like humans are. It is impressive how well it imitates things, but the key point is that it only imitates; it doesn't make anything that doesn't already exist. Maybe one day we will have true AI that will surpass humanity, but that really isn't ChatGPT or Midjourney lol.

2

Top_Requirement_1341 t1_j9jx8w1 wrote

There are already services which detect plagiarism.

Those services also need to query the AI services, which must be required to report anything similar to generated content.

It's also a short-term fix for student submissions.

It doesn't help if/when everyone has an AI running locally on their phones/PCs.

2

ExasperatedEE t1_j9k3ibt wrote

Well, in that case you could argue the AI cheated. It didn't take a photo. It PAINTED the image. If a human used Photoshop to create a photorealistic image that won a photography competition, they would also be cheating, and lose, if caught.

> or is the very best we can do also garbage?

It's photography. It's a hobby where, if you are wealthy enough to afford the equipment, travel to exotic places, and hang out long enough to spot a cool-looking animal, you can win prizes by pointing, adjusting focus, and clicking a button at the right time. It doesn't require a huge amount of skill. Someone can be a naturally talented photographer with almost no training, whereas being a highly skilled artist requires decades of practice. Don't tell me that the award-winning photo of the Afghan girl isn't a photo that almost any mall portrait photographer could have managed to snag, had they been in the right place at the right time.

So maybe the problem really is that we're giving wealthy people awards for mediocrity? Even art is not immune to this. There is a hell of a lot of "art" that sells for a lot of money which is literally just a pile of garbage in a corner. But hey, the AI can't produce that, yet, right? That's a physical thing.

So maybe the solution here is for artists to go back to mediums that are physical, like acrylic paint on canvas, and then sell those works for a lot of money instead of just mass printing their stuff on a laserjet? I know I wouldn't buy a laserjet image, human or AI generated, but something with acrylic or oil that has depth to the brush strokes? That's something worth hanging on your wall and paying for.

2

yaosio t1_j9kdjhz wrote

Bing Chat uses a better model than ChatGPT, which results in better-written stories. The biggest improvement is that I don't have to tell Bing Chat not to start the story with "Once upon a time." It's now at the level of an intelligent 8-year-old fan fiction writer who needs to write their story as fast as possible because it's almost bedtime. https://pastebin.com/G8iTJmqk

Every time they improve the model it becomes a better writer. I remember when AI Dungeon had the original GPT-3 and it could not stay on topic, and that was fine-tuned on stories.

1

KillianDrake t1_j9km4p9 wrote

How do you think humans learn? By being forced to read and learn from a ton of existing material... As a blubbering mass of baby fat, you don't know how to speak, write, or do anything unless someone shows you from EXISTING MATERIAL.

0

KillianDrake t1_j9kmqd9 wrote

what is "reason"? humans simply have more neurons firing in an insanely efficient manner.

when ML reaches the same number of "neurons" firing, it will produce the same kind of results. then it will be focusing on increasing the efficiency.

there is nothing special about humans

−2

K----_ST t1_j9lmxwd wrote

As long as you have the option to upvote or 'like' or 'retweet' things, your theory doesn't hold true. The internet is about popular opinion even if it's not right.

1

K----_ST t1_j9lnojp wrote

Just had this convo in the Midjourney sub. Not only is it creating entitled, low-effort individuals, it's also teaching them to use descriptive words and phrases for concepts incorrectly. But they don't care, because it yields output that looks good to them.

2

Zer0pede t1_j9lom1q wrote

“Write me a book in the style of Leonardo DaVinci. Greg Rutkowski. Not ugly. Anatomically correct hands. Masterpiece. Beautiful woman. Greg Rutkowski. Makoto Shinkai. Anime. Greg Rutkowski. Normal fingers.”

2

K----_ST t1_j9lomg4 wrote

It isn't, though. There's quite a bit wrong with that image from a surfer's perspective, and the judges weren't surfers, which is why she talks about the 'perfection' appealing to the layperson. A surfer is going to know that the break is weird, or that the flow and physics of the whitewash don't make sense.

In general, most people who aren't privy to a specific discipline are ignorant of it. My neighborhood fb group is filled with people who couldn't tell the difference between a well post-processed photo and one that's massively clipped in the highlights with maxed out clarity and saturation.

2

hxckrt t1_j9lvpfi wrote

When you make a chip with just as many transistors as a calculator, does it automagically become a calculator? No, it needs to be wired for the job and you need to program it. In the same way, neural networks need weights and biases, their "training".

You can get the calculations going, but where are you getting the training data to make art and music superhuman? Because that's what the argument is about. Are you going to model the subjective appreciation of it? That doesn't work, because you can't write a loss function for what "better" art is.
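A minimal sketch of that last point, with hypothetical functions (neither is a real library call): a self-improvement loop needs a score the machine can compute on its own. A finished Go game reduces to a single number, while "better art" has no agreed-upon formula, so the loop still needs human raters.

```python
# Sketch only: go_score and artistic_merit are hypothetical stand-ins.
def go_score(black_territory: int, white_territory: int, komi: float = 6.5) -> float:
    """Objective: a finished Go game reduces to one number."""
    return black_territory - (white_territory + komi)

def artistic_merit(story: str) -> float:
    """Subjective: no agreed-upon formula exists, so a training loop that
    needs this number ends up asking humans to rate the output."""
    raise NotImplementedError("no loss function for what 'better' art is")

print(go_score(black_territory=78, white_territory=66))  # 5.5 -> Black wins
```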

1

Rofel_Wodring t1_j9m1l2v wrote

>There is the kinda-open question of whether there are physical phenomena that cannot be modelled as an information process.

Spiritualists pretend like there is so they can have a scientific justification for crap like souls and telepathy, but from a materialist perspective: no, there isn't. If it can't be modelled as an information process, it doesn't fucking exist.

For example: randomness can be modelled as an information process. It's probably one of the easiest ones there is. It only seems complex because our brains are bad at handling iterative probability, or even non-linear change.

But that just means we're weak babies with simple minds, unable to comprehend the full consequences of our actions. It doesn't mean that it's actually a difficult thing to simulate in an information process, and it certainly doesn't mean that there exist physical phenomena that cannot be modelled as an information process. Because, again, such things don't and can't exist outside of spiritualists' imagination.

1

Rofel_Wodring t1_j9m2k22 wrote

What SandAndAlum means is that the Chinese Room Experiment shuffles the responsibility for explaining humanity's (self-oriented and essentialist) viewpoint of consciousness onto the computer. It just takes human consciousness as a given that doesn't have to justify itself, and certainly not through reductionism.

Because if our mode of consciousness had to justify itself by the same rules as the computer in the Chinese Room experiment, we'd fail in the same way the computer would fail.

1

SandAndAlum t1_j9m3r1g wrote

> For example: randomness can be modelled as an information process. It's probably one of the easiest ones there is. It only seems complex because our brains are bad at handling iterative probability, or even non-linear change

You can model stochastic systems, but a Turing machine cannot produce a non-deterministic output. You can model the random system as a whole, but there is no rule saying when each particle will decay.

It could be some variant of superdeterminism/Bohmian nonsense, but that's even more mystical than souls. A block universe or many-worlds doesn't tell you why you're the you experiencing one branch and not the you experiencing another.
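A minimal illustration of the first point (my own sketch, in Python): a deterministic program can model a random process, but the same seed always reproduces the same "random" sequence, so it's pseudo-randomness rather than genuine non-determinism.

```python
# Deterministic machines can simulate randomness, but only reproducibly.
import random

def simulated_decays(seed: int, n: int = 5) -> list[float]:
    rng = random.Random(seed)          # deterministic generator
    return [rng.random() for _ in range(n)]

assert simulated_decays(42) == simulated_decays(42)  # identical on every run
print(simulated_decays(42))
```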

1

Outside-Car1988 t1_j9m3ucs wrote

I don't think we have anything to worry about if that robot in the picture is trying to use his TRS-80 Color Computer backwards.

1

Rofel_Wodring t1_j9m3zf6 wrote

It's also the only kind of art that exists, will ever exist, or even can ever exist.

Unless you're one of those spiritualists who think artistic talent comes from ~the human spirit~ instead of something more mundane and deterministic such as 'the artist's wartime experiences as a child' or 'exposure to hundreds of other artists of that genre'.

1

KillianDrake t1_j9ma7x7 wrote

Adversarial networks, the same way they trained AlphaGo: once you have something that can produce and understand stories, it can rate them. It will generate and rate itself millions of times faster than the human race did, and just like AlphaGo became dominant enough to take down Go grandmasters, so will this.

No point fighting against it, learn to adapt, learn to adjust.
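A minimal sketch of the generate-and-self-rate loop being proposed here; `generate` and `rate` are hypothetical stand-ins, not any real model's API. The whole scheme hinges on `rate` meaning something without human judges, which is exactly what the reply below disputes.

```python
# Hypothetical self-play loop: generate many drafts, keep the self-rated best.
import random

random.seed(1)

def generate() -> str:
    """Hypothetical story generator."""
    return f"draft #{random.randint(0, 999)}"

def rate(story: str) -> float:
    """Hypothetical self-rating; for Go this is a real score, for fiction it isn't."""
    return random.random()

drafts = [generate() for _ in range(1_000)]
best = max(drafts, key=rate)
print("published:", best)
```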

0

hxckrt t1_j9nq67q wrote

Ah, so the answer is "yes, we're going to model subjective appreciation of art"?

Go has an objective score you can quickly calculate to get better than humans. Writing and art do not, so you're still stuck copying humans, because you need them to rate the output. You're confusing objective score (quantity) with subjective quality.

And "no point fighting against it"? You're starting to sound like the Borg gif. Try to understand how this works before you abandon all hope in favor of our robot overlords.

1