Submitted by AutoModerator t3_122oxap in MachineLearning
Chris_The_Pekka t1_jdrfr4l wrote
Hello everyone, I have a dataset of news articles and real radio messages written by journalists. I want to generate radio messages that look like the real ones, so that writing them no longer has to be done manually. I originally planned to use a GAN with a CNN as discriminator and an LSTM as generator (as literature from 2021 suggested). However, now that GPT has become very strong, I want to use GPT instead. Could I use GPT as both the discriminator and the generator, or only as the generator? (GPT as generator seems promising, but I would need to do prompt optimization.) Does anyone have an opinion or suggestion, or a paper/blog I might have missed? I am doing this for my thesis and it would help me out greatly. Or maybe I am too fixated on using a GAN structure and you would suggest looking into something else. A sketch of the setup I mean is below.
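For context, this is roughly the GAN architecture I had in mind, as a minimal PyTorch sketch; the vocabulary size, dimensions, and module names are all illustrative. (One known complication: sampled tokens are discrete and not differentiable, which is why the text-GAN literature, e.g. SeqGAN, resorts to policy gradients or Gumbel-softmax to train the generator.)

```python
# Rough sketch of the CNN-discriminator / LSTM-generator GAN described above
# (PyTorch; vocabulary size, dimensions, and names are illustrative).
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 10_000, 128, 256

class LSTMGenerator(nn.Module):
    """Autoregressive LSTM: maps a token prefix to next-token logits."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, tokens):                  # (batch, seq) token ids
        h, _ = self.lstm(self.embed(tokens))    # (batch, seq, hidden)
        return self.out(h)                      # (batch, seq, vocab) logits

class CNNDiscriminator(nn.Module):
    """1-D CNN that scores a sequence of token distributions as real/fake."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(VOCAB_SIZE, HIDDEN_DIM, kernel_size=5)
        self.head = nn.Linear(HIDDEN_DIM, 1)

    def forward(self, token_probs):             # (batch, seq, vocab) soft one-hots
        h = torch.relu(self.conv(token_probs.transpose(1, 2)))
        return self.head(h.max(dim=2).values)   # (batch, 1) real/fake logit
```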
Username2upTo20chars t1_jdrydqw wrote
I am confused by your mention of a GAN structure. If you want to generate natural language text, use a pretrained large language model. You will probably have to fine-tune it for best results, since you don't have access to the giant ones, which do very well with zero-shot prompting.
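Something along these lines would be the standard approach (a minimal fine-tuning sketch with Hugging Face Transformers, assuming GPT-2 as the base model and a paired article/message dataset; the toy data, prompt format, and hyperparameters are illustrative, not a fixed recipe):

```python
# Minimal sketch: fine-tune a pretrained causal LM on article -> radio-message
# pairs. GPT-2, the prompt format, and all hyperparameters are illustrative.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Replace with your real (article, radio message) pairs.
pairs = [("Some news article text ...", "Some radio message text ...")]

def encode(article, message):
    # One training example: the article as context, the message as target.
    text = f"Article: {article}\nRadio message: {message}{tokenizer.eos_token}"
    enc = tokenizer(text, truncation=True, max_length=512,
                    padding="max_length", return_tensors="pt")
    enc = {k: v.squeeze(0) for k, v in enc.items()}
    enc["labels"] = enc["input_ids"].clone()       # causal-LM loss over the text
    return enc

train_dataset = [encode(a, m) for a, m in pairs]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="radio-gpt2",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=train_dataset,
)
trainer.train()

# Inference: prompt with a new article and let the model write the message.
prompt = "Article: <new article here>\nRadio message:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=80,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The point is that the pretrained model already knows how to produce fluent text, so you only need supervised fine-tuning on your pairs; there is no discriminator or adversarial loop to stabilize.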