SuperSpaceEye
SuperSpaceEye t1_jdnzekf wrote
Right now? Not really. In the future? It will probably require a ton of data about the person (if you want it to be at least somewhat close).
SuperSpaceEye t1_j68jtkq wrote
Reply to Google not releasing MusicLM by Sieventer
Researchers in general rarely release stuff, sadly.
SuperSpaceEye t1_iwrtt69 wrote
Reply to When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
That will depend on what "we" are. If our consciousness arises from the computation of neurons, then it wouldn't matter what device does the computation or in what form it is done. If, however, there is something more to our consciousness (some quantum stuff, maybe even the existence of souls), then I don't think this question can be answered until we learn more about those processes. I myself am a materialist, but who knows...
SuperSpaceEye t1_iwht6hf wrote
Reply to comment by Jordan117 in ELI5: Why such a big difference in compute cost for different types of media? by Jordan117
Two different tasks. The language model in SD just encodes text into an abstract representation that the diffusion part of the model then uses. A text-to-text model such as GPT-J does a different task, which is much harder. Also, GPT-J is 6B parameters, which only takes about 12 GB of VRAM, not hundreds.
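The "6B parameters → ~12 GB" figure above comes from simple arithmetic: at half precision (fp16), each parameter takes 2 bytes. A back-of-envelope sketch (weights only; it deliberately ignores activations, optimizer state, and framework overhead):

```python
def fp16_vram_gb(n_params: float) -> float:
    """Rough VRAM needed just to hold the weights in fp16
    (2 bytes per parameter), in decimal gigabytes.
    Ignores activations and runtime overhead."""
    return n_params * 2 / 1e9

# GPT-J has ~6 billion parameters
print(fp16_vram_gb(6e9))  # -> 12.0
```

At full fp32 precision the weights alone would take twice that, around 24 GB, which is why half precision is the usual choice for inference on consumer GPUs.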
SuperSpaceEye t1_iwhjfk1 wrote
Well, if you want to generate coherent text, you need quite a large model, because logical and writing errors are easy to spot, and smaller models produce artifacts that ruin the quality of the output. The same goes for music, as we are quite perceptive of small inaccuracies. Images, on the other hand, can have "large" errors and still be beautiful to look at. Images also allow large variations in textures, backgrounds, etc., making it easier for the model to produce a "good enough" picture in a way that won't work for text or audio. That allows image models to be much smaller.
SuperSpaceEye t1_itynygn wrote
GATO is able to retain knowledge across different tasks. It is not, however, able to "generalize" (i.e., improvement in one task did not lead to improvement in a different task), if I remember the paper correctly. So no, AGI was not solved.
SuperSpaceEye t1_irv0w3d wrote
Reply to comment by Mr_Hu-Man in Generation of high fidelity videos from text using Imagen Video by Dr_Singularity
It's "dreamlike" because it originally generates at such a low resolution.
SuperSpaceEye t1_iruxijm wrote
Reply to comment by Saerain in Generation of high fidelity videos from text using Imagen Video by Dr_Singularity
The video generator only creates video at 24x48 pixel resolution and 3 fps.
SuperSpaceEye t1_jdxhwi6 wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2