Easyldur
Easyldur t1_ja06q7j wrote
Reply to comment by TinyBurbz in Meta AI introduces LLaMA: A foundational, 65-billion-parameter large language model by fraktall
You must consider the vast majority of people, who are not creators, illustrators, or artists.
For a person like me, who hasn't been able to draw anything but stick figures since kindergarten, Midjourney is a godsend.
My workflow: Midjourney for the main picture, DALL-E inpainting for corrections (e.g. hands), GIMP for the tiny details, and Topaz Photo AI for the upscale.
With this I can create beautiful pictures for my toddler, things that until six months ago I could never have imagined.
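(Not part of the original comment: a rough Python sketch of that hand-off. Only the DALL-E inpainting step uses a real API, OpenAI's images-edit endpoint; Midjourney, GIMP, and Topaz Photo AI are normally driven through their own UIs, so those stages appear below as hypothetical stand-ins.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def inpaint(image_path: str, mask_path: str, prompt: str) -> str:
    """Redraw the masked regions (e.g. hands) with OpenAI's image-edit endpoint.

    The image must be a square PNG; transparent pixels in the mask mark
    the areas to regenerate. Returns a URL to the edited image.
    """
    result = client.images.edit(
        image=open(image_path, "rb"),
        mask=open(mask_path, "rb"),
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return result.data[0].url

# Hypothetical stand-ins for the UI-driven stages of the pipeline:
# main = generate_with_midjourney("watercolor dinosaur for a toddler")  # 1. base picture
# fixed = inpaint(main, "hands_mask.png", "natural, correct hands")     # 2. corrections
# detailed = retouch_in_gimp(fixed)                                     # 3. tiny details
# final = upscale_with_topaz(detailed)                                  # 4. upscale
```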
Easyldur t1_j9wpkdf wrote
Reply to Meta AI introduces LLaMA: A foundational, 65-billion-parameter large language model by fraktall
Thanks for the detailed explanation.
This could follow the path of Stable Diffusion: a smaller, open-source model comparable in performance to the bigger DALL-E, which in turn gave birth to the more-than-exceptional Midjourney.
Let's see!
Easyldur t1_je6w2av wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I agree with this, in that LLMs are models of language and knowledge (information? knowledge? debatable!), but they are not really models of learning.
Literally, an LLM as it is today cannot learn: "Knowledge cutoff September 2021".
But LLMs certainly display many emergent abilities beyond merely "predicting a list of possible upcoming tokens and choosing one at random".
The fact that even OpenAI, in their demos, uses very human-like prompts to instruct the model toward a certain task makes you understand that there is something emergent in an LLM beyond "write random sentences".
Also, ChatGPT and its friends are quite "meta": they are somehow able to reflect on themselves. There are interesting examples where a chain of prompts asking an LLM to reflect on its previous answer a couple of times produces better and more reliable information than a one-shot answer.
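(Not part of the original comment: a minimal sketch of such a reflect-and-revise loop, assuming the OpenAI Python client; the model name, prompts, and round count are illustrative, not a prescribed recipe.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(messages: list[dict]) -> str:
    """One chat-completion call; returns the assistant's reply text."""
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """Ask once, then have the model critique and revise its own answer."""
    messages = [{"role": "user", "content": question}]
    answer = ask(messages)
    for _ in range(rounds):
        messages += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content": (
                "Reflect on your previous answer. Point out any errors or "
                "unsupported claims, then give a corrected, improved answer."
            )},
        ]
        answer = ask(messages)
    return answer

print(answer_with_reflection("Which planets in the Solar System have rings?"))
```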
I am quite sure that once they figure out how to wire these emergent capabilities to some form of continuous training, the models will become quite good at distinguishing "truth" from "not-truth".