IntrepidTieKnot t1_j9f89yr wrote
I'll build you one for 10 million dollars. Payment is upfront. It'll have a guaranteed prediction rate of almost 50%! So on almost every second trade you execute, you'll make a profit!!! You just need to figure out which of the two trades is the profitable one.
But wait! I'll build you another model that can predict even that with an accuracy of almost 50%. Upfront payment required for that one as well.
You know what?
I'll give you 20% off both models, so you save a lot of money by ordering both at once!
IntrepidTieKnot t1_j9egvzc wrote
Reply to comment by Snoo9704 in [D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM by head_robotics
yes
IntrepidTieKnot t1_j547lq2 wrote
Reply to comment by Ok-Cartoonist8114 in [D] is it time to investigate retrieval language models? by hapliniste
I made a tool that chops documents into chunks, creates embeddings for the chunks via GPT-3, and stores the embeddings in a Redis database. When I run a query, I create an embedding for it and look up the stored embeddings via cosine similarity.
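Roughly, the pipeline looks like this (just a minimal sketch, assuming the pre-1.0 `openai` package, a plain Redis instance without RediSearch, and client-side cosine similarity with numpy; the chunking, key names, and model name are only illustrative):

```python
import numpy as np
import openai
import redis

r = redis.Redis(host="localhost", port=6379)
EMBED_MODEL = "text-embedding-ada-002"  # assumption: any embedding endpoint would do

def chunk(text: str, size: int = 1000) -> list[str]:
    # Naive fixed-size chunking; real splitting would respect sentence boundaries.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model=EMBED_MODEL, input=text)
    return np.array(resp["data"][0]["embedding"], dtype=np.float32)

def index_document(doc_id: str, text: str) -> None:
    # One Redis hash per chunk: the raw text plus the embedding as bytes.
    for i, c in enumerate(chunk(text)):
        r.hset(f"chunk:{doc_id}:{i}", mapping={"text": c, "vec": embed(c).tobytes()})

def query(question: str, top_k: int = 3) -> list[tuple[float, str]]:
    q = embed(question)
    q /= np.linalg.norm(q)
    scored = []
    for key in r.scan_iter("chunk:*"):
        h = r.hgetall(key)
        v = np.frombuffer(h[b"vec"], dtype=np.float32)
        score = float(np.dot(q, v / np.linalg.norm(v)))  # cosine similarity
        scored.append((score, h[b"text"].decode()))
    return sorted(scored, reverse=True)[:top_k]
```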
My question is: isn't that the same thing your tool does? In other words: what can I do with Cherche that I can't do the way I described? Is it that I don't need GPT-3 to get the same result? Or what is it?
IntrepidTieKnot t1_j4dihgh wrote
I'd also like to do this. At the moment I think it's only possible if you retrain the model on the code base in question. But I'm happy to hear from someone with more knowledge how else it can be done.
IntrepidTieKnot t1_jebtzyj wrote
Reply to [R] TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs - Yaobo Liang et al Microsoft 2023 by Singularian2501
Isn't this basically a description of Langchain?