RemarkableGuidance44
RemarkableGuidance44 t1_jdc2opy wrote
Reply to comment by immune_star in [P] CodeAlpaca Code and Data release by immune_star
Yeah, I was wondering why you didn't release them, since it's allowed as long as you're not selling it. :)
RemarkableGuidance44 t1_jdc2k75 wrote
Reply to comment by breadbrix in GPT-4 For SQL Schema Generation + Unstructured Feature Extraction [D] by Mental-Egg-2078
Haha, exactly, the guy has never worked with data. Just imagine getting an audit and not knowing whether your data is right or not. It could have messed up big time and cost hundreds of thousands.
RemarkableGuidance44 t1_jcdsprg wrote
Reply to comment by ivalm in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Fine-tune it yourself for medical... I have it fine-tuned for software and it does a great job.
RemarkableGuidance44 t1_jaatr46 wrote
Reply to comment by Slimer6 in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
Are you saying they don't have English data and can't scrape billions of pages of English content?
They did that 15 years ago and have been doing it ever since. With half a trillion dollars invested and the smartest people in the world, I can tell you they will compete, and they will compete very bloody hard.
They want to show the world that they will be top dog in AI, and you know who is going to help them? Westerners, because they will hire the smartest people in the world.
RemarkableGuidance44 t1_jdc3hut wrote
Reply to comment by immune_star in [P] CodeAlpaca Code and Data release by immune_star
Yeah, I'd like to know what the difference is between LoRA and just a full fine-tune.
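Not the OP, but a rough way to see the difference: a full fine-tune updates every weight in the model, while LoRA freezes the base weights and trains small low-rank adapter matrices on top. A minimal parameter-count sketch (the hidden size and rank below are illustrative, not from CodeAlpaca):

```python
# Sketch: trainable-parameter counts for full fine-tuning vs. LoRA on one
# weight matrix W of shape (d_out, d_in). LoRA freezes W and trains two
# low-rank factors B (d_out x r) and A (r x d_in), using W + B @ A.

def full_finetune_params(d_in: int, d_out: int) -> int:
    """Trainable parameters when updating the whole weight matrix."""
    return d_in * d_out

def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on the same matrix."""
    return r * (d_in + d_out)

if __name__ == "__main__":
    d = 4096  # hypothetical hidden size
    full = full_finetune_params(d, d)
    lora = lora_params(d, d, r=8)
    print(full, lora, lora / full)  # LoRA trains well under 1% of the weights here
```

Quality can differ (full fine-tuning touches all weights, LoRA only a low-rank update), but the memory and compute savings are why LoRA is popular for tuning large models on a single GPU.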