Submitted by Business-Lead2679 t3_1271po7 in MachineLearning
polawiaczperel t1_jed1e9h wrote
I've been playing with LLaMA 7B, 13B, 30B, and 65B, and with Alpaca 30B (both native and LoRA), but this seems much better, and it's only 13B. Nice! Will they share the weights?
pasr9 t1_jefqoii wrote
I'm more interested in them releasing the dataset used to fine-tune it.