
The_frozen_one t1_jbzqvwc wrote

I'm running it using https://github.com/ggerganov/llama.cpp. The 4-bit quantized version of 13B runs fine without GPU acceleration.
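For anyone who wants to try the same setup, the steps below roughly follow the repo's README; the paths, filenames, and thread count are illustrative and depend on where your converted weights live.

```
# build llama.cpp, quantize the converted 13B weights to 4-bit, and run on CPU
# (13B ships as multiple parts; quantize each ggml-model-f16.bin* the same way)
make
./quantize ./models/13B/ggml-model-f16.bin ./models/13B/ggml-model-q4_0.bin 2
./main -m ./models/13B/ggml-model-q4_0.bin -t 8 \
  -p "Building a website can be done in 10 simple steps:"
```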


remghoost7 t1_jbzro03 wrote

Nice!

How's the generation speed...?


The_frozen_one t1_jbzv0gt wrote

With 13B it takes about 7 seconds to generate a full response to a prompt at the default number of predicted tokens (128).
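If you want to benchmark it on your own hardware, something like this gives a rough wall-clock number (the model path and thread count are placeholders). For reference, 128 tokens in ~7 seconds works out to roughly 18 tokens/second.

```
# time a full response at the default 128 predicted tokens (-n 128)
time ./main -m ./models/13B/ggml-model-q4_0.bin -t 8 -n 128 \
  -p "Explain quantization in one sentence."
```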
