rihard7854@alien.top to LocalLLaMA · 2 years ago
NVidia H200 achieves nearly 12,000 tokens/sec on Llama2-13B with TensorRT-LLM (github.com)
a_beautiful_rhind@alien.top · 2 years ago
70B with 2048 context and a 128-token reply is about 303 t/s. That sounds more reasonable, and that's assuming they aren't quantized. The batch size is just a theoretical batch, I think.
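The distinction the comment draws matters: headline throughput from batched benchmarks is an aggregate across all concurrent requests, not what a single user sees. A minimal sketch of that relationship, where only the 303 t/s figure comes from the comment and the batch size of 8 is a made-up number for illustration:

```python
# Rough sketch: relate aggregate decode throughput to per-request speed.
# Only the 303 t/s figure is from the comment above; the batch size of 8
# is a hypothetical value chosen purely for illustration.

def per_request_tps(aggregate_tps: float, batch_size: int) -> float:
    """Per-stream decode speed when a batch of requests share the GPU."""
    return aggregate_tps / batch_size

# If 303 t/s were the aggregate over a batch of 8 concurrent requests,
# each individual user would see roughly 303 / 8 ≈ 37.9 t/s.
print(round(per_request_tps(303, 8), 1))
```

This is why a "nearly 12,000 tokens/sec" headline and a 303 t/s measurement can both be true: they depend entirely on batch size and sequence shape.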