lans_throwaway@alien.top to LocalLLaMA · 1 year ago
Look ahead decoding offers massive (~1.5x) speedup for inference (lmsys.org)
cross-posted to: localllama
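For readers unfamiliar with the technique, here is a rough, illustrative sketch of the verify-and-accept loop at the heart of lookahead-style decoding. It is not the lmsys implementation: the toy `greedy_next` model, the bigram-keyed `ngram_pool`, and the token-by-token verification loop are all simplifications for clarity. In the actual method the candidate n-grams are produced by the model itself in parallel and the whole guess is checked in a single batched forward pass, which is where the speedup comes from while still reproducing exactly what greedy decoding would have emitted.

```python
# Illustrative sketch only: the verify-and-accept idea behind lookahead-style
# decoding, with a toy model standing in for a real LLM forward pass.
from typing import Callable, Dict, List, Tuple

Token = int


def verify_and_accept(
    context: List[Token],
    guess: List[Token],
    greedy_next: Callable[[List[Token]], Token],
) -> List[Token]:
    """Keep the longest prefix of `guess` the model itself would have produced.

    In real lookahead decoding this check is one batched forward pass; here it
    is done token by token purely for readability.
    """
    accepted: List[Token] = []
    for tok in guess:
        if greedy_next(context + accepted) == tok:
            accepted.append(tok)
        else:
            break
    # Always make at least one token of progress, as exact decoding requires.
    if not accepted:
        accepted.append(greedy_next(context))
    return accepted


def decode(
    prompt: List[Token],
    greedy_next: Callable[[List[Token]], Token],
    ngram_pool: Dict[Tuple[Token, ...], List[Token]],
    max_new_tokens: int = 16,
    ngram_len: int = 2,
) -> List[Token]:
    """Greedy decoding that tries to accept multi-token guesses per step."""
    out = list(prompt)
    while len(out) - len(prompt) < max_new_tokens:
        key = tuple(out[-ngram_len:])          # look up a cached continuation
        guess = ngram_pool.get(key, [])
        out += verify_and_accept(out, guess, greedy_next)
    return out[: len(prompt) + max_new_tokens]


if __name__ == "__main__":
    # Toy "model": the next token is (last token + 1) mod 10.
    model = lambda seq: (seq[-1] + 1) % 10
    # Hypothetical pool of previously seen continuations keyed by trailing bigram.
    pool = {(3, 4): [5, 6, 7, 8], (8, 9): [0, 1, 2]}
    print(decode([1, 2, 3, 4], model, pool, max_new_tokens=8))
```

Because every accepted token is one the model would have chosen anyway, the output is identical to plain greedy decoding; the gain is that several tokens can be committed per model call whenever a guess matches.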
FlishFlashman@alien.top · 1 year ago
It seems like this approach could also be useful in situations where the goal isn't speed but rather "quality" (by a variety of metrics).