• lakolda@alien.top · 1 year ago

    The original LLMZip paper mainly focused on text compression. A later work (I forget the name) used an LLM trained on byte tokens, which allowed it to compress not just text but any file format. I think it may have been Google who published that particular paper… Very impressive, though.
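
    For anyone curious how that works in principle: the idea behind these LLM-as-compressor schemes is that an arithmetic coder spends roughly -log2 p bits per symbol, so the better the model predicts the next token (or byte), the smaller the output. Here's a minimal sketch of just the code-length calculation, with a cheap adaptive frequency model standing in for the LLM's predictive distribution. The function name and toy model are illustrative, not from either paper.

    ```python
    import math
    from collections import Counter

    def ideal_code_length_bits(data: bytes) -> float:
        """Bits an arithmetic coder would need for `data` under a toy byte model.

        The coder spends about -log2 p(next byte) bits per byte, so a better
        predictor yields a smaller file. A Laplace-smoothed adaptive frequency
        model stands in for the LLM's next-byte distribution here.
        """
        counts = Counter()
        seen = 0
        total_bits = 0.0
        for b in data:
            # Smoothed probability the model assigns to the next byte
            # (uniform prior over the 256 possible byte values).
            p = (counts[b] + 1) / (seen + 256)
            total_bits += -math.log2(p)
            counts[b] += 1  # update the model; a decoder would do the same
            seen += 1
        return total_bits

    if __name__ == "__main__":
        sample = b"abababab" * 128  # highly predictable input
        bits = ideal_code_length_bits(sample)
        print(f"{len(sample)} raw bytes -> {bits / 8:.1f} bytes ideal")
    ```

    Swap the frequency model for an LLM's next-byte probabilities and the same formula gives the compressed size, which is why a model trained on raw bytes can handle any file format.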