• durden111111@alien.top

    Nice. From my tests it seems to perform about the same as LLaVA v1.5 13B and BakLLaVA. I’m starting to suspect that the CLIP-Large vision encoder all of these multimodal LLMs share is holding them back.
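
    If you want to check that they really share the encoder, here’s a rough sketch using the Hugging Face transformers ports (the repo names are my guess at the hosted checkpoints, not something from the models’ cards): both configs should report the same CLIP ViT-L/14-336 vision tower.

    ```python
    # Rough sketch: inspect the vision_config of the transformers LLaVA ports
    # to confirm they use the same CLIP-Large vision tower.
    # Repo names are assumptions about the hosted HF checkpoints.
    from transformers import AutoConfig

    for repo in ("llava-hf/llava-1.5-13b-hf", "llava-hf/bakLlava-v1-hf"):
        vis = AutoConfig.from_pretrained(repo).vision_config
        print(repo, vis.model_type, vis.hidden_size, vis.image_size, vis.patch_size)
        # Expect clip_vision_model, hidden_size 1024, image_size 336, patch_size 14
        # for both, i.e. the openai/clip-vit-large-patch14-336 tower.
    ```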