I tried applying a lot of prompting techniques to 7B and 13B models, and no matter how hard I tried, there was barely any improvement.

  • Kep0a@alien.top · 1 year ago

    It’s definitely not useless; a small model just doesn’t understand instructions as literally as big models do. Instead of asking it to write eloquently, deliver the instruction in the voice and context of what you want, and it’ll convey the meaning much better.

    Bad: “Write like you’re in a wild west adventure.”

    Good: “Yee-haw, partner! Saddle up for a rip-roarin’, gunslingin’ escapade through the untamed heart of the Wild West!”
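
    To make that concrete, here’s a rough sketch of how you could A/B the two prompts against a local 7B model with llama-cpp-python. The model path, the trailing “Story:” cue, and the sampling settings are placeholders I made up, not anything specific:

    ```python
    # Minimal sketch: compare a literal instruction vs. a style-in-context prompt
    # on a local 7B model via llama-cpp-python. The model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/7b-chat.gguf", n_ctx=2048)

    # Literal instruction: small models tend to flatten or ignore this.
    bad_prompt = "Write like you're in a wild west adventure.\n\nStory:"

    # Style delivered in-context: the prompt already sounds like the output you want.
    good_prompt = (
        "Yee-haw, partner! Saddle up for a rip-roarin', gunslingin' escapade "
        "through the untamed heart of the Wild West!\n\nStory:"
    )

    for label, prompt in [("bad", bad_prompt), ("good", good_prompt)]:
        out = llm(prompt, max_tokens=200, temperature=0.8)
        print(f"--- {label} ---")
        print(out["choices"][0]["text"].strip())
    ```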

    You also can’t force it to do what you want; it depends on the training data. If I want it to output a short story, framing the request as a scenario versus a book summary can give wildly different results.

    Honestly, I think prompt engineering is really only needed for smaller models.