Can’t say I’ve seen any posts discussing game dev with LLM + grammar, so here I go.
Grammar makes the LLM response parsable by code. You can easily construct a JSON template that all responses from the LLM must conform to.
On a per character basis during RP, the following grammar properties are useful:
- emotion (I had a list of available emotions in my grammar file)
- affectionChange (describes the character’s immediate reaction to the user’s words and actions. Useful for visual novel immersion and progression manipulation)
- location (the LLM is good at tracking whereabouts and movement)
- isDoingXXX (the LLM is capable of detecting the start and end of activities)
- typeOfXXX (the LLM also knows what the activity is; for example, if the character is cooking, a property called “whatIsCooking” will show eggs and ham)
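For anyone who hasn’t used grammars before, here is roughly what a grammar covering properties like the above could look like in llama.cpp’s GBNF format. This is an illustrative sketch, not the file I actually use; the property names and emotion list are placeholders:

```
root    ::= "{" ws "\"emotion\":" ws emotion "," ws "\"affectionChange\":" ws int "," ws "\"location\":" ws string "}"
emotion ::= "\"happy\"" | "\"sad\"" | "\"angry\"" | "\"neutral\""
int     ::= "-"? [0-9]+
string  ::= "\"" [^"]* "\""
ws      ::= [ \t\n]*
```

With this in place, sampling is constrained so the model physically cannot emit anything outside that JSON shape.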
I’m building a Ren’Py game using the above, and the prototype is successful: the LLM can indeed meaningfully interact with the game world, and it acts more as a director than a plain text-gen engine. I would assume a D&D game could be built with a grammar file containing properties such as “EnemyList”, “Stats”, etc.
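To show why this makes the LLM a usable game component: once the grammar guarantees JSON output, the game loop can parse every reply directly into state. A minimal sketch (field names and state layout are illustrative, not from my actual project):

```python
import json

def apply_llm_response(raw: str, state: dict) -> dict:
    """Fold a grammar-constrained LLM reply into the game state."""
    data = json.loads(raw)  # safe to parse: the grammar guarantees valid JSON
    state["affection"] += data.get("affectionChange", 0)
    state["emotion"] = data.get("emotion", state["emotion"])
    state["location"] = data.get("location", state["location"])
    return state

# Example reply shaped the way the grammar would enforce it
reply = '{"emotion": "happy", "affectionChange": 2, "location": "kitchen"}'
state = {"affection": 10, "emotion": "neutral", "location": "bedroom"}
state = apply_llm_response(reply, state)
# state["affection"] is now 12; the engine can switch sprites/scenes
# off state["emotion"] and state["location"]
```

In Ren’Py you’d call something like this from a python block each turn and let screens react to the updated state.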
For my usage, Mistral 7B or 13B models already give sensible/accurate results. Oobabooga allows grammar usage with ExLlama_HF (and AWQ, but I haven’t tried that).
How is everyone using grammar to integrate LLM as a functional component of your software? Any tips and tricks?
How do you get it to work with ExLlama or ExLlamav2?
It works beautifully with llama.cpp, but with GPTQ models the responses are always empty.
zephyr-7B-beta-GPTQ:gptq-4bit-32g-actorder_True:
zephyr-7b-beta.Q4_K_M.gguf:
This is my grammar definition:
Do you need to “prime” the models using prompts to generate the proper output?
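In my experience, priming helps even when the grammar is active: showing the expected JSON shape in the prompt means the grammar only has to constrain the output, not teach it, which can reduce empty or degenerate completions. A minimal sketch of that idea (the schema hint and prompt wording are illustrative assumptions, not tied to any particular loader):

```python
# Hypothetical schema hint embedded in the prompt to "prime" the model
SCHEMA_HINT = (
    '{"emotion": "<happy|sad|angry|neutral>", '
    '"affectionChange": <int>, "location": "<string>"}'
)

def build_prompt(history: str, user_line: str) -> str:
    """Prepend an instruction showing the exact JSON shape the grammar enforces."""
    return (
        "You are the game character. Respond ONLY with JSON matching this shape:\n"
        f"{SCHEMA_HINT}\n\n"
        f"{history}\n"
        f"Player: {user_line}\n"
        "Character:"
    )

prompt = build_prompt("Player: hi", "What are you cooking?")
```

The resulting prompt string is then sent through whatever backend you use with the grammar attached; whether this fixes the empty GPTQ responses specifically, I can’t say.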