• laca_komputilulo@alien.topB
    1 year ago

    Having an LLM clean up or summarize the user query and create a KG from the vector database’s response could lead to more accurate answers.

    That is the promise. Of course, you still need to figure out, for your app domain, whether a concept-level, chunk-level, or some in-between option like CSKG is the right approach.

    One thing I find helpful with prompt design is to spend less attention on writing instructions and replace them with specific examples instead. This swaps word-smithing for in-context learning samples. You build up the examples iteratively: run the same prompt over more text, fix the output, and add it to the example list… until you reach your context budget for the system prompt.
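    A minimal sketch of that loop's end product — a system prompt assembled from accumulated input/output pairs (the task and examples here are hypothetical stand-ins):

    ```python
    # Build a few-shot system prompt from a list of corrected examples.
    # Each pair was produced by running the prompt, fixing the output,
    # and appending it to this list. Task and data are hypothetical.
    EXAMPLES = [
        ("pls fix my order!!", "Customer requests an order correction."),
        ("wheres my refund", "Customer asks about refund status."),
    ]

    def build_system_prompt(examples):
        lines = ["Rewrite each user message as one clean summary sentence.", ""]
        for query, summary in examples:
            lines.append(f"Input: {query}")
            lines.append(f"Output: {summary}")
            lines.append("")
        return "\n".join(lines).rstrip()

    prompt = build_system_prompt(EXAMPLES)
    # Rough stand-in for a token-count check against the context budget:
    assert len(prompt.split()) < 2000
    ```

    The instruction line stays short; the examples carry the real specification of the task.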

    • Some_Endian_FP17@alien.topB
      1 year ago

      Yeah, that’s what I do too: pairing an example input with the expected JSON output. The example idea also works for calculations: instead of telling the LLM each calculation step in words, use real numbers and show the result of each step in sequence.
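      A sketch of that worked-example idea — the prompt demonstrates one calculation with real numbers at every step rather than describing the procedure (the task and numbers are hypothetical):

      ```python
      # Generate a worked calculation example for the prompt, showing the
      # actual intermediate value at each step. Task is hypothetical.
      def worked_example(price, qty, tax_rate):
          subtotal = price * qty
          tax = round(subtotal * tax_rate, 2)
          total = round(subtotal + tax, 2)
          return (
              f"Input: price={price}, qty={qty}, tax_rate={tax_rate}\n"
              f"Step 1: subtotal = {price} * {qty} = {subtotal}\n"
              f"Step 2: tax = {subtotal} * {tax_rate} = {tax}\n"
              f"Step 3: total = {subtotal} + {tax} = {total}"
          )

      system_prompt = (
          "Compute the order total, showing each step as in the example.\n\n"
          + worked_example(4.5, 3, 0.08)
      )
      ```

      Generating the example in code also guarantees the numbers in the prompt are actually consistent, which matters: an LLM will happily imitate a wrong worked example.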

      Sometimes vector search returns inaccurate results for really short queries, or for ones with misspellings or SMS-speak. I find it helps to have an LLM expand and correct the query before creating an embedding vector from it.
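      A sketch of that cleanup step. In practice an LLM call does the rewrite; a tiny lookup table stands in here so the example is self-contained, and all names are hypothetical:

      ```python
      # Expand/correct a short SMS-style query before embedding it.
      # SMS_FIXES is a toy stand-in for what the LLM rewrite would do.
      SMS_FIXES = {"plz": "please", "acct": "account", "pw": "password", "u": "you"}

      def expand_query(raw: str) -> str:
          words = [SMS_FIXES.get(w.lower(), w) for w in raw.split()]
          return " ".join(words)

      def rewrite_prompt(raw: str) -> str:
          # The prompt you would actually send to the LLM instead of SMS_FIXES.
          return (
              "Rewrite the query below as a full, correctly spelled question, "
              "keeping its meaning unchanged:\n"
              f"Query: {raw}"
          )

      cleaned = expand_query("plz reset my acct pw")
      # `cleaned` (or the LLM's rewrite) is what gets embedded, not the raw query.
      ```

      The embedding model then sees a query that actually resembles the well-formed text in the index, which is where the retrieval accuracy gain comes from.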