Hello there,

I’m a student, and my team and I have been assigned to build a chatbot for our university. The chatbot should help other students find information about their courses. Our data will come from manuals on multiple university websites (as PDFs), which we will convert into Q&A data using GPT-4.
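To make our plan concrete, here is a rough sketch of the pipeline we have in mind. The `pypdf` package and the OpenAI client call are our assumptions about tooling, not something we've settled on; only the prompt/chunking helpers below actually run as written.

```python
def build_qa_prompt(chunk: str, num_pairs: int = 5) -> str:
    """Ask the model to turn a manual excerpt into student-style Q&A pairs."""
    return (
        f"Generate {num_pairs} question-answer pairs a student might ask, "
        f"based only on the following excerpt from a university manual:\n\n"
        f"{chunk}"
    )

def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split extracted PDF text into chunks small enough for one prompt."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# Hypothetical usage (requires pypdf and an OpenAI API key):
#
#   from pypdf import PdfReader
#   from openai import OpenAI
#
#   text = "".join(page.extract_text() for page in PdfReader("manual.pdf").pages)
#   client = OpenAI()
#   for chunk in chunk_text(text):
#       resp = client.chat.completions.create(
#           model="gpt-4",
#           messages=[{"role": "user", "content": build_qa_prompt(chunk)}],
#       )
#       print(resp.choices[0].message.content)
```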

However, we are struggling to find a pre-trained LLM that fits our assignment. We’ve researched T5, BERT and GPT-2, but our teacher was surprised by those choices, since newer and more popular models exist. Our chatbot must work in Dutch, but we can translate, so the LLM doesn’t need to be trained on Dutch data. The LLM also can’t be too big, because we don’t have the hardware for very large models.

My question is: is LLaMA a good LLM for making a chatbot?

  • toothpastespiders@alien.topB · 1 year ago

    Oh yeah, you’re absolutely going to want to go with a llama2 model over the options you’ve looked at already. The only one of them I have direct experience with is GPT-2, but even the worst llama models I’ve seen feel like night and day compared to GPT-2.

    Personally, I think you’d be best off going with a combination of fine-tuning with your own data and using RAG in order to get as far away from hallucinations as possible. Not everyone agrees, but I think that both in tandem is the way to go.
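    To illustrate the RAG half of that: retrieve the most relevant Q&A snippet for a user question and paste it above the prompt, so the model answers from your manuals instead of making things up. A real system would use a proper embedding model (e.g. sentence-transformers) rather than this bag-of-words overlap score; this is just a sketch of the mechanism.

```python
def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words that appear in doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the stored Q&A snippet most relevant to the query."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model by prepending the retrieved context to the question."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q: When is the exam period? A: Exams run from June 10 to June 28.",
    "Q: How do I enroll in a course? A: Enrollment is via the student portal.",
]
```

    The fine-tuning half is separate: you'd train the model itself on the Q&A pairs, and use retrieval like the above on top of it at inference time.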

    I think the language is going to be the larger issue. This is just conjecture on my part, but I suspect that a powerful model that’s only trained on ‘your’ Dutch data and is otherwise focused on English would probably perform worse on Dutch prompts than a less capable model that was trained on large amounts of general Dutch-language data in addition to your own.

    I remember this Dutch 7b model was released fairly recently. It was created from a base llama2 chat model, which means it probably also has a lot of the more “corporate style” tone that most people here are trying to avoid. But given the context, I think that might actually be an advantage for you: being safe for work/school is probably a bit of a priority.

    7b also has the advantage of being very light on resource usage. And I mean very, very light. I’ve been using a 7b model for some automated tasks on spare hardware that doesn’t even have a GPU; it runs entirely on an ancient CPU, and while slow, it’s not unbearably so.

    • reallmconnoisseur@alien.topB · 1 year ago

      I agree with fine-tuning + RAG, given that OP already seems to have Q&A pairs; those should be a great starting point as a dataset.

      The language (Dutch <-> English) could be a barrier to reasonable performance with Llama or any other 7B model, but as OP stated, they might be able to use translation for that. I’m not sure whether DeepL could be used here, i.e., using the DeepL API as a wrapper around the code for user input and chatbot output, but it should have pretty good performance for Dutch. I like the idea and would like to test this or see the results when properly implemented, so please keep us updated on your approach u/Flo501
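      A minimal sketch of that wrapper idea: translate the Dutch input to English, run the English-only model, and translate the answer back. The translation functions are injected as parameters, so you could plug in the real DeepL client (its Python library exposes a `Translator.translate_text` method) without changing the chat logic; nothing DeepL-specific is hard-coded below.

```python
from typing import Callable

def wrapped_chat(
    user_input: str,
    chatbot: Callable[[str], str],
    nl_to_en: Callable[[str], str],
    en_to_nl: Callable[[str], str],
) -> str:
    """Run an English-only chatbot between two translation passes."""
    english_question = nl_to_en(user_input)
    english_answer = chatbot(english_question)
    return en_to_nl(english_answer)
```

      One nice property of this shape: you can unit-test the whole flow with stub translators before ever paying for an API call.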