Hi.
Anyone got any experience with using (a set of) local LLMs for practicing a new language? (Spanish, not Python). Curious about experiences and knowledge gained.
And, extending that thought: what ‘scaffolding’ would be required around a set of LLMs to be able to do the following? (A rough sketch of one possible loop is included after the list.)
- assess a student’s current proficiency
- set up some kind of study guide
- provide assignments (vocab training, writing prompts, reading comprehension, speaking exercises, listening exercises)
- evaluate responses to assignments
- give feedback on responses
- keep track of progress over time and adjust assignments accordingly
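For concreteness, here is a very rough sketch of what that loop could look like in plain Python. Everything in it is a placeholder: `ask_llm()` stands in for whatever local backend you run (llama.cpp, the API text-generation-webui can expose, etc.), and the prompts and CEFR-based profile are just one way I imagine structuring it.

```python
import json

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this up to your local LLM backend of choice."""
    raise NotImplementedError("plug in your local LLM backend here")

# Minimal learner state kept between sessions
profile = {"language": "Spanish", "level": "unknown", "history": []}

def assess_level(writing_sample: str) -> None:
    # Assumes the model cooperates and returns valid JSON; real code needs retries.
    reply = ask_llm(
        "You are a Spanish teacher. Based on this writing sample, estimate the "
        'student\'s CEFR level (A1-C2). Reply as JSON: {"level": "..."}\n\n'
        + writing_sample
    )
    profile["level"] = json.loads(reply)["level"]

def next_assignment() -> str:
    # Adapt the next task to the current level and recent feedback
    return ask_llm(
        f"Student level: {profile['level']}. Recent feedback: {profile['history'][-3:]}. "
        "Write ONE short assignment (vocab drill, writing prompt, or reading question) "
        "at this level."
    )

def grade(assignment: str, answer: str) -> str:
    feedback = ask_llm(
        f"Assignment: {assignment}\nStudent answer: {answer}\n"
        "Correct it, list the mistakes, and rate it 1-10."
    )
    profile["history"].append({"assignment": assignment, "feedback": feedback})
    return feedback
```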
I *assume* something like this would require multiple models, not just an LLM, in order to handle text-to-speech and automatic speech recognition. Is Whisper (for example) useful for evaluating (and giving feedback on) pronunciation?
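My understanding is that Whisper only gives you a transcription, not a pronunciation score, so the simplest thing I can think of is a crude proxy: transcribe the learner's recording and compare it against the target sentence. A minimal sketch with the openai-whisper package (the word-level comparison is my own naive idea, not part of Whisper):

```python
import whisper

# Crude pronunciation check: if Whisper mis-hears a word, it was probably
# not pronounced clearly. Only a proxy -- Whisper is quite forgiving of accents.
model = whisper.load_model("small")
target = "Me gustaría practicar español todos los días."

result = model.transcribe("learner_recording.wav", language="es")
heard = result["text"].strip()

print("Target:", target)
print("Heard :", heard)

def words(s: str) -> list[str]:
    return s.lower().replace(",", "").replace(".", "").split()

# Naive word-level diff as "feedback" (hypothetical scoring, not part of Whisper)
missed = [w for w in words(target) if w not in words(heard)]
print("Words not recognised:", missed or "none")
```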
Would you care to describe your setup in more detail? Do you have any notes suitable for publishing on github or similar?
I am using text-generation-webui by oobabooga: https://github.com/oobabooga/text-generation-webui
One of the built-in extensions is whisper_stt; you will need to enable it in the webui settings. https://github.com/oobabooga/text-generation-webui/tree/main/extensions/whisper_stt
I have been using ELYZA-japanese-Llama-2-7b. Other models specific to your target language should work. https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b
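If you want to poke at a model like this outside the webui, a minimal load with transformers looks roughly like the following (assumes transformers, torch and accelerate are installed and you have enough memory for the 7B weights; the linked model is the base variant, so it continues text rather than following instructions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elyza/ELYZA-japanese-Llama-2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
)

# Base model, so give it something to continue rather than an instruction
prompt = "日本語を勉強する一番いい方法は"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```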
Lastly, I created my own plugin, which is no longer maintained, unfortunately. Starting from a Python script similar to the silero_tts extension, I swapped in calls to Bark TTS. I only chose Bark because it had a Japanese model.
https://github.com/suno-ai/bark
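Calling Bark directly looks roughly like this, which is approximately what my script did per message (the v2/ja_speaker_* names are the Japanese voice presets, if I remember them correctly):

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()  # downloads the Bark checkpoints on first run

text = "こんにちは。今日は何を勉強しますか？"
audio = generate_audio(text, history_prompt="v2/ja_speaker_1")  # Japanese voice preset
write_wav("reply_ja.wav", SAMPLE_RATE, audio)
```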
But you might have some luck with the new coqui_tts extension, which is under active development; hopefully they will fix the error I have been having with multi-language support. It's built in, you would just need to install its requirements.txt: https://github.com/oobabooga/text-generation-webui/tree/main/extensions/coqui_tts
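Under the hood the extension uses Coqui's TTS library; calling it directly looks roughly like this (I am assuming the multilingual XTTS v2 model here, which wants a short reference clip for the voice plus a language code):

```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Hola, ¿qué vamos a estudiar hoy?",
    speaker_wav="reference_voice.wav",  # ~10-30 s clip of the voice to clone
    language="es",
    file_path="reply_es.wav",
)
```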