I had some success putting together a workflow: Whisper to transcribe my speech in my target language, an LLM trained on my target language, and a TTS capable of producing my target language.
This allowed me to practice conversations. Some were hit or miss, which I suspect is because I am very new to my target language, but it was useful and let me practice things like ordering food.
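In rough terms, the loop looks like the sketch below. The record/transcribe/generate/speak callables are placeholders for whatever STT, LLM, and TTS you wire in, not the actual extension code; the pieces I used for each stage are described further down.

```python
# A sketch of the practice loop, independent of any particular backend.
from typing import Callable, Dict, List

def practice_loop(
    record: Callable[[], str],              # returns the path to a recorded wav
    transcribe: Callable[[str], str],       # wav path -> target-language text (STT)
    generate: Callable[[List[Dict]], str],  # chat history -> reply text (LLM)
    speak: Callable[[str], None],           # reply text -> audio out (TTS)
    turns: int = 5,
) -> None:
    history: List[Dict] = []
    for _ in range(turns):
        user_text = transcribe(record())
        history.append({"role": "user", "content": user_text})
        reply = generate(history)
        history.append({"role": "assistant", "content": reply})
        speak(reply)
```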
The language app Memrise has a similar system, of course for a price.
I am using the text-generation-webui by oobabooga https://github.com/oobabooga/text-generation-webui
One of the built-in extensions is whisper_stt; you will need to enable it in the webui's settings. https://github.com/oobabooga/text-generation-webui/tree/main/extensions/whisper_stt
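If you're curious what the extension is doing under the hood, the openai-whisper API is tiny. A minimal sketch, assuming openai-whisper is installed and you have a recorded audio.wav (the file name and model size are placeholders):

```python
import whisper

# "base" is small and fast; "small" or "medium" transcribe noticeably better.
model = whisper.load_model("base")

# Forcing the language avoids misdetection on short clips.
result = model.transcribe("audio.wav", language="ja")
print(result["text"])
```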
I have been using ELYZA-japanese-Llama-2-7b. Other models specific to your target language should work. https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b
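Outside the webui, the same model loads with plain transformers. A minimal sketch, assuming torch and accelerate are installed; the prompt is just an example, and a chat-tuned variant may do better for back-and-forth conversation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "elyza/ELYZA-japanese-Llama-2-7b"
tokenizer = AutoTokenizer.from_pretrained(name)
# device_map="auto" (needs accelerate) places the model on available GPU/CPU.
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto", device_map="auto")

prompt = "レストランで注文したいです。"  # "I'd like to order at a restaurant."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```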
Lastly, I created my own extension, which is unfortunately no longer maintained. Starting from a Python script similar to the silero_tts extension, I swapped in calls to Bark TTS. I chose Bark only because it had a Japanese model.
https://github.com/suno-ai/bark
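The Bark calls I swapped in look roughly like this. A minimal sketch, assuming the bark package and scipy are installed; the text and speaker preset are just examples (Bark ships several Japanese presets):

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()  # downloads/caches the Bark model weights on first run

text = "ご注文はお決まりですか？"  # "Are you ready to order?"
# v2/ja_speaker_1 is one of Bark's shipped Japanese voice presets.
audio = generate_audio(text, history_prompt="v2/ja_speaker_1")
write_wav("reply.wav", SAMPLE_RATE, audio)
```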
But you might have some luck with the new coqui_tts extension, which is under development. Hopefully they will fix the error I have been having with multi-language support. It's built in; you would just need to install its requirements.txt. https://github.com/oobabooga/text-generation-webui/tree/main/extensions/coqui_tts
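For reference, the standalone Coqui API looks something like the sketch below, assuming the TTS package is installed and using their multilingual XTTS v2 model. This is the documented usage; whether it hits the multi-language error I mentioned may depend on your versions, and the speaker_wav reference clip is a placeholder you supply:

```python
from TTS.api import TTS

# XTTS v2 is Coqui's multilingual model; Japanese is among its supported languages.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="いらっしゃいませ！",           # "Welcome!"
    language="ja",
    speaker_wav="reference_voice.wav",  # short clip of the voice to clone (placeholder)
    file_path="out.wav",
)
```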