- So, I’ve been doing all my LLM tinkering on an M1-- using llama.cpp/whisper.cpp to run a basic voice-powered assistant, nothing new at this point.
- Currently adding a visual component to it-- ShareGPT4V-7B, assuming I manage to convert it to gguf. Once that’s done I should be able to integrate it with llama.cpp and wire it to a live camera feed-- giving it eyes.
- Might even get crazy and throw in a low-level component to handle basic object detection, letting the model know when something is being “shown” to it-- other than that it will only activate when prompted to do so (text or voice).
The one thing I’m not sure about is how to run a TTS engine like StyleTTS2-LJSpeech locally. Are there libraries that support TTS models?
Update: a quick Reddit search (which I should’ve done before posting, tbh) led me to this post: ai_voicechat_script
I just use plain old Web Speech on PC and TextToSpeech on Android. I wasn’t going to say anything because they don’t sound as good as the compute-heavy ones, but they’re… way better than whatever that is!
I’m very tempted to make a server for this one because, after trying it, I’m liking it more than XTTS and VALL-E. It produces fewer artifacts.
There are notebook examples in the repo, so perhaps they can be leveraged into the XTTS API server implementation.
For your project you could also just use Coqui, if that quality level is enough for you.
Tried Coqui and had performance issues-- from what I read online, it doesn’t seem to fully support MPS.
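A quick way to check what’s available before blaming the library: ask PyTorch whether the MPS backend is usable and fall back to CPU otherwise. A minimal sketch (degrades gracefully if torch isn’t installed):

```python
# Hedged sketch: pick a torch device string for Apple Silicon.
# torch.backends.mps.is_available() is the documented check for the
# Metal (MPS) backend; if torch isn't installed we just report "cpu".

def pick_device():
    try:
        import torch
        if torch.backends.mps.is_available():
            return "mps"
        return "cpu"
    except ImportError:
        return "cpu"

print(pick_device())  # "mps" on a working Apple Silicon setup, else "cpu"
```

Even when MPS is available, individual ops a given TTS model uses may be unimplemented on it, which is the kind of partial support being described here.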
For now I’m using edge-tts, which is doing the trick and is pretty decent/free.
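For reference, the edge-tts CLI is a one-liner-- the voice name below is just an example (`edge-tts --list-voices` prints the full list):

```shell
# Assumes `pip install edge-tts`. Note it calls Microsoft's Edge TTS
# service over the network, so it's free but not fully offline.
edge-tts --text "Hello from the assistant" \
         --voice en-US-AriaNeural \
         --write-media hello.mp3
```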
Is XTTS supported on Macs?
It’s Tortoise, so who knows. There is Mac PyTorch now, but you’d have to figure it out from scratch. I’m not sure why nobody is trying it.
When I tried edge-tts it was very mediocre, like Silero.