I’m really hopeful about how Music Assistant will improve after reading this. I’ve got the voice blueprint set up, but I’m personally not wild about needing to use OpenAI for it to work well. I’d love to see some built-in intents for it. I have to say that building automations with custom sentences is very cool and relatively easy.
As I understand it, you can also use a locally running LLM. But that requires more power than my Raspberry Pi 4 has.
I’m not sure if Music Assistant has the capability yet, but I presume you could have a no-LLM version if you could trigger a search and play the first result found.
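Something along those lines may already be possible with a custom sentence plus an `intent_script`, since Music Assistant's `music_assistant.play_media` service resolves a name to the closest match itself. This is just a hedged sketch, not a tested setup; the intent name `PlayFirstResult` and the `media_player.kitchen` entity are assumptions:

```yaml
# custom_sentences/en/music.yaml -- wildcard captures the spoken query
language: "en"
intents:
  PlayFirstResult:
    data:
      - sentences:
          - "play {query}"
        lists:
          query:
            wildcard: true

# configuration.yaml -- hand the raw query to Music Assistant,
# which picks the best-matching track (no LLM involved)
intent_script:
  PlayFirstResult:
    action:
      - service: music_assistant.play_media
        target:
          entity_id: media_player.kitchen  # assumed Music Assistant player
        data:
          media_id: "{{ query }}"
          media_type: track
```

If that works, the trade-off versus the LLM route is that you get exact-ish matching only, with no "play something relaxing"-style interpretation.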
Music Assistant has Spotify Connect integration, so if you have a Spotify Connect device, maybe you can chain voice command → regular Spotify integration → Music Assistant Spotify Connect device?
Yeah, perhaps. I don’t have a Spotify Connect device, so I can’t try it.
I quite like having the LLM, and listening to all the weird questions the kids ask.
Yeah, I’m in the same boat re: the capability to actually run an LLM.