  • popalchemist 7 hours ago
    The critiques about local inference are valid if you're billing this as an open-source alternative to existing cloud-based solutions.
    • kstonekuan 7 hours ago
      Thanks for the feedback; I probably should have been clearer in my original post and in the README. Local inference is already supported via Pipecat: you can use Ollama or any custom OpenAI-compatible endpoint. Local STT is also supported via Whisper, which Pipecat will download and manage for you.
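
      For example, the local STT piece looks roughly like this (a minimal sketch; the exact import path for WhisperSTTService can differ between Pipecat versions):

        # Local speech-to-text: Pipecat wraps Whisper and downloads the model
        # on first run, so no cloud STT service is involved.
        from pipecat.services.whisper import WhisperSTTService

        stt = WhisperSTTService()  # pass model=... to choose a specific size
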
  • grayhatter 11 hours ago
    I don't think I'd call anything that only works with a proprietary, Internet-hosted LLM (one you need an account to use) open source.

    This is less voice-dictation software and much more a shim to [popular LLM provider].

    • kstonekuan 7 hours ago
      Hey, sorry the examples weren't more robust. Because this is built on Pipecat, you can easily swap in a local LLM if you prefer, and the project is already set up to let you do that via environment variables.
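
      To make that concrete, the env-driven swap is roughly this (an untested sketch; `OPENAI_BASE_URL` and `LLM_MODEL` are illustrative variable names, not necessarily the repo's, and the constructor signature may vary by Pipecat version):

        import os

        from pipecat.services.openai import OpenAILLMService

        # Any OpenAI-compatible endpoint works, local or hosted; which one is
        # used is controlled purely by environment variables.
        llm = OpenAILLMService(
            model=os.environ.get("LLM_MODEL", "gpt-4o"),
            base_url=os.environ.get("OPENAI_BASE_URL"),  # None = OpenAI's default
            api_key=os.environ.get("OPENAI_API_KEY"),
        )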

      The integration work to set up the WebRTC connection, get voice dictation working seamlessly from anywhere, and type the result into any app took a long time to build out, and that's why I wanted to share it as open source.
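
      The "type into any app" part boils down to OS-level keystroke injection; here is a toy illustration with pynput (explicitly not this project's actual code, just the idea):

        from pynput.keyboard import Controller

        keyboard = Controller()

        def type_transcript(text: str) -> None:
            # Send the transcribed text to whichever window currently has focus.
            keyboard.type(text)
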

  • lrvick 11 hours ago
    Is there a way to do this with a local LLM, without any internet access needed?
    • kstonekuan 7 hours ago
      Yes, Pipecat already supports that natively, so it can be done easily with Ollama. I have also wired that into the environment variables via `OLLAMA_BASE_URL`.

      About Ollama in Pipecat: https://docs.pipecat.ai/server/services/llm/ollama

      Also, any provider they support can be onboarded in just a few lines of code.
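
      Concretely, a fully offline setup looks something like this (a sketch based on the docs above; the model name is a placeholder and the default base URL is an assumption):

        import os

        from pipecat.services.ollama import OLLamaLLMService

        # Fully local: Ollama serves the model from localhost, so no internet
        # access is needed once the model has been pulled.
        llm = OLLamaLLMService(
            model="llama3.1",  # any model pulled via `ollama pull`
            base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
        )
        # Other Pipecat providers swap in the same way with their own service class.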