Consider contributing to open-source Android LLM API clients such as GPT Mobile:
https://play.google.com/store/apps/details?id=dev.chungjungsoo.gptmobile&hl=en_US
https://github.com/Taewan-P/gpt_mobile
I tested it through its Ollama provider pointed at the Venice API, and it works very smoothly. Also, with slow LLMs such as DeepSeek, it keeps streaming output even when the screen is turned off, unlike the current Venice web app.
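In case it helps anyone trying the same setup: since GPT Mobile's providers speak the OpenAI-compatible chat-completions format, any client that lets you override the base URL should work the same way. A minimal sketch of the request body such a client would send; the base URL and model name here are assumptions for illustration, not confirmed Venice values:

```python
import json

# Assumed OpenAI-compatible base URL for Venice (check their API docs).
BASE_URL = "https://api.venice.ai/api/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Streaming is what lets slow models keep printing token by token,
        # including (in GPT Mobile) while the screen is off.
        "stream": True,
    }

# Hypothetical model name, used only to show the payload shape.
body = build_chat_request("deepseek-r1", "Hello")
print(json.dumps(body, indent=2))
```

The key point is `"stream": True`: the server then sends partial tokens as they are generated, so the client can render output incrementally instead of waiting for the full completion.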
Backlog
Feature Requests
API
11 months ago

cactus8668