February 4th, 2025

Public character URLs now embed the referral code of their creator in their links. Users who sign up after visiting a shared character will be credited to the character creator as a referral and earn the associated referral points.
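As a sketch, an embedded referral link could look like the following (the URL shape and `ref` parameter name are assumptions for illustration, not the actual Venice format):

```python
from urllib.parse import urlencode

def share_url(character_slug: str, referral_code: str) -> str:
    # Hypothetical URL shape: append the creator's referral code as a
    # query parameter so sign-ups via the link are credited to them.
    base = f"https://venice.ai/c/{character_slug}"
    return f"{base}?{urlencode({'ref': referral_code})}"
```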
For users with public characters who have cleared their local browser cache, the system now syncs those public characters back to the browser. Additionally, a manual sync button has been added so users can trigger this themselves.
Introduced a new feature that enables image editing on generated and uploaded images using text prompts. It sends the user prompt to an LLM which interprets what should be changed about the image, then creates a mask for the image with a segmentation model used for inpainting.
This is compatible with all models, with one caveat for Flux Custom: the Flux inpainting model is a distinct model that does not load LoRA weights, so Flux Standard and Flux Custom use the exact same inpainting model.
Users can upload an image and describe what they would like to change, or can use the "Inpaint" button under an existing image to change an already generated image.
This is solely offered in the UI at the moment. API support is coming.
This is a beta release and we're looking for your help finding any bugs. Please report them in the #ai-image channel on Discord.
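The editing flow described above can be sketched roughly as follows; `interpret`, `segment`, and `inpaint` are hypothetical stand-ins for the LLM, segmentation model, and inpainting model, not real Venice APIs:

```python
from typing import Callable

def edit_image(
    image: bytes,
    prompt: str,
    interpret: Callable[[str], str],
    segment: Callable[[bytes, str], bytes],
    inpaint: Callable[[bytes, bytes, str], bytes],
) -> bytes:
    # 1. The LLM interprets what the user wants changed.
    instruction = interpret(prompt)
    # 2. A segmentation model produces a mask over the affected region.
    mask = segment(image, instruction)
    # 3. The inpainting model regenerates only the masked region.
    return inpaint(image, mask, instruction)
```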
Fix the model selector on mobile to make the duplicate Deepseek models distinguishable.
Within Text Settings > Advanced Settings, we've added a Disable Venice System Prompt toggle. When this is enabled, ONLY the system prompts you have enabled will be passed to the model (i.e., none of Venice's prompts will be included).
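A minimal sketch of what the toggle changes (function and prompt names are assumptions, not the actual implementation):

```python
def compile_system_prompts(
    user_prompts: list,
    venice_prompts: list,
    disable_venice: bool,
) -> list:
    # With the toggle on, only the user's own enabled prompts are
    # passed to the model; Venice's defaults are skipped entirely.
    if disable_venice:
        return list(user_prompts)
    return list(venice_prompts) + list(user_prompts)
```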
Strip Deepseek's <think></think> content from the conversation history when compiling the conversation context for new prompts.
When copying messages generated from Deepseek using the Copy button in the UI, the <think></think> block will be omitted from the content copied to the clipboard.
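Both items amount to removing the reasoning block before reusing a message; a simple regex-based sketch (the exact implementation is an assumption):

```python
import re

# Matches a <think>...</think> block plus any trailing whitespace.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_think(text: str) -> str:
    # Remove Deepseek's reasoning block before reusing the message in
    # conversation context or copying it to the clipboard.
    return THINK_RE.sub("", text)
```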
Performance of the user profile endpoint has been improved, resulting in faster initial load times.
Include the filename of uploaded files in the context sent to the LLM. This gives the LLM a bit more detail to know what it's talking about and permits the LLM to reference the file in its response.
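For instance, an upload might be framed for the model like this (the exact format is an assumption):

```python
def file_context(filename: str, contents: str) -> str:
    # Prepend the filename so the model can refer to the file by name
    # in its response.
    return f"Attached file: {filename}\n\n{contents}"
```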
Enabled support for Coinbase On-Ramp wallet funding. This adds a button to purchase up to $500 in VVV using a debit card and have it immediately delivered to your wallet to stake on the platform. Additionally, if your wallet doesn't have enough ETH to pay for gas, the error message will offer the option to purchase $5 in ETH on Base so you can interact with the staking contracts.
Permit the connected wallet to configure the gas fees when interacting with the blockchain, addressing the need to increase gas during periods of network congestion.
Added both Deepseek models to the API. Preliminary VCU pricing is listed in the docs.
Support Deepseek 671 and Qwen Coder 2.5's full 128k context on requests made via the API.
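A sketch of a chat request against the OpenAI-compatible API (the model id here is an assumption; see the docs for the actual identifiers and pricing):

```python
import json

def build_chat_request(model: str, messages: list) -> bytes:
    # Request body for an OpenAI-compatible /chat/completions call;
    # long documents can now use the full 128k context on these models.
    payload = {"model": model, "messages": messages}
    return json.dumps(payload).encode("utf-8")

body = build_chat_request(
    "deepseek-r1-671b",  # assumed model id, check the docs
    [{"role": "user", "content": "Summarize this long document..."}],
)
```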
Increased limit on image prompt length in API from 1500 to 2048 characters for all models except SD3.5.
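The new limits, expressed as a validation sketch (helper names are illustrative; SD3.5 retaining its prior 1500-character limit is inferred from the item above):

```python
PROMPT_LIMITS = {"SD3.5": 1500}  # all other models now allow 2048
DEFAULT_LIMIT = 2048

def check_prompt_length(model: str, prompt: str) -> None:
    # Raise if the prompt exceeds the per-model character limit.
    limit = PROMPT_LIMITS.get(model, DEFAULT_LIMIT)
    if len(prompt) > limit:
        raise ValueError(f"prompt exceeds {limit} characters for {model}")
```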
Ensure mis-formatted requests that include an empty tools_calls: [] parameter on a message do not return HTTP 500 errors. This fixes an error reported in Discord.
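The shape of the fix might look like the following (the actual server-side handling is an assumption): drop the empty list before it reaches the inference layer, so the request is handled normally instead of failing with a 500.

```python
def normalize_message(message: dict) -> dict:
    # An empty tools_calls list carries no information; removing it
    # prevents downstream code from choking on the malformed field.
    if message.get("tools_calls") == []:
        return {k: v for k, v in message.items() if k != "tools_calls"}
    return message
```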
Fixed a bug where image inference servers would occasionally return black images to user prompts.
Lustify SDXL (the number one requested image model on Featurebase) is now in production and available via the API for Pro users.
Crypto.com added support for VVV in their on-chain wallet.
You can now pay for a Venice Pro subscription using VVV by selecting the "Pay with Crypto via Coinbase" option. Use your VVV rewards from staking to upgrade to Venice Pro.