Hi Venice team. I’m seeing consistent newline/whitespace corruption in the chat UI that appears to be independent of the model (it reproduces with GLM 4.7, DeepSeek v3.2, and GPT-5.2).
Issue: When a model is prompted to return text with exact line breaks (including blank lines, indentation, and bullets), the output is often collapsed into a single line or otherwise reformatted. Even fenced code blocks sometimes lose internal newlines. Conversely, in some cases the UI appears to inject extra blank lines (e.g., double-spacing every line).
Repro (simple):
Ask model: “Reply with exactly these 5 lines, preserving the blank line:” then provide:
LINE 1
LINE 2
(blank line)
LINE 4
END
Result: the UI frequently collapses or normalizes the line breaks.
Observed: Copy/pasting the displayed output into a plain text editor shows the line breaks are not preserved as expected, suggesting UI rendering or pre/post-processing is altering whitespace.
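To make the corruption unambiguous when reporting or triaging, the pasted output can be compared against the expected text with a small helper (hypothetical, not Venice code); repr() exposes the otherwise-invisible newline differences:

```python
def diff_whitespace(expected: str, actual: str) -> list[str]:
    """Report line-level differences, making whitespace visible via repr()."""
    report = []
    exp_lines, act_lines = expected.splitlines(), actual.splitlines()
    if len(exp_lines) != len(act_lines):
        report.append(f"line count: expected {len(exp_lines)}, got {len(act_lines)}")
    for i, (e, a) in enumerate(zip(exp_lines, act_lines), start=1):
        if e != a:
            report.append(f"line {i}: expected {e!r}, got {a!r}")
    return report

# What the repro prompt asks for (5 lines, one of them blank):
expected = "LINE 1\nLINE 2\n\nLINE 4\nEND"
# What the UI typically shows after copy/paste (everything on one line):
collapsed = "LINE 1 LINE 2 LINE 4 END"
for line in diff_whitespace(expected, collapsed):
    print(line)
```

Running this against the repro above reports a line-count mismatch (5 expected vs. 1 actual), which is the clearest signal that the newlines are being stripped somewhere between the model response and the rendered/copied text.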
New Submission
Bugs
20 days ago

An Anonymous User