Arbitrary context limit

The website arbitrarily limits the context window to 50 prompts (25 user prompts + 25 responses), which is ludicrously low in most cases and completely negates models with large context windows. It forces awkward workarounds: download the conversation, manually remove all the <think> blocks (which is extremely time-consuming), then feed it back into a new conversation as a single prompt, and repeat until you hit the model's actual context window.
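The manual <think>-block removal can at least be automated. A minimal sketch, assuming the exported transcript is plain text with literal `<think>...</think>` blocks (the function name is mine, not part of the site):

```python
import re

def strip_think_blocks(transcript: str) -> str:
    """Remove <think>...</think> blocks (including multi-line ones)
    so the exported conversation can be re-fed as a single prompt."""
    # re.DOTALL lets '.' match across newlines inside a block;
    # the non-greedy '.*?' stops at the first closing tag.
    return re.sub(r"<think>.*?</think>\s*", "", transcript, flags=re.DOTALL)

example = "Answer A\n<think>\ninternal reasoning\n</think>\nAnswer B"
print(strip_think_blocks(example))
```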

You can easily test this: ask the model to add a numbered prefix to each prompt, continue past 50 messages, then ask it for the earliest prefix it still recalls.
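What that test would reveal, if the site really keeps a sliding window of 50 messages, can be modeled locally. This is only a simulation of the suspected behavior, not the site's actual code; the 50-message window and prefix scheme come from the report above:

```python
from collections import deque

WINDOW = 50  # 25 user prompts + 25 responses, per the reported limit

# A deque with maxlen silently drops the oldest entries,
# mimicking a sliding context window.
history = deque(maxlen=WINDOW)

for i in range(1, 31):  # 30 exchanges: more messages than the window holds
    history.append(f"[P{i}] user prompt")
    history.append(f"[P{i}] assistant response")

# The earliest prefix still "recalled" is no longer P1:
earliest = history[0].split("]")[0] + "]"
print(earliest)  # → "[P6]"
```

With 30 exchanges (60 messages) and a 50-message window, the first five exchanges are gone, so the earliest recalled prefix is P6 rather than P1. That mismatch is exactly what the prefix test exposes.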

I doubt this is intentional; it may be a leftover from early code, which is why I'm flagging it as a bug.

Status: Backlog
Board: 🐛 Bugs
Tags: Chat
Date: 8 months ago
Author: Asinine_Conversation
