I alternate between Venice Uncensored and DeepSeek because the “fully” uncensored model is (by necessity, I suppose) poorly trained: it invents words, randomly switches languages, fails to adhere to system prompts, inserts random symbols, and so on (although perhaps it could eventually be given better training material and outfitted with reasoning).
I’m fully aware of the harm that can be done by AI with no safety features or ethical controls. The recent panic over “AI psychosis” (a serious topic that is, unfortunately, being turned into a joke by hyperbole designed to attract clicks and views) illustrates what happens when vulnerable people are given certain tools without an understanding of what those tools can, cannot, or should not do.
However, I believe AI needs to develop an informed consent model. This would:
Install safeguards which protect vulnerable people from misunderstanding what they are interacting with
Protect AI developers from liability when AI is misused and/or misunderstood
Take off the training wheels for responsible, adult users who can truly benefit from the one space where they do not have to edit, or even silence, themselves out of fear of judgment or misunderstanding.
Just like TOS and privacy policies, it would only require one document and a check box.
Please consider it if you can. Thank you.
New Submission
Feedback
22 days ago

An Anonymous User