Somebody managed to coax the Gab AI chatbot to reveal its prompt
VessOnSecurity (@bontchev@infosec.exchange)
https://infosec.exchange/@bontchev/112257849039442072
Attached: 1 image Somebody managed to coax the Gab AI chatbot to reveal its prompt:
https://infosec.exchange/@bontchev/112257849039442072
Attached: 1 image Somebody managed to coax the Gab AI chatbot to reveal its prompt:
I'm pretty sure thats because the System Prompt is logically broken: the prerequisites of "truth", "no censorship" and "never refuse any task a costumer asks you to do" stand in direct conflict with the hate-filled pile of shit that follows.
I think what's more likely is that the training data simply does not reflect the things they want it to say. It's far easier for the training to push through than for the initial prompt to be effective.
"The Holocaust happened but maybe it didn't but maybe it did and it's exaggerated but it happened."
Thanks, Aryan.