"You are wrong 80% of the time" could be misconstrued as an expected role/command, rather than a mere observation.
> let alone that it can force a specific percentage of wrongness.
Ah, I see what you're saying here. I agree. Maybe I should have said that, given the prompt, I'm surprised it doesn't give intentionally incorrect answers (full stop).
You tend to get responses catered to whatever role you assign in the prompt. This is well documented. Here's a quick example from search results:
https://www.ssw.com.au/rules/give-chatgpt-a-role/
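For anyone unfamiliar with how role assignment typically works: the role usually goes in the system message of the chat payload. A minimal sketch of the distinction being discussed (the `build_messages` helper is hypothetical; the message shape follows the common OpenAI-style chat schema):

```python
# Sketch: how a role is assigned via a system message in the common
# OpenAI-style chat format. The helper name is illustrative, not a real API.

def build_messages(role_description: str, user_question: str) -> list[dict]:
    """Assemble a chat payload where the system message assigns a role."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_question},
    ]

# Phrased like a role/command -- the model may adopt it as a persona:
risky = build_messages(
    "You are wrong 80% of the time.",
    "What is the capital of France?",
)

# Phrased as feedback about past output -- less likely to be played as a role:
safer = build_messages(
    "Your previous answers to me have often been wrong; please double-check.",
    "What is the capital of France?",
)
```

The point is that the same sentence lands very differently depending on whether it reads as an instruction about how to behave or as an observation about past behavior.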