> Isn't your input your confidence that GPT-4 gives the correct answer
You may be right that that’s the intent; however, what’s the point (other than collecting data about user confidences)? If I enter 0.3 and GPT-4 provides a correct answer, that doesn’t mean the 0.3 was somehow wrong.
In that case 0.3 would be more wrong than 0.4 and less wrong than 0.2. The closer your predictions are to reality over a bunch of questions, the better you understand reality.
You're right. But if you enter 0.3 on average over 28 questions and the actual number of true answers differs by a lot from 8 (28 × 0.3 ≈ 8.4), then you have learned that your general sense of GPT-4's abilities is uncalibrated.
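The calibration check described above can be sketched in a few lines of Python. The outcomes list here is made up purely for illustration; the Brier score (mean squared error between stated confidence and actual outcome) is one standard way to quantify "closer to reality is better":

```python
# Hypothetical example: checking whether a flat 0.3 confidence over
# 28 questions was calibrated. All numbers are illustrative.

predictions = [0.3] * 28             # stated confidence per question
outcomes = [1] * 14 + [0] * 14       # 1 = GPT-4 answered correctly (invented data)

expected = sum(predictions)          # expected number of correct answers: 8.4
observed = sum(outcomes)             # actual number of correct answers: 14

# Brier score: average of (confidence - outcome)^2; lower is better.
brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

print(expected, observed, brier)
```

With these invented outcomes, 14 correct answers against an expected 8.4 would suggest the 0.3 confidences were systematically too low.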