That's pretty subjective and could be leveled against almost anything behavioral or psychological.
The thing about the Big Five is that it has surfaced in all sorts of contexts. You can argue that the number five per se is not well justified (as opposed to four, six, or seven, for example), but if you take enough ratings of a person, some variant of those five dimensions will probably work as a reasonable summary of the ratings, and they will account for a substantial chunk of the predictive variance. Notably, if you take other types of variables, like clinical symptom ratings or diagnoses, you start to see roughly similar attributes become prominent.
The Big Five is a descriptive model of how people perceive others. There's a lot of evidence for certain mechanistic processes being heavily involved in some of the dimensions (e.g., positive emotion in extraversion, negative emotion in neuroticism, behavioral control in conscientiousness, etc.), but I'm not sure the original idea behind the Big Five was mechanistic -- it was a hypothesis about major dimensions that could summarize social perceptual data. It's like classification in biology in the pre-DNA era: people have some ideas of how things go together and find them useful for organizing descriptions and measurements.
It's as if you did unsupervised deep-learning modeling of all the videos involving humans you could find on the web, and found that their classification could be accounted for by five major vectors almost all the time, regardless of sampling. Wouldn't you want to know that?
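To make the "a few vectors summarize many ratings" idea concrete, here's a minimal, purely illustrative sketch: synthetic rating data is generated from five hidden factors (all numbers are made up for demonstration), and PCA via SVD recovers the fact that about five components account for most of the variance. This is a toy stand-in for the factor-analytic reasoning behind the Big Five, not an implementation of any published study.

```python
# Illustrative sketch only: synthetic "ratings" generated from 5 latent
# factors, then PCA (via SVD) to show that ~5 components recover most
# of the variance. All parameters here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_people, n_items, n_factors = 1000, 40, 5

# Each person gets 5 latent trait scores; each rating item loads on them.
latent = rng.normal(size=(n_people, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
noise = 0.3 * rng.normal(size=(n_people, n_items))
ratings = latent @ loadings + noise

# PCA: center the data, take the SVD; squared singular values are
# proportional to the variance explained by each component.
centered = ratings - ratings.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(f"variance explained by top 5 components: {explained[:5].sum():.2f}")
```

With low noise, the top five components capture nearly all the variance; crank up the noise term and the picture gets murkier, which mirrors the real debate about whether five is the "right" number.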
Many other measures are very well justified mechanistically but lack ecological validity, in the sense that they are predictively narrow and do not generalize well outside of laboratory contexts. That's fine; there's a tension between predictive bandwidth and depth. But if you want any kind of rating of a human being's behavior and experience, you enter at your own risk if you think you'll measure something radically different from the Big Five (or something subsumed by cognitive measures). Can you do it? Sure, but a lot of the time, no (see: grit).
What constitutes an adequate justification for the use of an operational definition within a model is subjective indeed. However, there is usually a point in the life of a theory where the gathered evidence is sufficient for a scientific consensus to form that the operational definition is justified. I’m not aware that this has happened in the 40-odd-year history of Big Five personality theory.
The five personality traits may be overarching within the field of psychometrics, and they may indeed be useful for describing behavior; however, you still need to justify that said behavior is not more easily described using different models, and this is where personality psychologists usually fail in justifying their operands.
Works criticizing the model range from those using totally different constructs (such as priming, positive reinforcement, universal grammar, brain dopamine levels, socio-economic status, etc.)—which don’t rely on psychometrics at all—to those claiming that the behavior psychometricians predict is actually not that useful (e.g., predicting ‘high confidence’ is not that useful if ‘high confidence’ does not result in a significant behavior that isn’t better predicted without made-up operands).
Suppose you were an early astronomer and you constructed the notion of ‘epicycles’ to simplify your model of planetary motion. You may use these ‘epicycles’ to make your predictions; however, you may not use a successful prediction to justify the existence of epicycles. Your epicycles may be useful until someone comes along and deems them unnecessary, since planetary motion is better described using elliptical orbits.
Of course, this could go the other way, as was the case with particle physics and the atom. However, given the amount of research, the success of rival theories, and the failure of psychometricians to make useful predictions outside of their narrow field that aren’t better explained by alternative theories, I have strong doubts that the Big Five personality traits (and any theory of personality based on psychometrics, for that matter) are anything but pseudoscience.
> Suppose you were an early astronomer and you constructed the notion of ‘epicycles’ to simplify your model of planetary motion. You may use these ‘epicycles’ to make your predictions; however, you may not use a successful prediction to justify the existence of epicycles. Your epicycles may be useful until someone comes along and deems them unnecessary, since planetary motion is better described using elliptical orbits.
I couldn't comprehend the discussion until I read this metaphor. Thanks for the detailed explanation.