RIP #6: Integration

(Originally posted November 14, 2023)


Here’s the trouble with how we think about measuring abstract psychological things that go beyond the physical processes of nervous system functioning — none of these psychological things are uniform, let alone unified into actual unitary constructs. Much of the time, we treat them as unified “things” for the purpose of actually getting research done. “Happiness” is a thing that we want to measure or influence in some mechanistic way. “Social connections” are things that we want to measure or influence in some mechanistic way. Whatever the construct might be that we’re studying, we have to treat it as a “thing” and operationalize it in such a way that we can quantify it through some form of data generation mechanism. We leave it to the hardcore specialists in each domain to worry about the details of what the different “pieces” of something like happiness might be. Life is too short to worry about the little nuances that might change our fundamental interpretation of what we mean by “happiness” — we’re trying to operate at a functional, human level, right?

Zooming out to the human level, Robert Kurzban uses a “smartphone apps” metaphor to describe human psychology. To paraphrase, he describes the human mind as a smartphone that is running lots of apps concurrently. Some of them may interact, but really, there is no top-level “governor” that ensures that they are logically unified in a purposeful way, all of them behaving in a fashion to accomplish a unitary goal. The human mind is just a collection of… apps, each performing different operations and functions to accomplish different goals.

Zooming back in to the level of individual constructs, then — can we describe things like “emotions” or “social connections” or “decision-making” in a similar way? If we aren’t too picky about perfectly translating the metaphor to this level, I’d say that the answer is a clear “yes.” When measuring a construct using various methods like self-report questionnaires, behavioral measures, or text, it genuinely matters which “mental app” we’re capturing. Different methods offer diverse information about the construct, and they might not integrate seamlessly.

Let’s drop the metaphors and state it plainly. There’s a substantial body of work demonstrating that different measurement methods for a construct often correlate poorly or not at all. This goes beyond issues like “shared method variance” and underscores the fact that the constructs we aim to study are often inadequate abstractions of objective reality.
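To make the statistical point concrete, here is a minimal illustrative sketch (not from either paper — all numbers are hypothetical). It simulates two measurement methods, say a self-report scale and a behavioral task, that each tap the same latent construct only weakly while being dominated by method-specific variance. Even with both measures genuinely reflecting the construct, their observed correlation comes out small:

```python
import random
import statistics

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 1000
# Hypothetical latent "construct" score for each simulated person.
latent = [random.gauss(0, 1) for _ in range(n)]

# Two measurement methods: each loads only weakly (0.3) on the latent
# construct and is dominated by its own method-specific noise — each
# method is capturing a different "app," so to speak.
self_report = [0.3 * z + random.gauss(0, 1) for z in latent]
behavioral  = [0.3 * z + random.gauss(0, 1) for z in latent]

# The cross-method correlation lands near 0.3 * 0.3 / 1.09 ≈ 0.08 —
# both are "measures of the construct," yet they barely converge.
print(round(pearson_r(self_report, behavioral), 2))
```

The loadings and noise levels here are invented for illustration; the real papers below document this pattern empirically across implicit self-esteem measures and emotion measures.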

This is one reason, among many, why the details matter. Our intuitions about concepts like “happiness” or “health” are often incorrect. Even the most accomplished scientists are frequently wrong, highlighting the importance of collaborating with specialists who have dedicated their lives to studying specific constructs. They possess valuable insights into when, why, and how different aspects of a construct may be captured and how all the different pieces fit together.

These insights have serious implications for how we conduct research on human-level factors. The traditional approach of treating abstract psychological constructs as uniform entities for measurement oversimplifies the intricate nature of the human mind. Recognizing that these constructs are more like a collection of distinct “apps” running concurrently forces a reevaluation of research methodologies.

Embracing this complexity challenges the status quo and urges us to delve into the intricacies of individual constructs, considering the specific nuances captured by diverse measurement tools. And that, my friends, is the topic of today’s RIP. Today is a two-fer, as they say — two papers for the price of one. These are both hugely important papers that highlight how various measures of a single “construct” often don’t fit together in any clear, coherent way. Sometimes it’s an issue with the tools we’re using, and sometimes it’s far bigger and more important — the non-uniformity of a construct itself.

These are lofty topics, and ones that even the best psychological researchers struggle to come to terms with — but hey, this is why studying the human condition is so freaking hard. But don’t be discouraged — if this stuff were easy, we’d have answered these questions thousands of years ago. The complexity is what makes this stuff so darn fascinating — and what keeps giving us new surprises at every turn.

And, with that, please do enjoy reading not one, but two wonderful papers about the lack of convergence between measurement methods:

Bosson, J. K., Swann, W. B., & Pennebaker, J. W. (2000). Stalking the perfect measure of implicit self-esteem: The blind men and the elephant revisited? Journal of Personality and Social Psychology, 79(4), 631–643. https://doi.org/10.1037/0022-3514.79.4.631

Mauss, I. B., & Robinson, M. D. (2009). Measures of emotion: A review. Cognition & Emotion, 23(2), 209–237. https://doi.org/10.1080/02699930802204677
