Chair for Methods and Psychological Assessment

Research on Methods and Response Biases

The reliable and valid assessment of psychological characteristics is a cornerstone of both research and practice in psychology. Assessments are often based on individuals' self-reports, which may be collected orally in an interview or in written form, for example as responses to rating scales. Self-reports via rating scales are a particularly efficient way to collect information on a wide range of states, traits, attitudes, and similar individual characteristics. However, decades of research have consistently shown that rating scales are susceptible to a plethora of response biases, such as extreme responding and socially desirable responding.

Socially Desirable Responding & Faking

Socially desirable responding (SDR) is presumably one of the most widespread response biases in psychological assessment. While all response biases can substantially impair the quality of responses, SDR is believed to have a stronger impact because of its more intentional nature. Faking is a subcategory of SDR. This form of bias is thought to be activated by situational demands (e.g., high stakes) and personal characteristics, and it manifests itself in alterations of self-presentation that are expected to serve personal goals. Faking can occur in two directions: faking good (presenting oneself as better than one really is, i.e., simulation) and faking bad (presenting oneself as worse than one really is, i.e., dissimulation). The two directions typically occur under different situational conditions, but both have detrimental effects on the validity of decisions based on psychological measurements.

In our research, we compare the standard administration of an assessment with faking conditions (good and bad) for a multitude of traits. These include, but are not limited to, dark personality traits, general personality traits (Big Five, HEXACO), and characteristics targeted by neuropsychological assessments. In addition to estimating the size of faking effects under different conditions as well as their covariates, we study and design approaches to reduce the opportunity to fake in psychological assessments or to reduce the impact that faking has on responses. Moreover, we also scrutinize approaches to coach faking behavior effectively.

Interested in collaborating with us? Considering a Bachelor's or Master's thesis in Psychology or an internship in this area? Great! Please contact us: Prof. Dr. Ralf Schulze & Dr. Markus Jansen

Forced-Choice Modeling

The forced-choice (FC) format in psychological assessment is an alternative to more widespread response formats, most notably rating scales. The resurgence of the FC format over the last decade is tied to its promise of reducing the impact of response biases on the validity of responses when FC is used instead of ratings.

Instead of asking respondents to state their extent of (dis)agreement with isolated statements, as is typically done with ratings, the FC format requires them to compare a set of at least two statements, for example with respect to how well each describes them. Although the FC format has a long history in psychological assessment, its usefulness for interindividual comparisons was long severely restricted by the limited interpretability of its ipsative scores. Methodological advances in modeling data from FC assessments over the last decade have enabled interpretations that go beyond the confines of ipsative scores. At the center of these developments is Thurstonian Forced-Choice Measurement.
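A key step in the Thurstonian approach is recoding each block of ranked statements into binary pairwise comparisons before a model is fitted. The following minimal Python sketch illustrates this recoding (the function name and rank convention are illustrative assumptions, not part of any specific software package):

```python
from itertools import combinations

def rank_to_binary(ranks):
    """Recode a respondent's ranking of the items in one forced-choice
    block into binary pairwise outcomes.

    ranks[i] is the rank given to item i (1 = "most like me").
    Returns a dict mapping item pairs (i, j) to 1 if item i was
    preferred over item j, and 0 otherwise.
    """
    outcomes = {}
    for i, j in combinations(range(len(ranks)), 2):
        outcomes[(i, j)] = 1 if ranks[i] < ranks[j] else 0
    return outcomes

# A block of three statements ranked 2, 1, 3 yields three comparisons:
print(rank_to_binary([2, 1, 3]))  # {(0, 1): 0, (0, 2): 1, (1, 2): 1}
```

A block of k statements thus yields k(k-1)/2 binary outcomes, which serve as the dependent variables in Thurstonian factor-analytic or IRT models.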

Our research activities focus on the improvement of Thurstonian FC Measurement. This includes the extension of the modeling approach via confirmatory factor analysis and item response theory. Our research efforts also address the technical applicability of the FC method and its usability in practical applications. In the latter context, FC questionnaires are designed for the assessment of various traits, including dark personality and general personality factors.

Interested in collaborating with us? Considering a Bachelor's or Master's thesis in Psychology or an internship in this area? Great! Please contact us: Prof. Dr. Ralf Schulze & Dr. Markus Jansen

Meta-Analysis

Meta-analysis is a systematic method for summarizing and integrating empirical findings. It is usually conducted with the goals of characterizing the state of the art in a field of study and of providing overall effect estimates. Hence, a meta-analysis typically comprises both a systematic literature review on the topic of interest and methods for (re)analyzing the many primary results that research articles report on that topic. Such syntheses often allow for a broader discussion of effects reported in the scientific literature and of their potential moderators, even in the absence of the primary data. Additionally, an analysis of publication bias can be a valuable byproduct of meta-analyses. Overall, meta-analyses play a central role in the scientific evaluation of psychological research, but they also serve as an empirical foundation for decision makers seeking to arrive at reasonable policy decisions.

There is not a single meta-analytic approach, though. Meta-analyses include procedures to aggregate and estimate effect sizes, to quantify the heterogeneity of results from primary studies, and possibly to analyze the latent structure underlying the meta-analytic data. Methods differ, for example, in the intended inference, the effect size metric (e.g., r or d), the assumed model (fixed, random, or mixed effects), and the statistical approach to effect size aggregation. Hence, a large number of decisions must be made along the way to arrive at empirically founded summary statements for a field of study.
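The core aggregation and heterogeneity steps mentioned above can be illustrated with a minimal, self-contained Python sketch of the fixed-effect (inverse-variance) estimator together with Cochran's Q and the I² index; the function name and the study values are hypothetical:

```python
import math

def meta_fixed_effect(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate,
    plus Cochran's Q statistic and the I^2 heterogeneity index.

    effects:   per-study effect sizes (e.g., Cohen's d)
    variances: per-study sampling variances
    """
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # % of variation beyond chance
    se = math.sqrt(1.0 / sum(w))                          # standard error of pooled effect
    return pooled, se, q, i2

# Three hypothetical studies (effect sizes and sampling variances):
d, se, q, i2 = meta_fixed_effect([0.30, 0.45, 0.25], [0.02, 0.03, 0.01])
print(round(d, 2), round(q, 2))  # 0.3 1.0
```

A random-effects analysis would additionally estimate a between-study variance component (e.g., via the DerSimonian-Laird method) and add it to each study's variance before weighting, which is one of the modeling decisions discussed above.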

Our activities include research on the various methods and procedures that are part of the decision-making process in meta-analyses. This covers questions of how best to aggregate effect sizes, the importance of the methodological quality of the primary studies included in a meta-analysis, and comparisons of different ways to estimate and analyze between-study heterogeneity. Of course, we also conduct meta-analyses on various topics.

Interested in collaborating with us? Considering a Bachelor's or Master's thesis in Psychology or an internship in this area? Great! Please contact us: Prof. Dr. Ralf Schulze, Dr. Markus Jansen & Dr. Maike Pisters
