Upon arriving at Missouri State University, I founded the Deciphering Outrageous Observations and Modeling (DOOM) lab, which has included more than ten graduate and thirty undergraduate students. My research falls into two primary domains, described in detail below, and has included many collaborative efforts throughout the years.
Psycholinguistics. My cognitive research focuses broadly on psycholinguistics and memory, particularly on the statistical properties of word relationships. Overall, I seek to understand how language is represented in memory by adding to and examining available linguistic databases (e.g., the Semantic Priming Project, the English Lexicon Project, SUBTL), exploring judgments of memory relations, and adapting traditional linguistic techniques to applied psychological questions in real-world contexts (such as blog writing or political speeches). I have worked to refine these freely available data repositories by publishing an update to the semantic feature production norms and conducting several follow-up studies on a second set of linguistic variables. Participation in multi-site, multi-lab projects such as these has shaped my view of the applicability of psycholinguistics and of the role free resources play in facilitating scientific advancement. To that end, I have created a free tool for searching standardized, freely available stimuli, which promotes consistency in their use across research studies. My goal is to foster more cogent theory concerning the structure of memory, semantic networks, metacognition, and linguistic judgments. In addition to these areas, I examine the nature of the relationship between language and other psychological constructs (e.g., psychopathology, meaning in life, moral foundations theory). Combining psycholinguistic research with statistics allows me to study language and memory as a complex system using big-data techniques.
Applied Statistics. Tukey said, “the best thing about being a statistician is that you get to play in everyone’s backyard,” and I wholeheartedly embrace that philosophy in my professional efforts. My passion for quantitative appraisals of psycholinguistics has led me to applied study in statistics more generally. In particular, I am interested in scale development (including clinical measurement), actuarial evidence for alternatives to null hypothesis testing, standards for statistical reporting, and the utility and metrics of various effect size indices. Scale development engages me because emerging techniques in advanced exploratory and confirmatory factor analysis allow study beyond traditional reliability coefficients into the determination of scale structure, internal consistency, and invariance across groups. This work has led to numerous fruitful collaborations and grant opportunities, and the results have important and interesting applications not only to structural equation modeling as a statistical tool but also to the fields that use these questionnaires as diagnostic instruments.
I am also interested in the dissemination of statistical techniques and reporting standards for the field. These interests transcend any particular approach and concern consensus about what constitutes acceptable evidence in a given context. The APA Task Force’s call for effect size reporting is over 15 years old, yet reporting rates remain very low at most journals, and no standard effect size metric appears to have emerged. The speed at which paradigm shifts occur, particularly in complex statistical reporting, fascinates me and has shaped how I think about where my work fits into the broader framework of psychological research. I have concluded that change does not come simply because superior techniques exist (and we certainly possess techniques superior to null hypothesis methods). Rather, it is necessary to provide freely accessible, reliable tools with simple interfaces that allow researchers to move to more advanced methods.
To that end, I have conducted grant-funded Monte Carlo comparisons of various methods of evaluating statistical significance and calculating effect sizes, which I built into a simple web application, as well as an R package, for use by researchers who may be less familiar with these techniques. This work illustrated that there is no “magic line in the sand” marking what constitutes statistical evidence (a point I strongly emphasize in my teaching). These issues are of great significance to the field, particularly for synthesizing large bodies of research (e.g., meta-analysis and other compilation methodologies) and for replication, and I intend to continue working in this domain by exploring ways to improve statistical calculation and reporting, along with alternatives that will make the importance of our research clear to outside audiences.
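The “magic line in the sand” point lends itself to a minimal Monte Carlo sketch. The example below is illustrative only, written in Python for brevity rather than drawn from the funded study or its R package, with parameters I chose for demonstration: holding a modest true effect (Cohen’s d = 0.3) fixed, the average p-value drifts below any fixed significance threshold as sample size grows, while the effect size estimate stays essentially constant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def simulate(n_per_group, true_d=0.3, n_sims=2000):
    """Mean p-value and mean d across simulated two-group studies
    with a fixed true effect of `true_d` standard deviations."""
    ps, ds = [], []
    for _ in range(n_sims):
        a = rng.normal(true_d, 1.0, n_per_group)  # group with the true effect
        b = rng.normal(0.0, 1.0, n_per_group)     # comparison group
        _, p = stats.ttest_ind(a, b)
        ps.append(p)
        ds.append(cohens_d(a, b))
    return np.mean(ps), np.mean(ds)

# The same underlying effect yields very different p-values at different n,
# while the effect size estimate barely moves.
for n in (20, 80, 320):
    p_mean, d_mean = simulate(n)
    print(f"n per group = {n:3d}  mean p = {p_mean:.3f}  mean d = {d_mean:.2f}")
```

Because the p-value confounds effect magnitude with sample size, any fixed cutoff can be crossed simply by collecting more data; the effect size, by contrast, estimates the same quantity at every n, which is why effect size reporting matters for meta-analysis and replication.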