Abstracts
Kate Anderson
Visiting Assistant Professor, Department of Informatics and Networked Systems, University of Pittsburgh
Thursday, February 25, 2021, 9:20 a.m.
“SKILL NETWORKS AND MEASURES OF COMPLEX HUMAN CAPITAL”
Traditional economic models are born out of the manufacturing industry, measuring worker skills on a one-dimensional scale (e.g., speed or years of training). However, these models are less helpful for understanding knowledge-based production, where skills are used and valued in combination. Here, I present a network-based method for characterizing worker skills. I construct a human capital network in which nodes are skills and two skills are connected if a worker has both. A worker’s human capital can then be measured according to the position of her skills on the network. Using data gathered from an online freelance labor market, I illustrate how network-based measures can be used to understand the role of skill diversity and synergy in wages and show that network-based measures of human capital capture variation in wages beyond that captured by the skills individually.
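As a minimal sketch of the construction described above, the following Python snippet builds a skill co-occurrence network from hypothetical worker profiles and scores each worker by the network position of her skills. The worker data and the choice of eigenvector centrality as the position measure are illustrative assumptions, not necessarily those used in the talk.

```python
# Sketch of the skill co-occurrence network described in the abstract.
# Worker profiles are hypothetical, and the centrality-based score is
# one plausible network-based measure, not the talk's exact method.
import itertools
import networkx as nx

workers = {
    "w1": {"python", "statistics", "writing"},
    "w2": {"python", "machine learning", "statistics"},
    "w3": {"writing", "editing"},
}

G = nx.Graph()
for skills in workers.values():
    # Connect every pair of skills co-held by the same worker.
    for a, b in itertools.combinations(sorted(skills), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# One possible measure: a worker's human capital as the mean eigenvector
# centrality of her skills, capturing how "connected" her skill set is.
centrality = nx.eigenvector_centrality(G, weight="weight")
human_capital = {
    w: sum(centrality[s] for s in skills) / len(skills)
    for w, skills in workers.items()
}
print(human_capital)
```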
Biography
Kate Anderson is a visiting assistant professor in the Department of Informatics and Networked Systems at the University of Pittsburgh. Her research is interdisciplinary and lies at the intersection of economics, network science and computational modeling. In her work, she looks at how skills are used and valued in our modern, knowledge-based economy. She uses networks to capture the interactions between skills in a labor market and create measures of worker human capital specific to a particular labor market. In so doing, she addresses a persistent problem: How do we measure human capital in a world where skills are used in combination?
Margaret Beier
Professor, Department of Psychological Sciences, Rice University
Wednesday, February 24, 2021, 11:20 a.m.
“BIG DATA AND WORKFORCE TRAINING AND DEVELOPMENT”
The aging of the global workforce, extended life expectancies and labor shortages will necessitate that many workers remain engaged at work long past the years normally slated for retirement. Moreover, as people age, they develop idiosyncratic knowledge profiles and changes in abilities that can make learning new information difficult. Technological affordances, such as those available in interactive and adaptive training environments, promise to help workers acquire the skills necessary to remain engaged in the workforce. This talk will discuss how big data can facilitate the development of customized training and development activities (both informal and formal) from needs assessment through evaluation.
Biography
Margaret Beier is a professor of industrial and organizational psychology at Rice University. She received her B.A. from Colby College and her M.S. and Ph.D. degrees from the Georgia Institute of Technology. Beier’s research examines the influence of individual differences in age, gender, abilities, and motivation as related to success in educational and organizational environments. In particular, her work examines the cognitive, attitudinal and motivational determinants of job and training performance, as well as the influence of these factors on lifelong development and learning. Her work has been funded by the National Science Foundation. She is a fellow of the Society for Industrial and Organizational Psychology and the Association for Psychological Science.
Frank A. Bosco
Director, metaBUS.org, Associate Professor, School of Business, Department of Management and Entrepreneurship, Virginia Commonwealth University
Wednesday, February 24, 2021, 9 a.m.
“USING TAXONOMIES TO CURATE PSYCHOLOGY’S RESEARCH FINDINGS”
Bosco will discuss the development of a hierarchical taxonomy of nearly 5,000 constructs/variables reported in the I-O psychology scientific space. The taxonomy begins with very broad branches (e.g., attitudes; behaviors), which unfold into finer levels (e.g., behaviors includes turnover, performance, absenteeism, and many others). He will also discuss how the taxonomy overcomes the “vocabulary problem,” whereby many distinct terms refer to the same construct (e.g., turnover intention, conviction of decision to quit, perceived chances of leaving). He will describe the taxonomy’s role in constructing a database of over 1.1 million I-O psychology findings, each of which is manually classified by experts according to the taxonomy. He will then show how one may query the database (called metaBUS, a portmanteau of meta-analysis and omnibus) using the taxonomic classifiers to generate instant meta-analyses and visualizations of meta-analytic networks. He will conclude with limitations and future directions.
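To make the “instant meta-analysis” idea concrete, here is a toy Python sketch. The taxonomy, the findings and the fixed-effect Fisher-z averaging are all hypothetical illustrations under my own assumptions; this is not the metaBUS API.

```python
# Toy sketch: a tiny hierarchical taxonomy plus a simple fixed-effect
# meta-analysis of correlations via Fisher's z. Hypothetical data only;
# not the metaBUS implementation.
import math

taxonomy = {
    "behaviors": {"turnover": {}, "performance": {}, "absenteeism": {}},
    "attitudes": {"job satisfaction": {}, "commitment": {}},
}
print(list(taxonomy["behaviors"]))  # finer levels under a broad branch

# Each finding: (construct_x, construct_y, correlation r, sample size n)
findings = [
    ("job satisfaction", "turnover", -0.22, 150),
    ("job satisfaction", "turnover", -0.18, 300),
    ("job satisfaction", "turnover", -0.30, 90),
]

def meta_analyze(construct_x, construct_y, findings):
    """Fixed-effect mean correlation via Fisher's z, weighted by n - 3."""
    zs, ws = [], []
    for x, y, r, n in findings:
        if {x, y} == {construct_x, construct_y}:
            zs.append(math.atanh(r))
            ws.append(n - 3)
    z_bar = sum(z * w for z, w in zip(zs, ws)) / sum(ws)
    return math.tanh(z_bar)

print(meta_analyze("job satisfaction", "turnover", findings))
```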
Biography
Frank Bosco is a member of the Department of Management and Entrepreneurship at Virginia Commonwealth University. His research spans the areas of human resource management, organizational behavior and organizational research methods. He is especially interested in employee staffing (e.g., employee selection), cognitive ability testing, meta-analysis, big data, open science and approaches for summarizing entire scientific literatures. His research appears in outlets such as Journal of Applied Psychology, Journal of Management, Organizational Research Methods, Personnel Psychology, and Science. Bosco is director of metaBUS.org, a winner of the 2013 National Endowment for the Humanities’ Digging into Data Challenge that has been funded by the National Science Foundation, the SHRM Foundation, the VCU Presidential Research Quest Fund and others. The project enables over 1,000 users (researchers and practitioners) to make sense of more than 1,100,000 research findings by navigating an easy-to-understand map of constructs and conducting instant meta-analyses on virtually any topic in the scientific space.
Rodica Damian
Assistant Professor of Social Psychology, University of Houston
Wednesday, February 24, 2021, 9:20 a.m.
“PERSONALITY DYNAMICS IN THE WORK CONTEXT”
Personality traits are strong predictors of a wide range of work outcomes. However, they are not as fixed as previously thought and can change across the lifespan, both slowly (via long-term developmental processes) and quickly (via intervention). Work context plays a central role in personality change across the lifespan and, in turn, personality change has downstream consequences for work outcomes. To better understand personality dynamics and person-environment transactions in the work context, big data are necessary, including numerous within-person assessments across time and multimethod assessments. Understanding personality dynamics in the work context using big data is important because we now know that personality traits are actionable: relatively cheap interventions or training programs may be feasible. Thus, both employees and employers stand to benefit greatly from a deeper understanding of the dynamic interplay among personality traits, work contexts and outcomes.
Biography
Rodica Damian is an assistant professor of psychology at the University of Houston and director of the Personality Development and Success Lab. For more information, visit https://www.damianlab.com. Damian’s research program is dedicated to understanding the role of personality dynamics in career success and well-being. In recognition of her research, she has been awarded the Rising Star designation from the Association for Psychological Science, the Frank Barron Award from Division 10 of the American Psychological Association and the Best Paper Award from the Journal of Research in Personality. Her research has been covered in national and international media, including Good Morning America, The Washington Post, Forbes, TIME, Scientific American, BBC, The Atlantic, and The Guardian. She serves on three editorial boards, including Perspectives on Psychological Science, on the Professional Development Committee of the Society for Personality and Social Psychology, and on the executive board of the Association for Research in Personality.
James A. Grand
Assistant Professor, Social, Decision and Organizational Sciences Program, Department of Psychology, University of Maryland
Friday, February 26, 2021, 9 a.m.
“WHITHER THEORY IN A BIG DATA WORLD?”
As all the sciences continue hurtling through the big data era, it seems the accumulation of more and better theory has been left in the dust. Indeed, one could argue that much of the theory in the organizational sciences appears antiquated and slow compared to the insights generated through big data analytics. Yet the unfortunate irony is that without theory, big data insights run the risk of simply becoming big data blips: patterns without meaning or generalizability and with limited utility for understanding and improving psychological phenomena in organizations. In this talk, Grand will consider ways in which theory and big data can and should inform one another. Important synergies between developing better theory and big data analytics will be explored, as well as possible ways to change our practices in these areas to advance workforce science.
Biography
James A. Grand is an assistant professor in the Social, Decision and Organizational Sciences Program of the Department of Psychology at the University of Maryland. His primary interests concern knowledge-building, decisionmaking, collaboration and performance at the individual and team levels. He also conducts research on the interaction between situational factors, judgment/decisionmaking and information processing mechanisms on personnel training and testing/assessment outcomes. A significant focus of his work involves the use of computational and experimental methods to explore how these dynamic processes operate within and interact between individuals to create, change, and/or maintain emergent outcomes at both individual and collective levels over time. In addition to his substantive interests, he has methodological interests in computational modeling, modeling longitudinal processes and Bayesian statistics.
Kevin J. Grimm
Professor of Psychology, Department of Psychology, Arizona State University
Thursday, February 25, 2021, 9:40 a.m.
“ON THE ISOLATION OF LAGGED WITHIN-PERSON EFFECTS WITH LONGITUDINAL PANEL DATA”
The cross-lagged panel model (CLPM) is often fit to longitudinal panel data to examine lead-lag associations; however, its utility has been called into question because of its inability to distinguish between-person associations from within-person effects. This has led researchers to propose alternative specifications, including the Random-Intercept CLPM (Hamaker et al., 2015) and the Latent Curve Model with Structured Residuals (Curran, Howard, Bainter, Lane, & McGinley, 2014). These models, in addition to the First Difference Model, a model from economics, are applied to longitudinal panel data to examine lead-lag associations between reading and mathematics. The results from the models suggest reciprocal associations; however, the strength of these lagged associations depended on the modeling approach. Monte Carlo simulation studies were then conducted to investigate how well these models capture within-person lead-lag associations when between-person associations are present. Results highlight the challenges all models face when attempting to isolate within-person effects.
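A minimal Monte Carlo sketch of the confound at issue follows, under my own simplified design rather than the talk’s exact one: data are generated with correlated between-person intercepts but no true within-person cross-lagged effects, yet a naive pooled cross-lag regression still recovers a nonzero “effect.”

```python
# Assumption-laden sketch: correlated person-level intercepts, within-
# person AR(1) dynamics, and ZERO true cross-lagged effects. A naive
# CLPM-style regression that ignores the random intercepts is biased.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_waves = 500, 6

# Correlated person-level intercepts (the between-person association).
mu = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n_persons)

x = np.zeros((n_persons, n_waves))
y = np.zeros((n_persons, n_waves))
x[:, 0] = mu[:, 0] + rng.normal(size=n_persons)
y[:, 0] = mu[:, 1] + rng.normal(size=n_persons)
for t in range(1, n_waves):
    # Within-person AR(1) around each person's mean; no cross-lags.
    x[:, t] = mu[:, 0] + 0.4 * (x[:, t-1] - mu[:, 0]) + rng.normal(size=n_persons)
    y[:, t] = mu[:, 1] + 0.4 * (y[:, t-1] - mu[:, 1]) + rng.normal(size=n_persons)

# Naive pooled regression of y_t on x_{t-1} and y_{t-1}.
X = np.column_stack([np.ones(n_persons * (n_waves - 1)),
                     x[:, :-1].ravel(), y[:, :-1].ravel()])
b = np.linalg.lstsq(X, y[:, 1:].ravel(), rcond=None)[0]
print(f"spurious cross-lag estimate: {b[1]:.3f}")  # nonzero despite no true effect
```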
Biography
Kevin J. Grimm, Ph.D., is Professor of Psychology at Arizona State University. He received his B.A. in Mathematics and Psychology with a concentration in Education from Gettysburg College in 2000, and his M.A. (2003) and Ph.D. (2006) in Psychology at the University of Virginia. Grimm’s research interests include multivariate methods for the analysis of change, multiple group and latent class models for understanding divergent developmental processes, nonlinearity in development, and machine learning techniques for psychological data. Grimm is an author of Growth Modeling: Structural Equation and Multilevel Modeling Approaches with Nilam Ram and Ryne Estabrook. At Arizona State University, Grimm teaches undergraduate and graduate quantitative courses, including Longitudinal Growth Modeling, Machine Learning in the Psychological Sciences, and Structural Equation Modeling. Grimm also teaches workshops sponsored by the American Psychological Association’s Advanced Training Institutes, the Longitudinal Research Institute, and Statistical Horizons.
Joshua Jackson
Saul and Louise Rosenzweig Associate Professor in Personality Science, Department of Psychological & Brain Sciences, Washington University in St. Louis
Wednesday, February 24, 2021, 9:40 a.m.
“IDIOGRAPHIC ASSESSMENT AS A BRIDGE BETWEEN BOUTIQUE AND BIG DATA”
The utility tradeoff between big data and smaller boutique data mirrors classic debates in personality psychology. Relatively simplistic trait models of individual differences parallel the advantages of boutique data by emphasizing interpretability and validity. In contrast, more dynamic, social-cognitive models of individual differences underscore the need to incorporate broader and bigger data sources that include time and context. While both perspectives have their advocates and advantages, few theoretical or statistical models attempt to integrate the two sides. We propose idiographic assessment as a framework that integrates the two perspectives, allowing for bigger, more dynamic data while providing valid and interpretable construct assessment. Through example data, we discuss how these models can be compared between and within people (and groups), how to ascertain whether different modalities of data provide noise or utility, and how to interpret the resulting predictions.
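As a toy illustration of the idiographic idea (all data and variable names are hypothetical), the sketch below fits the same simple model within each person and contrasts the person-specific estimates with a single pooled, nomothetic estimate:

```python
# Hypothetical intensive-longitudinal data: slopes genuinely differ by
# person, so a single pooled slope hides meaningful individual variation.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_obs = 20, 50

pooled_x, pooled_y, person_slopes = [], [], []
for _ in range(n_persons):
    true_slope = rng.normal(0.5, 0.4)            # person-specific effect
    x = rng.normal(size=n_obs)                   # e.g., daily context measure
    y = true_slope * x + rng.normal(size=n_obs)  # e.g., daily state measure
    person_slopes.append(np.polyfit(x, y, 1)[0]) # idiographic estimate
    pooled_x.append(x)
    pooled_y.append(y)

pooled_slope = np.polyfit(np.concatenate(pooled_x), np.concatenate(pooled_y), 1)[0]
print(f"pooled (nomothetic) slope: {pooled_slope:.2f}")
print(f"person-specific slopes range: {min(person_slopes):.2f} to {max(person_slopes):.2f}")
```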
Biography
Joshua Jackson is currently the Saul and Louise Rosenzweig Associate Professor of Personality Science at Washington University in St. Louis. He earned a bachelor’s degree in psychology and philosophy at the University of Wisconsin–Madison in 2005, after which he received a doctorate in personality psychology from the University of Illinois in 2011, with a minor in quantitative psychology. Jackson currently directs the Personality Measurement and Development Lab which studies why personality has wide-ranging, long-term effects on important life domains like health, wealth and happiness. He has received grant funding from sources such as the National Institute on Aging, the National Science Foundation, the John Templeton Foundation, the National Institute of Mental Health and the National Institute of Neurological Disorders and Stroke.
Patrick C. Kyllonen
Distinguished Presidential Appointee (R&D), Educational Testing Service
Wednesday, February 24, 2021, 11 a.m.
“PREDICTION, EXPLANATION AND CAUSALITY IN PERSONNEL SELECTION RESEARCH”
Cognitive and noncognitive applicant assessment scores are commonly used in workforce personnel selection and in educational admissions decisionmaking. This use is typically justified on the basis of the accuracy of assessment scores in predicting workforce and educational outcomes. However, our ability to predict outcomes often exceeds our ability to explain the prediction. The accuracy of outcome prediction has steadily risen with big data and statistical and machine learning methodology; our ability to explain predictions has not kept pace. This can be shown by examining the winning models in the recent SIOP machine learning competitions and the challenges of explaining automatically generated employability scores from automated interviews. Kyllonen will also illustrate the phenomenon with some college admissions data. Even when we can explain a prediction, there is still a difference between explanation and demonstrating causality, which might be practically limited to showing that changing the level of a predictor variable results in changed outcomes.
Biography
Patrick Kyllonen is distinguished presidential appointee at Educational Testing Service in Princeton, New Jersey. Kyllonen received a B.A. from St. John’s University and a Ph.D. from Stanford University. He authored “Generating Items for Cognitive Tests” (with S. Irvine, 2001); “Learning and Individual Differences” (with P. L. Ackerman & R. D. Roberts, 1999); “Extending Intelligence: Enhancement and New Constructs” (with R. Roberts and L. Stankov, 2008); and “Innovative Assessment of Collaboration” (with A. von Davier and M. Zhu, 2017). He is a fellow of the American Psychological Association and the American Educational Research Association and has coauthored several National Academy of Sciences reports. Kyllonen directed the Center for Academic and Workforce Readiness and Success at ETS, which identified and measured new constructs for applications in K–12, higher education and the workforce. The work led to several transitions to commercial use, including the WorkFORCE Assessment for Job Fit, SuccessNavigator, Mission Skills Assessment, Personal Potential Index and FACETS.
Richard N. Landers
Associate Professor of Psychology and Holder of the John P. Campbell Distinguished Professorship of Industrial and Organizational Psychology, University of Minnesota
Thursday, February 25, 2021, 11 a.m.
“GAME-BASED ASSESSMENT: CONTRASTING PURPOSIVE AND TRACE DATA FOR PSYCHOMETRIC ASSESSMENT”
Game-based assessment (GBA) is growing in popularity as a way to obtain information about individual psychometric traits without the disadvantages of traditional testing. Among the many promises of GBAs are increased candidate engagement, decreased faking and richer data from which to draw conclusions about individual differences. Despite these promises, research has lagged. In this presentation, I will explore one GBA that has been designed and developed to assess general cognitive ability. Using this GBA, I will describe two studies. First, I will describe our validation of the GBA’s purposive data (i.e., the scores the game was designed to output as cognitive ability indicators) as measures of cognitive ability. Second, I will describe how we have explored the use of the trace data that the GBA produces to draw conclusions about both cognitive ability and noncognitive traits.
Biography
Richard Landers is an associate professor of psychology at the University of Minnesota and holds the John P. Campbell Distinguished Professorship of Industrial and Organizational Psychology. His research concerns the use of innovative technologies in psychometric assessment, employee selection, adult learning and research methods, and his recent focus has been on game-based assessment, gamification, artificial intelligence, unproctored Internet-based testing, mobile devices, virtual reality, and online social media. His work has been published in the Journal of Applied Psychology, Computers in Human Behavior, and Psychological Methods, among numerous others, and his work has been featured in popular outlets such as Forbes, Business Insider, and Popular Science. In addition to his associate editorships at various journals, he is also author of a statistics textbook, “A Step-by-Step Introduction to Statistics for Business,” and he has developed two edited volumes: “Social Media in Employee Selection” and the 2019 “Cambridge Handbook of Technology and Employee Behavior.”
Robert J. Mislevy
Frederic M. Lord Chair in Measurement & Statistics, Educational Testing Service
Thursday, February 25, 2021, 11:20 a.m.
“ADVANCES IN THE SCIENCE OF MEASUREMENT AND COGNITION”
Situative, sociocognitive (SC) psychology is forcing a reconception of educational assessment. The SC perspective emphasizes the interplay between across-person linguistic, cultural and substantive patterns around which human activity is organized and within-person cognitive resources that individuals develop in order to participate in activities. Rather than seeing assessment primarily as measurement, we are increasingly seeing it as an evidentiary argument, situated in social contexts, shaped by purposes and centered on students’ developing capabilities for valued activities. Developments in technology and analytic methods support both new practices and familiar practices reconceived. Implications follow for current challenges such as assessing higher-order skills, assessing performance in digital environments and serving diverse student populations.
Biography
Robert J. Mislevy is the Frederic M. Lord Chair in Measurement and Statistics at Educational Testing Service and professor emeritus at the University of Maryland. His research applies developments in technology, statistics and cognitive science to practical problems in educational assessment. His work includes an evidence-centered assessment design framework, simulation-based assessment of network engineering with Cisco Systems and game-based assessment of systems thinking in the GlassLab project. His publications include the books “Sociocognitive Foundations of Educational Measurement,” “Bayesian Psychometric Modeling” and “Bayesian Networks in Educational Assessment,” as well as some 200 journal articles and chapters in edited volumes, including “Cognitive Psychology and Educational Assessment” in “Educational Measurement” (Fourth ed.). He has received career awards from AERA and the National Council on Measurement in Education (NCME) and the NCME Award for Technical Contributions to Measurement (four times), was president of the Psychometric Society, and is a member of the National Academy of Education.
Dan J. Putka
Principal Scientist, Human Resources Research Organization
Thursday, February 25, 2021, 9 a.m.
“EXPLORING LANGUAGE-BASED APPROACHES TO UNDERSTANDING THE WORLD OF WORK”
Over the past decade, psychologists have grown increasingly interested in understanding linkages between language usage and individual differences. From research on predicting sentiment from text to studies relating social media posts to personality, interest in the connections between the language one uses and one’s underlying characteristics abounds. What has received less systematic attention from researchers is the potential for analyses of language usage to further our understanding of the world of work. What connections exist between the language used to describe work and the characteristics required to perform that work effectively? In this presentation, Putka will discuss opportunities arising from the confluence of technology and analytic methods for analyzing unstructured data on jobs, which can facilitate new forms of systematic examination of the world of work and its human requirements.
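One simple language-based approach consistent with this theme (my choice of technique, not necessarily Putka’s) is to represent job descriptions as TF-IDF vectors and compare them by cosine similarity to surface jobs with similar human requirements:

```python
# Hypothetical job-description texts; TF-IDF plus cosine similarity is
# one basic way to quantify similarity in the language used to describe work.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_descriptions = {
    "data analyst": "analyze data build statistical models report findings",
    "statistician": "design studies fit statistical models interpret data",
    "copy editor": "edit manuscripts correct grammar ensure style consistency",
}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(list(job_descriptions.values()))
sim = cosine_similarity(X)  # pairwise similarity between job descriptions
for job, row in zip(job_descriptions, sim):
    print(job, [f"{v:.2f}" for v in row])
```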
Biography
Dan J. Putka is a principal scientist at HumRRO in Alexandria, Virginia. Over the past 18 years, Putka has helped numerous organizations develop, evaluate and implement assessments to enhance their hiring and promotion processes and guide individuals to career and job opportunities that fit them well. Complementing his client-centered work, Putka has maintained an active presence in the I-O psychology scientific community, focusing on advancing psychometric and analytic methods that are sensitive to the demands of applied research and practice. Along these lines, he has presented and published his work widely and serves on the editorial boards of multiple scientific journals. Putka is a fellow of APA and three of its divisions: SIOP (Division 14), the Quantitative and Qualitative Methods Division (Division 5) and the Society for Military Psychology (Division 19). He holds a Ph.D. in I-O psychology, with a specialization in quantitative methods, from Ohio University.
Akane Sano
Assistant Professor of Electrical and Computer Engineering, Rice University
Wednesday, February 24, 2021, 11:40 a.m.
“MEASURING MOOD, STRESS AND PERFORMANCE WITH MOBILE SENSORS AND MACHINE LEARNING”
This talk highlights lessons learned from a series of ambulatory studies, developed to measure mood, stress and performance, that were run in cohorts of college students, office workers and shift workers, collecting continuous wearable and mobile phone data. The talk overviews the objectives and methods of these studies, the challenges faced and some key findings on measuring and forecasting mood changes and stress.
Biography
Akane Sano is an assistant professor at Rice University in the Department of Electrical and Computer Engineering, where she directs the Computational Wellbeing Group. Her research focuses on affective computing and mobile health: human sensing, data analysis and application development for health, wellbeing and cognitive performance. She has worked on measuring and predicting stress, mental health, sleep and performance, and on designing systems that help people reduce their stress and improve their mental health, sleep and performance. Her projects include the NSF Future of Work project (an intelligent cognitive assistant for shift workers), the NIH-funded SNAPSHOT study, the Eureka project (symptom prediction and digital phenotyping in schizophrenia using phone data) and the IARPA mPerf project (using mobile sensors to support productivity and employee well-being). She obtained her Ph.D. at MIT and her M.Eng. and B.Eng. at Keio University, Japan. Before joining Rice University, she was a research scientist in the Affective Computing Group at the MIT Media Lab and a visiting scientist/lecturer at the People-Aware Computing Lab, Cornell University. Before coming to the U.S., she was a researcher/engineer at Sony Corporation from 2005 to 2010, working on affective/wearable computing, intelligent systems and human-computer interaction. Recent awards include the 2019 Microsoft Productivity Research Award, the Best Paper Award at the IEEE BHI 2019 conference, the Best Paper Award at the NIPS 2016 Workshop on Machine Learning for Health and the 2014 AAAI Spring Symposium Best Presentation Award.
Nancy T. Tippins
Principal, The Nancy T. Tippins Group, LLC
Friday, February 26, 2021, 9:20 a.m.
“TESTING STANDARDS AND INNOVATIVE APPROACHES TO EMPLOYMENT TESTING”
Recently, test developers have harnessed the power of technology to create innovative types of tests for employment settings, along with new, nontraditional methods for developing scoring algorithms and validating the inferences made from these new selection procedures. These new forms of testing include, but are not limited to, games, evaluation of facial features and voice qualities, and algorithms that combine data scraped from applications, resumes and other sources. This presentation will compare these new practices and techniques to existing professional and legal standards for employment testing and highlight areas of concern for testing professionals and employers.
Biography
Nancy Tippins has spent her career assisting organizations in the development, validation and use of employment tests. Her work includes creating strategies for workforce planning, talent acquisition, competency identification, employee and leadership development and litigation support. Active in professional affairs, Tippins has a long-standing involvement with the Society for Industrial and Organizational Psychology (SIOP) and served as its president in 2000–2001. In addition, she served on the committee to revise the “Principles for the Validation and Use of Personnel Selection Procedures” (1999) and co-chaired the committee for the 2018 revision of the “Principles.” She served on the committees to establish international testing standards (ISO 10667) and to revise the “Standards for Educational and Psychological Testing” (2014). She is a fellow of the American Psychological Association (APA), SIOP (Division 14 of the APA), the Quantitative and Qualitative Methods Division (Division 5 of the APA) and the Association for Psychological Science.
Sang Eun Woo
Associate Professor, Industrial and Organizational Psychology, Purdue University
Thursday, February 25, 2021, 11:40 a.m.
“GROUND TRUTHS IN ALGORITHM-BASED MEASUREMENT FOR SELECTION”
The goal of this presentation is to explicate the “ground truth” problem in measurement practices that utilize machine learning (ML) scoring algorithms and how it relates to discussions of validity and bias in the selection context. For example, when developing ML algorithms to score video interviews for selection, there are multiple options as to what could serve as ground truth (e.g., self-reported personality, interview performance rated by managers, or supervisory ratings of job performance). Such choices in training and validating ML algorithms have significant implications for how ML-based measurement should be validated. I will discuss the pros and cons of choosing different sources of information as ground truth in this context and highlight a few areas in which future research efforts will be most beneficial.
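A schematic sketch of the choice at stake follows, with all data hypothetical: the same interview-derived features are trained against two different ground truths, and each resulting scorer is then checked against an external criterion. The label source changes what the algorithm learns to measure.

```python
# Hypothetical data: training the same features against different
# "ground truths" yields scorers with very different criterion validity.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 400
features = rng.normal(size=(n, 10))  # e.g., interview-derived features

# Two candidate ground truths, related to different feature subsets.
self_report = features[:, :3].sum(axis=1) + rng.normal(size=n)
manager_rating = features[:, 3:6].sum(axis=1) + rng.normal(size=n)
# External criterion, e.g., later job performance.
job_performance = features[:, 3:6].sum(axis=1) + rng.normal(scale=2, size=n)

for name, labels in [("self-report", self_report),
                     ("manager rating", manager_rating)]:
    model = LinearRegression().fit(features, labels)
    scores = model.predict(features)
    r = np.corrcoef(scores, job_performance)[0, 1]
    print(f"ground truth = {name}: criterion validity r = {r:.2f}")
```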
Biography
Sang Eun Woo is an associate professor in the Department of Psychological Sciences at Purdue University. Broadly put, her research addresses how personality and motivation can help explain various psychological phenomena in the workplace, such as employee turnover and interpersonal relationships. Woo also has a keen interest in innovative approaches to psychological measurement and research in general. Her focal expertise lies in developing and validating techniques for assessing personality and individual differences for various organizational and educational purposes (e.g., selection, retention), as well as in clarifying the theoretical underpinnings and implications of such techniques. Woo recently co-edited the book “Big Data in Psychological Research” (APA Books, 2020). She serves on the editorial boards of Organizational Research Methods, Journal of Applied Psychology, Journal of Management, Journal of Business and Psychology, and Human Resource Management Review and on the APA Committee on Psychological Tests and Assessment (2019–2021).