INSIGHT Lab
Interpretable Systems for Inductive Generation of Human-Centered Theories
Theories reflect scientists’ understanding of phenomena. Ideally, they are unambiguous and continuously updated based on new findings. However, many social-scientific theories lack formalization and are flexible enough to accommodate contradictory findings, which limits scientific progress. The INSIGHT lab asks: How can we construct better theories?
One place we look for inspiration is patterns in data, just as observing a falling apple inspired Newton’s theory of gravitation. We translate these data patterns, detected with machine learning, into unambiguous formal theories, and we make those theories shareable so that others can update them.
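A minimal sketch of this pattern-to-theory step, with hypothetical data and an illustrative selection criterion (nothing here is our actual pipeline): fit a handful of candidate formal expressions to observations and report the best-supported one as an explicit, shareable equation.

```python
# Minimal sketch of pattern-to-theory translation; data, candidate forms,
# and the BIC criterion are illustrative assumptions, not the lab's method.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 10, 200)
y = 3.0 * np.log(x) + rng.normal(0, 0.2, x.size)  # hypothetical observations

# Candidate formal theories as (label, design matrix) pairs.
candidates = {
    "y = a + b*x": np.column_stack([np.ones_like(x), x]),
    "y = a + b*x^2": np.column_stack([np.ones_like(x), x**2]),
    "y = a + b*log(x)": np.column_stack([np.ones_like(x), np.log(x)]),
}

def score_model(X, y):
    """Least-squares fit plus BIC (up to a constant) for model selection."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    return n * np.log(resid @ resid / n) + k * np.log(n), beta

scores = {label: score_model(X, y) for label, X in candidates.items()}
best = min(scores, key=lambda label: scores[label][0])
print("selected theory:", best, "| coefficients:", np.round(scores[best][1], 2))
```

The winning label is a formal statement that another researcher can refit, extend, or falsify on new data.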
Another place we look for inspiration is published research: even when authors are not explicit about their theoretical assumptions, their writing may reveal latent causal assumptions. We detect such causal claims, parse them, and reassemble the latent theory of a field using text-mining and research-synthesis methods.
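As a hedged illustration of claim detection and reassembly, the sketch below runs a deliberately naive pattern matcher over made-up abstracts; real pipelines rely on proper NLP parsing, and the sentences, cue words, and graph format here are all assumptions for demonstration.

```python
# Hedged sketch: extract toy causal claims from hypothetical abstracts and
# reassemble them into a field-level causal graph.
import re
from collections import defaultdict

abstracts = [
    "We find that job insecurity increases stress.",
    "Our models show that stress reduces sleep quality.",
    "Prior work suggests that social support reduces stress.",
]

# Naive cue: a reporting clause ("... that ...") followed by a causal verb.
CLAIM = re.compile(r"that ([\w ]+?) (increases|reduces|causes) ([\w ]+?)\.")

graph = defaultdict(list)  # cause -> [(sign, effect), ...]
for text in abstracts:
    for cause, verb, effect in CLAIM.findall(text):
        sign = "-" if verb == "reduces" else "+"
        graph[cause].append((sign, effect))

for cause, effects in graph.items():
    for sign, effect in effects:
        print(f"{cause} --[{sign}]--> {effect}")
```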
We further explore the integration of formal theories into scientific workflows, whether for sample-size estimation, simulation and what-if studies, or theory-driven interpretation of effect sizes. We welcome a holistic approach, as complex phenomena benefit from triangulation across quantitative and qualitative methods, automated synthesis, and expert knowledge.
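To make one such workflow concrete, here is a minimal sketch of simulation-based sample-size estimation; the generating equation y = 0.3·x + noise, the significance level, and the power target are hypothetical placeholders rather than results from our work.

```python
# Hedged sketch: treat a formal theory as a data generator and estimate by
# simulation how large a sample a study needs for roughly 80% power.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

def power(n, effect=0.3, alpha=0.05, reps=2000):
    """Fraction of simulated studies of size n that detect the effect."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = effect * x + rng.normal(size=n)  # the theory as a simulator
        if pearsonr(x, y)[1] < alpha:  # p-value below threshold = detection
            hits += 1
    return hits / reps

for n in (50, 80, 110, 140):
    print(f"n={n:3d}  estimated power={power(n):.2f}")
```

The same generator doubles as a what-if tool: changing the assumed effect or noise level and rerunning the loop shows how the required sample size shifts.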
Our different approaches are unified by open-science practices, computational tools, and a preference for a-priori interpretable methods over black-box approaches, ensuring rigor and transparency in our pursuit of cumulative knowledge acquisition.
Get in touch if you:
- Want to collaborate
- Have an interesting challenge that could benefit from our approaches
- Have a rich dataset we can mine for theoretical insights
- Want to do a research visit at our lab
- Want to invite a lab member for a talk
SAFE Labs

The INSIGHT lab subscribes to the SAFE Labs Handbook to ensure a fair, transparent, and inclusive research environment; see the Lab Manual.