That's a learning science question, and I've spent my career building solutions for it. I'm Cat Hicks, PhD. I study how developers learn, grow, and scaffold their innovation with collective intelligence. I help engineering organizations protect and measure what matters most about their people while everything changes around them. With Catharsis, I bring Software Science & Strategy to leaders and technical teams facing their biggest transformation challenge yet.
Continuing to treat software engineering like it's a "black box" is no way to face the AI coding era. Right now, tech leaders are buying AI tools and hoping they work, clinging to old software metrics without context, and wondering why some developers are struggling while others take off. We're watching junior developers struggle and senior developers burn out, and when someone in leadership finally asks, "do we really know whether this is working?", the answer is a dashboard that tells them nothing actionable.
Getting the real ROI from AI in software isn't about whether AI makes your org produce more code, faster. True, lasting ROI that benefits your org *and* your developers comes from knowing what happens to human capability when AI changes how we do the work. Leaders who ensure developers can use these tools as a cognitive scaffold, taking on more complex work and doing it better, will build the right tooling and workflows to unlock our problem-solving minds in this era. Your developers are already running a million of their own experiments, and it's your job to listen. That's a developer learning science question, and that's what I've spent my career on. In three years, the juniors who never learned to debug because AI did it for them will be your mid-levels. The mid-levels who stopped deepening their architectural thinking because AI generated the boilerplate will be your seniors. If you're not designing for expertise & developer growth in how you use AI right now, you're building on a foundation that's quietly eroding.
I'm a learning scientist and a psychologist for the humans of tech. I've spent years studying why some teams innovate and overcome together when they face new challenges, and what makes others fail. And I've designed successful behavioral interventions that measurably shift software teams into new, more adaptive strategies. I've led large-scale industry-changing research at organizations like Google, Khan Academy, and Pluralsight that's directly helped thousands of technologists change how they work. Now I'm focusing on how software teams can build collective intelligence in this era. I'm here to help you get this right.
Engagements built on peer-reviewed methodology. Every offering is designed to give you evidence you can defend and strategy you can act on. For larger engagements, Catharsis conducts an intentional intake process to co-design the proposal, matching the science and strategy to your unique situation.
You bring the problem — AI evaluation, measurement strategy, developer experience, "my VP asked me to prove this works and I have two weeks" — and we hash out a plan together in a ninety-minute semi-structured design session.
Because I genuinely enjoy learning with you, these are priced deliberately to be easy to expense and hard to regret. Many clients use this to scope a larger engagement, but it's designed to be genuinely valuable on its own.
Book a Session →
This is the heart of what Catharsis does. You get a learning scientist embedded in your organization's hardest questions for weeks or months, and a fully-worked original investigation for your context with methodological rigor your board can't poke holes in. Full research projects are scoped from design to delivery of unique insights, and these engagements are therefore highly bespoke.
Transparency makes our research far higher impact with technical communities. Most research partnerships include a publishing clause: anonymized findings become open science. Your organization gets the tailored results. The developer community gets the generalizable science. You get to be part of advancing what we know, and you get a significant recruiting and reputation asset in a rapidly-changing field where developer trust is key.
Drawing on 15+ years leading complex real-world causal inference and evidence science work, I help your teams assess whether your organization is measuring what matters, making its successes legible, and building the guardrails it needs to avoid pitfalls. You get an honest evaluation of your current approach, consultative and skill-building support for your teams' evidence strategies, plus a roadmap for building evidence practices that actually work. Applicable to AI adoption, developer experience, culture initiatives, infrastructure teams growing their scope of responsibility and impact, and time-bound tooling investment decisions.
Design an intervention and evaluation plan that helps you achieve lasting success on a strategic priority such as developer learning & technical expertise, software best practices, or innovative AI workflows. Includes a targeted behavioral intervention designed to create durable change in your engineering organization, grounded in robustly-tested empirical science and adapted to your organization's unique context. Qualitative interviews, validated surveys, and trace metrics are woven into a measurement practice built for implementation. If you're an evidence-based engineering leader, you will want to use this approach for the rest of your career.
Catharsis is constantly experimenting and always working at the edge of what's possible in the science of software teams. Unusual problems, novel partnerships, interesting data, genuinely hard questions, or a desire to fund an open science software study? If you have a reason to think we should work together, I want to hear it.
Driven by curiosity and a commitment to building, just like my technologist community.
dr.cat.hicks@gmail.com →
Most consultancies gatekeep their frameworks because that's their product. Catharsis operates with a different vision. I give science away because I know open science lifts all of us: every validated instrument, every research report, every finding I surface in my work across technical communities. I believe good science should belong to the community that needs it, and I'd rather compete on insight and implementation than on artificial scarcity. My developer science is shared under a Creative Commons Attribution 4.0 International Public License, meaning you are free to use it in your organization. Let me know how you use it!
The flagship study of developer thriving across 1,282 professional software developers in 12+ industries. Established the LABS model — learning culture, agency, belonging, and self-efficacy — as productivity predictors and tested instruments across diverse developer populations, including rigorous testing to ensure equitable findings and usage across race, gender, country, and industry. Validated the initial instruments now used in Catharsis engagements worldwide.
Read the study
Foundational research on the crucial psychological factors that predict team success during agentic coding, across 3,000+ developers and software managers. Documents the psychological mechanism by which AI tools trigger skill threat in developers — reducing engagement, learning behavior, and creative problem-solving. Identifies learning culture and belonging as factors that significantly buffer this threat, cutting AI Skill Threat rates in half and increasing team effectiveness. The research your organization needs before setting AI adoption policy.
Read the study
Code review anxiety drives a 40% increase in avoidance of technical problem-solving. This is the first empirically-tested model for reducing code review anxiety in real software contexts: a brief, scalable training grounded in developer science that produced statistically significant reductions in anxiety and avoidance, and meaningful increases in developer self-efficacy.
Read the study
Engineering manager experiences provide a unique and important window into how software teams overcome failure and achieve lasting change. I studied the strategies of 465 experienced engineering managers while they pushed for change and advocated for best practices for their teams' learning and upskilling.
Read the study
Cycle time analysis across 55,000+ observations from 216 organizations — one of the largest empirical productivity datasets in the developer research literature. Demonstrates why simplistic benchmarks actively mislead, and what rigorous measurement reveals about what actually drives output variation across teams.
Read the study
Qualitative research with twenty-five full-time developers exploring how learning environments shape code writers' creativity and knowledge-sharing. Introduced the concept of Learning Debt — how environments that discourage learning compound into collective capability loss. Includes practical recommendations for cultivating learning culture.
Read the study
Proposes that the psychological affordances around developers provide a "missing layer" to developer experience — an overlooked explanatory lens for why interventions to improve developer experience take hold or fail. Drawing from the "seed and soil" metaphor, interventions are more likely to stick when they are congruent with the systemic social messages, opportunities, and culture of the organization.
Read the paper
An alternative to individualistic explanations for developer problem-solving. Introduces underappreciated elements of developers' communal, social cognition — the collective intelligence that empowers or constrains the solutions teams can access. The theoretical foundation for everything I do.
Read the paper
A Claude Code skill that demonstrates an adaptive "dynamic textbook" approach to help developers integrate evidence-based expertise-building exercises into agentic coding workflows. When you complete architectural work, the skill offers optional 10–15 minute exercises using prediction, retrieval practice, and spaced repetition drawn from your own project.
Get the skill on GitHub
Participate in active research studies on developer experience, AI tool adoption, and team dynamics. Real science, open enrollment, community-owned findings.
Downloadable, anonymized datasets from our research programs. Play with real developer data. Run your own analyses. Build on the science. We believe data is for everyone.
Over fifteen years of original research, including forging my own paths where none existed, I've cultivated an enduring love for the science of human performance and achievement. Moving from foundational cognitive science into software teams, from academia into industry, from hands-on research expert to founder, every stop has shaped both how I understand what actually makes people thrive in technical work and the empathy I bring to your corner to fight for your good work to be seen and heard.

I came up through cognitive science and the psychological sciences — studying how people actually learn complex skills, how beliefs shape our behavior, what makes expertise stick, and what different environments do to human capability over time. I took that into tech at organizations like Google and Khan Academy, researching how people learn to code, and as a founder of a devtools startup (Travrse, funded by Precursor Ventures) that used cognitive science to design a tool that helped developers ramp up more quickly on unfamiliar code. At Pluralsight, I took on a bold new mission and built a research team from the ground up that served rigorous open science to a global developer community. As VP of Research, I studied hundreds of thousands of technical learners and used that science to drive progress for tech teams on two fronts: bringing a human-centered approach to global clients on their toughest engineering challenges, while simultaneously lifting the state of the art for public methodologies in software research. Now, I'm leading Catharsis to put answers behind the questions everyone is asking and no one else is answering.
I'm a maker, not just a thinker. I'm actively experimenting in the developer cognition space, building speculative tooling that puts evidence-based learning science directly in developers' hands during their agentic coding sessions, and experimenting with new interfaces that help developers supercharge their tech skill building. This gives me a unique window into developer communities and a connection I've spent years earning: I spend hundreds of hours interviewing practitioners, listening to their stories and pain points, and designing both science and solutions from the ground up in dialogue with software communities.
Every measurement project is an opportunity to make an argument to your organization about what you should care about. I think we should care about people.
Read My Latest →
Peer-reviewed methodology. Validated instruments. Rigorous models tested in science, not industry buzzwords. I combine sharp computational social science, qualitative interviews, developer-co-designed surveys, and large-scale statistical analysis of software metrics, because complex real problems need multiple forms of evidence. Your leadership gets findings they can defend, and your developers are treated as collaborators.
I see Developer Experience as a function of environment, not personality or immutable traits. I believe in the potential of individuals to grow and change, and I measure the psychological affordances your culture provides to help you change the structural conditions that actually move outcomes.
Good evidence requires earning trust — asking why people should share their honest experience, what happens to that data, and what changes afterward. I think about projects as implementation science first, and design for community-based action research. This approach shapes projects developers will advocate for their orgs to invest in.
Cat Hicks is one of the most brilliant people I have met in this industry. She sees things in a truly unique way, and brings a multidisciplinary approach to everything she does.
Dr. Hicks' research leadership continues to center and inform key conversations around how to support and enrich developers as they learn, build, and grow together. Her work is thorough, sound, and sets the bar for how to move applied research into a global ecosystem.
Dr. Cat Hicks is a master of exploring the human side of technology.
If you're a software engineering manager, "It's Like Coding in the Dark" by Cat Hicks is the most useful thing you could be reading right now.
With touching compassion for software developers as people with individual needs and essential social connections, Cat Hicks brings research and insight to explode the tired debates about "developer productivity," and provides real answers and practical approaches for improvement.
Past & present collaborators include
The Psychology of Software Teams is the evidence-based case for why investing in developer psychology is the thing that makes everything else work. Based on years of research, modern psychological science, and original empirical data from 10,000+ real developers, this is a working, human-centered field guide to what developers and their leaders need to know to create innovative teams, thriving careers, and lasting capability in the AI era.
Join the Newsletter →
Whether you need to evaluate what AI is actually doing to your developers, diagnose where your learning culture is breaking down, build the measurement practice that outlasts your current crisis, or just have a very good 90-minute conversation about evidence — I'd love to figure out what you need. When my schedule allows, I offer 90-minute 1:1 design sessions for engineering leaders who need a secret weapon at the right time.
Book a Strategy Session — $575
Catharsis is proud to be a woman-owned and LGBT-owned business based in San Diego, CA. Where possible, we contribute volunteer data and research support to our local community, including hosting learner events for building technical skills and providing quarterly pro bono research design consulting to organizations working on equity in education and healthcare. Catharsis also maintains a regular onsite presence in San Francisco, CA.