Empirical science that fights for software teams.

What happens to human capability when AI changes how we do the work?

That's a learning science question, and I've spent my career building solutions for it. I'm Cat Hicks, PhD. I study how developers learn, grow, and scaffold their innovation with collective intelligence. I help engineering organizations protect and measure what matters most about their people while everything changes around them. With Catharsis, I bring Software Science & Strategy to leaders and technical teams facing their biggest transformation challenge yet.

PhD · Computational Social Science · VP of Research · Ex: Google · Khan Academy · Pluralsight · UC San Diego · 10,000+ developers studied · Peer-reviewed · Open source instruments · DevTool builder
25–30%
empirically measured difference in team effectiveness during AI adoption for teams that invest in a shared learning culture, tested across 12+ industries. It's one of the most powerful levers your org has, and the one most leaders aren't pulling.
Hicks et al., 2025 · n = 3,267 developers
40–50%
reductions in AI skill threat and code review anxiety when teams work through evidence-based behavioral interventions.
This isn't the time for boring trainings and tired pizza parties. I design, test, and empirically prove real change strategies embedded directly into developers' working days — in partnership with working technical teams who end up advocating for their organizations to keep going, because it actually helps.
Hicks et al., 2025 · Lee & Hicks, 2024
100%
of my validated research instruments are free and open source. Science should belong to the people it's built to help. When you partner with Catharsis, you're joining a mission to rehumanize tech for everyone.
30%+
of participants in my research programs are women, non-binary developers, and developers of color, versus a typical industry baseline of ~2% for large developer surveys. I recruit where others can't by designing intentional research experiences that give developers immediate benefits and protect their agency in the stories we tell together. Participants want to be there. That means better data, richer samples, and findings that actually generalize.

The agentic coding era is the biggest learning challenge your engineering org has ever faced.

Continuing to treat software engineering like a "black box" is no way to face the AI coding era. Right now, tech leaders are buying AI tools and hoping they work, clinging to old software metrics without context, and wondering why some developers are struggling while others take off. We're watching junior developers struggle and senior developers burn out, and when someone in leadership finally asks "do we really know whether this is working," the answer is a dashboard that tells them nothing actionable.

Getting the real ROI from AI in software isn't about whether AI makes your org produce more code, faster. True, lasting ROI that benefits your org *and* your developers comes from knowing what happens to human capability when AI changes how we do the work. Leaders who ensure developers can use these tools as a cognitive scaffold, so they take on more complex work and do it better, will build the right tooling and workflows to unlock our problem-solving minds in this era. Your developers are already running a million of their own experiments, and it's your job to listen. That's a developer learning science question, and that's what I've spent my career on. In three years, the juniors who never learned to debug because AI did it for them will be your mid-levels. The mid-levels who stopped deepening their architectural thinking because AI generated the boilerplate will be your seniors. If you're not designing for expertise & developer growth in how you use AI right now, you're building on a foundation that's quietly eroding.

I'm a learning scientist and a psychologist for the humans of tech. I've spent years studying why some teams innovate and overcome together when they face new challenges, and what makes others fail. And I've designed successful behavioral interventions that measurably shift software teams into new, more adaptive strategies. I've led large-scale industry-changing research at organizations like Google, Khan Academy, and Pluralsight that's directly helped thousands of technologists change how they work. Now I'm focusing on how software teams can build collective intelligence in this era. I'm here to help you get this right.

Pick your adventure

Engagements built on peer-reviewed methodology. Every offering is designed to give you evidence you can defend and strategy you can act on. For larger engagements, Catharsis conducts an intentional intake process to co-design the proposal, matching the science and strategy to your unique situation.

Game-changing Developer Research

Research Partnerships

Scoped per project

This is the heart of what Catharsis does. You get a learning scientist embedded in your organization's hardest questions for weeks or months, and a fully-worked original investigation for your context with methodological rigor your board can't poke holes in. Full research projects are scoped from design to delivery of unique insights, and these engagements are therefore highly bespoke.

Transparency makes our research far higher impact with technical communities. Most research partnerships include a publishing clause: anonymized findings become open science. Your organization gets the tailored results. The developer community gets the generalizable science. You get to be part of advancing what we know, and you get a significant recruiting and reputation asset in a rapidly-changing field where developer trust is key.

If the question keeps you up at night and you think answering it could be a true industry-changer, it's probably this one. I take on a limited number of these engagements per year because they deserve the depth.
The diagnostic

Evidence Audit

Typically 3–4 weeks

Drawing on 15+ years leading complex real-world causal inference and evidence science work, I help your teams assess how your organization measures what matters, makes its successes legible, and accounts for where you need guardrails to avoid pitfalls. You get an honest evaluation of your current approach, consultative and skill-building support for your teams' evidence strategies, plus a roadmap for building evidence practices that actually work. Applicable to AI adoption, developer experience, culture initiatives, infrastructure teams growing their scope of responsibility and impact, and time-bound tooling investment decisions.

For teams that sense they're measuring the wrong things but aren't sure what the right things are yet.
The build

Intervention Design & Measurement Framework

Typically 6–10 weeks

An intervention and evaluation plan designed to help you achieve lasting success on a strategic priority such as developer learning & technical expertise, software best practices, or innovative AI workflows. Includes a targeted behavioral intervention built to create durable change in your engineering organization, grounded in robustly-tested empirical science and adapted to your organization's unique context. Qualitative interviews, validated surveys, and trace metrics woven into a measurement practice built for implementation. If you're an evidence-based engineering leader, you will want to use this approach for the rest of your career.

For organizations ready to invest in a durable evidence practice that also gives you a recipe for behavior change, not a one-time report you'll forget about.
Something else

Not seeing the right fit? Reach out anyway.

Catharsis is constantly experimenting and always working at the edge of what's possible in the science of software teams. An unusual problem, a novel partnership, interesting data, a genuinely hard question, an open science software study you want to fund? If you have a reason to think we should work together, I want to hear it.

Driven by curiosity and a commitment to building, just like my technologist community.

dr.cat.hicks@gmail.com →

Developers deserve science. Here's mine.

Most consultancies gatekeep their frameworks because that's their product. Catharsis operates with a different vision. I give science away because I know open science lifts all of us: every validated instrument, every research report, every finding I surface in my work across technical communities. I believe good science should belong to the community that needs it, and I'd rather compete on insight and implementation than on artificial scarcity. My developer science is shared under a Creative Commons Attribution 4.0 International Public License, meaning you are free to use it in your organization. Let me know how you use it!

Diagram: Collective Social Learning drives transmission of solutions, scaffolding of individual productivity, and velocity of innovation
"Collective social learning (rather than solitary and isolated genius) plays a key role in the transmission of solutions, the scaffolding of individual productivity, and the overall velocity of innovation." - A Cumulative Culture Theory of Developer Problem-Solving (Hicks & Hevesi, 2024)
Peer-Reviewed Study

Developer Thriving at Scale

The flagship study of developer thriving across 1,282 professional software developers in 12+ industries. Established the LABS model — learning culture, agency, belonging and self-efficacy — as productivity predictors and tested instruments across diverse developer populations, including rigorous testing to ensure equitable findings and usage across race, gender, country and industry. Validated the initial instruments now used in Catharsis engagements worldwide.

Read the study
Research Study

AI Skill Threat in the transition to Agentic AI

Foundational research on the crucial psychological factors that predict team success during agentic coding, across 3,000+ developers and software managers. Documents the psychological mechanism by which AI tools trigger skill threat in developers — reducing engagement, learning behavior, and creative problem-solving. Identifies cultures of learning and belonging as significant buffers against this threat, cutting AI Skill Threat rates in half and increasing team effectiveness. The research your organization needs before setting AI adoption policy.

Read the study
Peer-Reviewed Intervention

Code Review Anxiety: A Behavioral Intervention

The first empirically-tested model for reducing code review anxiety in real software contexts. Code review anxiety drives a 40% increase in avoidance of technical problem-solving; a brief, scalable training grounded in developer science produced statistically significant reductions in anxiety and avoidance, and meaningful increases in developer self-efficacy.

Read the study
Research Study

Change Agents

Engineering manager experiences provide a unique and important window into how software teams overcome failure and achieve lasting change. I studied the strategies of 465 experienced engineering managers while they pushed for change and advocated for best practices for their teams' learning and upskilling.

Read the study
Peer-Reviewed Study

No Silver Bullets: Measuring Developer Productivity at Scale

Cycle time analysis across 55,000+ observations from 216 organizations — one of the largest empirical productivity datasets in the developer research literature. Demonstrates why simplistic benchmarks actively mislead, and what rigorous measurement reveals about what actually drives output variation across teams.

Read the study
Research Study · Report

It's Like Coding in the Dark: The Learning Debt Project

Qualitative research with twenty-five full-time developers exploring how learning environments shape code writers' creativity and knowledge-sharing. Introduced the concept of Learning Debt — how environments that discourage learning compound into collective capability loss. Includes practical recommendations for cultivating learning culture.

Read the study
Framework Paper

Psychological Affordances and Developer Experience

Proposes that the psychological affordances around developers provide a "missing layer" to developer experience — an overlooked explanatory lens for why interventions to improve developer experience take hold or fail. Drawing from the "seed and soil" metaphor, interventions are more likely to stick when they are congruent with the systemic social messages, opportunities, and culture of the organization.

Read the paper
Framework Paper

A Cumulative Culture Theory for Developer Problem-Solving

An alternative to individualistic explanations for developer problem-solving. Introduces underappreciated elements of developers' communal, social cognition — the collective intelligence that empowers or constrains the solutions teams can access. The theoretical foundation for everything I do.

Read the paper
Claude Skill · Developer Tool

The Learning-Opportunities Skill

A Claude Code skill that demonstrates an adaptive "dynamic textbook" approach to help developers integrate evidence-based expertise-building exercises into agentic coding workflows. When you complete architectural work, the skill offers optional 10–15 minute exercises using prediction, retrieval practice, and spaced repetition drawn from your own project.

Get the skill on GitHub
Coming soon

Open Research Calls

Participate in active research studies on developer experience, AI tool adoption, and team dynamics. Real science, open enrollment, community-owned findings.

Coming soon

Open Developer Data Hub

Downloadable, anonymized datasets from our research programs. Play with real developer data. Run your own analyses. Build on the science. We believe data is for everyone.

Cat Hicks, PhD

Over fifteen years of original research, often forging my own paths where none existed, I've cultivated an enduring love for the science of human performance and achievement. Moving from foundational cognitive science into software teams, from academia into industry, from hands-on research expert to founder, every stop has shaped both how I understand what actually makes people thrive in technical work and the empathy I bring to your corner to fight for your good work to be seen and heard. I came up through cognitive science and the psychological sciences — studying how people actually learn complex skills, how beliefs shape our behavior, what makes expertise stick, and what different environments do to human capability over time. I took that into tech at organizations like Google and Khan Academy researching how people learn to code, and as a co-founder of a devtools startup (Travrse, funded by Precursor Ventures) that used cognitive science to design a tool that helped developers ramp up more quickly on unfamiliar code. At Pluralsight, I took on a bold new mission and built a research team from the ground up that served rigorous open science to a global developer community. As VP of Research, I studied hundreds of thousands of technical learners and used that science to drive progress for tech teams on two fronts: bringing a human-centered approach to global clients on their toughest engineering challenges, while simultaneously lifting the state of the art for public methodologies in software research. Now, I'm leading Catharsis to put answers behind the questions everyone is asking and no one else is answering.

I'm a maker, not just a thinker. I'm actively experimenting in the developer cognition space, building speculative tooling that puts evidence-based learning science directly in developers' hands during their agentic coding sessions, and exploring new interfaces that help developers supercharge their tech skill building. This gives me a unique window into developer communities and a connection I've spent years earning: I spend hundreds of hours interviewing practitioners, listening to their stories and pain points, and designing both science and solutions from the ground up in dialogue with software communities.

Cat Hicks, PhD — founder of Catharsis
700+
Research participants studied in multinational research collaborations
2009–2014
PhD in Empirical Psychology
UC San Diego
300K+
Employees whose skill growth I measured and published peer-reviewed research on in Google technical learning programs
2014–2017
Research Fellow in CSE & Cog Sci → Learning Science @ Google
Design Lab · Google
$500K
Seed-funded. Co-founded a devtools startup using cognitive science to help developers ramp faster on unfamiliar code
2017–2018
Co-Founder & Head of Research
Travrse
3M+
Learners in Gates Foundation-funded research I published on topics such as effective best practices for Official SAT Practice and learning outcomes
2020–2021
Sr Research Scientist
Khan Academy
$10M+
Research-driven sales and renewal value. 10K+ developers studied. Lab built from zero to world-class impact.
2022–2025
VP of Research
Pluralsight
Now
Putting answers to the developer science questions everyone is asking.
2019–Present
Founder & Principal Researcher
Catharsis

Let's Fight for the Human

Every measurement project is an opportunity to make an argument to your organization about what you should care about. I think we should care about people.

Read My Latest →
Illustration of Cat Hicks, PhD

Evidence over agendas

Peer-reviewed methodology. Validated instruments. Rigorous models tested in science, not industry buzzwords. I combine sharp computational social science, qualitative interviews, developer-co-designed surveys, and large-scale statistical analysis of software metrics, because complex real problems need multiple forms of evidence. Your leadership gets findings they can defend, and your developers are treated as collaborators.

Systems over blame

I see Developer Experience as a function of environment, not personality or fixed traits. I believe in the potential of individuals to grow and change, and I measure the psychological affordances your culture provides to help you change the structural conditions that actually move outcomes.

Trust over measurement mandates

Good evidence requires earning trust — asking why people should share their honest experience, what happens to that data, and what changes afterward. I think about projects as implementation science first, and design for community-based action research. This approach shapes projects developers will advocate for their orgs to invest in.

Cat Hicks is one of the most brilliant people I have met in this industry. She sees things in a truly unique way, and brings a multidisciplinary approach to everything she does.

James Governor
Co-founder, RedMonk

Dr. Hicks' research leadership continues to center and inform key conversations around how to support and enrich developers as they learn, build, and grow together. Her work is thorough, sound, and sets the bar for how to move applied research into a global ecosystem.

Amanda Casari
Staff Developer Relations Engineer, Google Open Source

Dr. Cat Hicks is a master of exploring the human side of technology.

Hazel Weakly
Architect V / Lead of Guardrails, ING

If you're a software engineering manager, "It's Like Coding in the Dark" by Cat Hicks is the most useful thing you could be reading right now.

Greg Wilson
Co-founder & first Executive Director of Software Carpentry · ACM SIGSOFT Influential Educator of the Year · Author of 14 books on programming

With touching compassion for software developers as people with individual needs and essential social connections, Cat Hicks brings research and insight to explode the tired debates about "developer productivity," and provides real answers and practical approaches for improvement.

Eli Israel
Managing Partner, Gartner

Past & present collaborators include

Age of Learning · interviewing.io · Tetra Insights · JMP Statistical Software · Truity · McClennan Design · FreshForm · Personal Touch
The Psychology of Software Teams
A Field Guide for the Humans Building Software
Cat Hicks, PhD

The book behind the practice

The Psychology of Software Teams is the evidence-based case for why investing in developer psychology is the thing that makes everything else work. Based on years of research, modern psychological science, and original empirical data from 10,000+ real developers, this is a working, human-centered field guide to what developers and their leaders need to know to create innovative teams, thriving careers, and lasting capability in the AI era.

Join the Newsletter →

You've read this far. Let's work.

Whether you need to evaluate what AI is actually doing to your developers, diagnose where your learning culture is breaking down, build the measurement practice that outlasts your current crisis, or just have a very good 90-minute conversation about evidence — I'd love to figure out what you need. When my schedule allows, I offer 90-minute 1:1 design sessions for engineering leaders who need a secret weapon at the right time.

Book a Strategy Session — $575

Explore a larger project, or just ask a question

Email dr.cat.hicks@gmail.com — I read everything and I reply to genuine questions.

Want the research first?

Read Fight for the Human, the newsletter where I break down developer psychology and learning science for people who give a damn.

Catharsis is proud to be a woman-owned and LGBT-owned business based in San Diego, CA, that contributes volunteer data and research support to our local community where possible, including hosting learner events for building technical skills and providing pro bono research design consulting each quarter to organizations working on equity in education and healthcare. Catharsis also maintains a regular onsite presence in San Francisco, CA.