Episode 56

Scaling Values Interviews

About this Episode

In this episode of Offer Accepted, Maggie Landers, VP of Talent Acquisition at Harvey, shares how her team built a values interview that can scale with the company’s rapid growth without losing clarity or consistency. As Harvey expands across regions and hiring accelerates, Maggie explains why a structured values interview became essential for creating predictable signal and helping teams make grounded decisions quickly. She walks through how the framework was drafted using AI, shaped through real interviewer feedback, and strengthened through a pilot that revealed where interviewers needed more clarity and support.

Maggie highlights how Harvey’s values already shaped how people worked, which made them a strong foundation for evaluation. She shares how examples, calibrated rubrics, and interviewer training helped the process work across orgs and levels. We also explore how Harvey’s value of “jobs not finished” guided the team’s mindset: build something simple, test it in real conversations, refine it through feedback, and scale it only once it proves reliable. Together, these principles show how teams can move fast without losing judgment, creating a hiring process that is both human and high impact.

Topics

Hiring

This Episode's Guest

Maggie Landers

VP, Talent @ Harvey

Maggie Landers is the VP of Talent Acquisition at Harvey, where she leads global hiring and builds scalable processes to support the company’s rapid growth. Before Harvey, she led recruiting at Intercom and other high-growth companies, bringing deep experience in designing hiring systems that balance speed, clarity, and strong candidate experience. Her work focuses on creating simple and effective frameworks that help teams make consistent decisions and stay aligned across regions.

Takeaway 1

Values Must Be Lived Before They Can Be Interviewed For 🌱

Maggie came into Harvey with a clear objective: accelerate hiring without losing the behaviors that had made the company successful. The values were already strong and deeply embedded in how teams worked, which meant the goal was not to redefine them but to create a structured and predictable way to assess them. Starting from what already worked allowed the talent team to move quickly while anchoring the interview process in something real and observable.

Why It Matters:
Values interviews only function when the values themselves are lived inside the business. When teams already practice the behaviors, language, and decision-making patterns those values represent, interviewers share a common understanding of what success looks like. This reduces noise, removes personal interpretation, and gives candidates a clearer view of the environment they are walking into. A values interview based on real behaviors becomes far more predictive than one based on aspirational ideals.

Quick Tips

  • Validate that the values are lived. Maggie emphasized that Harvey’s values already influenced decisions and ways of working across the company, which made them strong enough to evaluate consistently. If the values are not already visible in how people operate, a values interview will only amplify inconsistency.
  • Use AI to generate initial questions, then test them in real interviews. AI sped up early drafting, but the quality came from pressure testing those questions with real interviewers and candidates. Maggie removed questions that led to vague or repetitive answers and prioritized those that surfaced clear examples of behaviors tied to success.
  • Create probes that reveal ownership and authenticity. Early testing showed that candidates could describe a behavior without demonstrating how they actually operate. Maggie added probing questions focused on responsibility and honesty, which created a stronger signal and made it easier to compare candidates across teams and levels.

Takeaway 2

Simple Rubrics Create Far More Consistent Signal 📊

As Harvey tested the first round of values interviews, Maggie noticed that the biggest gap was not the questions themselves but how interviewers evaluated the answers. People were interpreting responses differently, and without a shared framework, the same candidate could be viewed through entirely different lenses. The team used AI to create a first draft of the scoring rubric, then refined it through pilot feedback by simplifying the scale and adding concrete examples for each value and role type. These shared markers helped interviewers evaluate behaviors more consistently across functions, levels, and regions.

Why It Matters:
A values interview is only as consistent as the rubric behind it. Without a clear scoring guide, interviewers rely on personal interpretation, which leads to noise, uneven expectations, and decisions that are harder to trust. Maggie’s goal was to create a rubric that helped interviewers evaluate candidates predictably without distracting them from the conversation. A simple structure, paired with examples, strengthened the reliability of signal across global teams.

Quick Tips

  • Start with a draft rubric and refine it through real interview feedback. The early version created with AI gave the team a starting point, but it was pilot feedback that revealed where definitions were unclear or too similar. This is also where the transition from a four-point scale to a three-point scale emerged, helping the team create clearer distinctions between strong, mixed, and weak responses.
  • Add concrete examples for each rating to guide interviewer judgment. When they moved to the three-point scale of aligned, partially aligned, and misaligned, interviewers needed to see examples of what responses looked like in practice. Maggie built these examples directly into Ashby, which made it easier for interviewers across regions to calibrate quickly and apply the rubric consistently in real time.
  • Keep the rubric simple enough that interviewers can stay focused on the conversation. Maggie stressed that a scoring system should never pull interviewers away from listening or probing deeper. Simplifying the scale reduced cognitive load and helped interviewers stay engaged while still generating reliable signal.

Takeaway 3

Pilots Build Trust and Reduce Risk Before Rollout 🧪

Before rolling out the new values interview across Harvey, Maggie designed a pilot that allowed the team to test the framework without disrupting hiring. Running it in parallel to the existing culture contribution interview ensured the pilot had no impact on decisions and gave interviewers room to learn and experiment productively. The goal was to understand how the framework performed in real conversations, where the rubric held up, and where interviewers needed more clarity or support. Maggie also brought in a mix of champions and skeptics so the pilot captured a full range of perspectives and surfaced practical concerns early.

Why It Matters:
Pilots create space for teams to experiment safely, remove pressure from interviewers, and reveal how a process works under real conditions. Running the new values interview alongside the existing one allowed Maggie to compare signals, catch gaps early, and identify which parts of the framework needed refinement before rollout. Participation was exceptionally strong, with one hundred percent of trained interviewers running a pilot session and submitting feedback. This gave the team a complete view of how the process performed across orgs and levels and built confidence that the framework could scale.

Quick Tips

  • Prepare interviewers with training that focuses on intent, not perfection. Before the pilot began, Maggie walked interviewers through the purpose of the values interview and how the rubric supported their judgment. This helped interviewers understand what they were evaluating and prevented the pilot from becoming a box-checking exercise.
  • Use the pilot to identify where interviewers misunderstand the values. As interviewers practiced the framework, Maggie watched for misinterpretations or inconsistent application. These insights guided clarifications in the rubric and strengthened shared understanding across teams.
  • Collect and synthesize feedback in a structured way so patterns become clear. Every pilot participant submitted written feedback, and Maggie used AI to analyze trends across orgs and levels. This helped separate isolated preferences from real gaps and guided updates to the rubric, training materials, and questions.

What Hiring Excellence Means to Maggie

For Maggie, Hiring Excellence is about impact, not perfection. She believes the most effective teams focus on doing work that meaningfully moves the business forward rather than trying to design a flawless process. This mindset connects directly to Harvey’s value of jobs not finished, where progress, learning, and iteration matter more than getting everything right the first time.

Excellence also means delivering that impact quickly and at scale. Maggie sees great hiring as the ability to make high-quality decisions in a fast-moving environment by using simple frameworks, clear expectations, and consistent evaluation. The goal is to enable speed without sacrificing judgment, so the team can hire effectively as the company grows.


Maggie's Recruiting Hot Take 🔥

Maggie believes the real art of interviewing has nothing to do with tools and everything to do with listening. In her view, teams often approach interviews too transactionally, which leads to surface-level conversations and weak signal. She argues that good judgment comes from being fully present, understanding a candidate’s motivations, and having honest, nuanced conversations about the role. No framework or AI tool can replace the critical thinking and deep listening required to do that well.


Timestamps

(00:00) Introduction

(00:24) Meet Maggie Landers

(02:08) Why Harvey scaled values interviews for quality

(04:39) Turning three simple values into a global hiring lens

(07:47) Building scoring rubrics that interviewers actually use

(10:29) Designing a low-risk pilot for values interviews

(13:33) Getting exec and board support for experimentation

(17:35) Standardization, simplicity, and training people leaders at scale

(22:59) Using AI to maintain values programs without burning out

(27:10) Why great interviewers are listeners, not checkbox operators

(30:14) Hiring excellence is focusing on impact over perfection

(31:29) Maggie’s recruiting hot take on active listening

(33:46) Learning to surf the waves of a noisy recruiting career

(36:56) Where to connect with Maggie

Hosted By

Shannon Ogborn

RecOps Consultant & Community Lead @ Ashby

Shannon Ogborn is a Recruiting Ops expert with nearly ten years of experience at companies from Google to Hired and beyond. She shines a spotlight on what makes a recruiting strategy one of a kind.

Other Episodes

Boosting Offer Acceptance Rates with Active Listening Strategies

In this episode, we chat with Reggie Williams, Head of Tech Talent at Bain Capital Ventures, to discuss how active listening strategies can boost your offer acceptance rates.

Simplifying Recruiting Metrics to Improve Team Performance

In this episode of Offer Accepted, Bryan Power, Chief People Officer at Nextdoor, shares how recruiting teams can simplify their process using the AAA framework: Attraction, Assessment, and Acquisition. Shannon and Bryan discuss how breaking the funnel into distinct stages helps teams identify where things are going wrong and take action with more clarity. Whether you're leading a small team or scaling operations, this episode offers practical ways to use data, structure, and empathy to improve hiring outcomes.


Join the Hiring Excellence movement

New episodes every month. Subscribe so you never miss out.