Scaling the Interview Platform

Role

Lead UX Designer

Team

1–3 UXD • 1 UXR

Timeline

9/2020 - 12/2023

01

Background

A tool I designed during COVID to keep hiring alive became the infrastructure that 15 Indeed products depend on, handling 2M+ interviews a year. But it was never designed to be a platform, and the speed that made it successful was now creating the exact fragmentation I'd need to unravel.

In March 2020, Indeed needed to enable virtual hiring for employers who had never interviewed online. I built that first experience fast. It worked. Then product after product started integrating with it (SMB, Enterprise, Smart Sourcing, third-party ATS integrations, etc.), each one adding their own interview cards, status labels, and actions. What started as an emergency response evolved into a 3-year strategic initiative to transform a single tool into shared infrastructure.

The deeper strategic bet was this: interview signals are stronger hiring predictors than applications. A completed interview indicates genuine mutual interest. But Indeed's entire business was built around applications, monetizing job postings and candidate volume. I was building infrastructure around a different signal, one that was more predictive but less established in the business model. That meant I wasn't just designing a platform. I was designing the foundation for a shift from "we send you applicants" to "we facilitate hires."

The challenge was serving radically different use cases (1:1 scheduled interviews, high-volume hiring events, phone screens, and in-person interviews) while maintaining consistency for both employers and job seekers across every surface.

My role

Lead UX Designer and sole design owner for Indeed's Interview Platform from inception through scaling. I designed the interview layer that 15 Indeed products depend on.

Scope of influence: I owned the UX strategy to transform a single product into shared infrastructure, working across three layers with a small supporting team:

  • Backend: Single source of truth for all scheduled and completed interviews, enabling consistent measurement and triggering follow-up actions

  • Frontend: Unified experience for scheduling, rescheduling, canceling, and managing interviews for both job seekers and employers, regardless of entry point

  • Communications: Centralized notification system (email + SMS) ensuring all interview communications have a consistent user experience

  • Team structure: Partnered with 3 UX Designers and 1 UX Researcher

I led end-to-end design for the interview experience, from scheduling and notifications through the live interview itself. I collaborated closely with cross-functional teammates including Content Design, UX Research, PM, Engineering, and Data Science, while partnering with senior leadership on product vision and platform strategy.

Impact Summary

2.1M+

interviews annually

15

products integrated

20%

shorter time-to-hire

02

Part 1: Nail the Core Experience (2020–2021)

Before scaling, I had to ensure the fundamental interview experience worked reliably. And the data said it didn't.

Challenge: Only 45% of scheduled interviews had both parties show up, and the reason was different for each side.

This wasn't a reminder problem. It was a two-sided marketplace problem. The same symptom (no-shows) had completely different root causes depending on which side of the interview you were on. And improving one side without the other doesn't actually solve anything, because an interview needs both people.

Job seeker no-shows stemmed from forgetfulness and low salience. Unlike employers who interview regularly, candidates might schedule an interview and not think about it until it's too late.

Employer no-shows stemmed from organizational complexity. The person conducting the interview often wasn't the person who posted the job. Hiring managers who weren't even set up on Indeed were expected to somehow find and join their interviews. The problem broke down further by segment:

  • Solo Hiring Manager (SMB): Small business owners juggling many hats with no ATS or hiring support. A no-show isn't just a missed interview. It's time carved out of running their business, with no one to absorb the rescheduling burden.

  • Collaborative Hiring Team (Enterprise): HR coordinates schedules across multiple interviewers, including hiring managers who took time from their "real job" to interview. A no-show wastes that coordination and erodes trust in the entire process.

The common thread: both segments experience no-shows as trust-eroding events that make them question whether Indeed interviews are worth the effort.

Employer persona and top pain points

My Approach: Journey-Based Notification Architecture

Rather than treating this as "send more reminders," I mapped the full interview journey for both sides and identified where drop-off occurred and why. The solution required designing complementary interventions that worked in tandem, because an interview is only successful if both parties show up.

For job seekers: Multiple reinforcement touchpoints that meet candidates where they already are. Confirmation at scheduling, persistent visibility in My Jobs, integration with employer messaging, and behaviorally-triggered reminders (24-hour and "interviewer is ready"). The design principle was salience: make the upcoming interview impossible to forget without being annoying.
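The reminder cadence described above can be sketched as a simple trigger schedule. This is an illustrative model only; the trigger names, channels, and offsets are my assumptions, not Indeed's actual notification system:

```typescript
// Illustrative sketch of the job seeker reminder cadence described above.
// Trigger names and offsets are hypothetical, not Indeed's production logic.
type ReminderTrigger =
  | { kind: "scheduled_confirmation" }           // sent at booking time
  | { kind: "relative"; hoursBefore: number }    // e.g. the 24-hour reminder
  | { kind: "behavioral"; event: "interviewer_ready" };

interface ReminderPlan {
  channel: "email" | "sms";
  trigger: ReminderTrigger;
}

// One plan per touchpoint: confirmation, 24-hour reminder,
// and the "interviewer is ready" nudge at interview time.
const jobSeekerPlans: ReminderPlan[] = [
  { channel: "email", trigger: { kind: "scheduled_confirmation" } },
  { channel: "email", trigger: { kind: "relative", hoursBefore: 24 } },
  { channel: "sms", trigger: { kind: "behavioral", event: "interviewer_ready" } },
];

// Compute when a relative reminder fires for a given interview start time.
// Confirmation and behavioral triggers are event-driven, so they return null.
function fireAt(start: Date, trigger: ReminderTrigger): Date | null {
  if (trigger.kind !== "relative") return null;
  return new Date(start.getTime() - trigger.hoursBefore * 3600 * 1000);
}
```

The point of the sketch is the mix of time-based and behaviorally-triggered reminders: salience comes from layering both, not from sending more of either.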

Interview tracking & reminders for job seekers

For employers: The interventions were different because the problems were different. For tracking and organization: a unified interview list for 1:1s, an event dashboard with RSVP trends for hiring events, and triggered reminders when candidates are waiting. For the access problem I didn't initially expect: self-service access requests, account-aware invitation emails, and lobby-level access recovery. I was surprised by how often interviewers simply couldn't get into Indeed, which required designing an entirely separate onboarding flow within the interview experience itself.

The reframe turned the notification system from an employer efficiency feature into a two-sided marketplace value proposition. If we made the job seeker experience transparent and compelling enough that candidates preferred automated scheduling, employer adoption would follow, not because we convinced them to change processes, but because it became a competitive advantage.

Results: Interview reminder emails became the primary driver of attendance, with 64% of job seekers and 70% of employers joining interviews directly from email notifications. The Interview tab/list accounted for 35% of job seeker and 10% of employer joins.

Interview tracking & reminders for employers

03

Part 2: Scale the Platform (2022–2024)

With the core experience validated, I shifted focus to the consequences of our own success: the platform that 7 products depended on was never designed to be a platform. Speed of integration had led to fragmented experiences, and removing that fragmentation would prove harder than building the features in the first place.

Challenge: Same interview, three different stories

Here's what fragmentation actually looked like: an employer scheduling through Smart Sourcing saw one version of their interview card. The same employer checking Messaging saw a different card with different status labels. Their candidate, checking My Jobs on their phone, saw a third version with different actions available. Same interview, three different stories, and none of them wrong, exactly, but none of them building trust either.

As the platform scaled across 7 products, we'd prioritized speed of integration over consistency. Each team built their own interview representation. The audit revealed inconsistencies in what we showed (some surfaces included interviewer names, others didn't) and how we showed it (status labels varied across products). This created cognitive load and eroded confidence in interview information.

My Approach: Audit, Define, Modularize

1. Journey & Component Audit

I mapped every interview touchpoint across both employer and job seeker journeys, from scheduling through completion, and catalogued which information appeared where. This produced the evidence that made the inconsistency problem undeniable: a visual inventory showing divergent representations of the same data.

Auditing interview components through employer and job seeker journey

2. Define Key Components

I identified the core questions each interview card must answer:

  • Employers: Who are we interviewing? When/where? What's the status? Who's on the interview team? What documents do we need?

  • Job seekers: Which job is this for? When/where? What's the status? What are my next actions?

Depending on context, some dimensions could be skipped. An email doesn't need to repeat information already in the subject line. This framework gave teams a decision model, not just a component library.
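The decision model can be sketched as a small context-aware rule. The context names and field list here are illustrative assumptions, not the actual framework's vocabulary:

```typescript
// Sketch of the decision model above: which core questions a card answers
// depends on where it appears. Context names and rules are illustrative.
type Field = "job" | "datetime" | "status" | "nextActions";
type Context = "email" | "inApp";

// An email subject line already carries some dimensions (e.g. job, date),
// so the body can skip them; in-app surfaces show the full set.
function fieldsFor(context: Context, inSubject: Field[] = []): Field[] {
  const all: Field[] = ["job", "datetime", "status", "nextActions"];
  if (context === "email") {
    return all.filter((f) => !inSubject.includes(f));
  }
  return all;
}
```

The value of encoding this as a rule rather than a checklist is that teams inherit the "skip what's already answered" logic instead of re-deriving it per surface.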

Defining interview components using atomic design framework

3. Modular System Design

Using atomic design principles, I created a flexible module system:

  • Molecules: Status, Date, Time, Duration, Format, Location, CTA, Candidate, Job title, Interviewer, Attachments

  • Module sizes: Small (candidate list nav), Medium (messaging, email), Large (candidate details), List (interview management)

Each module size defines which molecules appear and how they're arranged, so teams pull a pre-configured module rather than assembling components ad hoc.
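The size-to-molecule mapping can be sketched as a configuration table. The molecule names mirror the list above, but which molecules each size actually includes is my assumption for illustration, not the production spec:

```typescript
// Sketch of the pre-configured module idea: each module size declares which
// molecules it renders, so teams select a size instead of assembling parts.
// The per-size molecule lists are illustrative assumptions.
type Molecule =
  | "status" | "date" | "time" | "duration" | "format" | "location"
  | "cta" | "candidate" | "jobTitle" | "interviewer" | "attachments";

type ModuleSize = "small" | "medium" | "large" | "list";

const moduleConfig: Record<ModuleSize, Molecule[]> = {
  small: ["status", "date", "time"],                    // candidate list nav
  medium: ["status", "date", "time", "format", "cta"],  // messaging, email
  large: ["status", "date", "time", "duration", "format", "location",
          "cta", "candidate", "jobTitle", "interviewer",
          "attachments"],                               // candidate details
  list: ["status", "date", "time", "candidate", "jobTitle",
         "cta"],                                        // interview management
};

// Integrating teams call this instead of hand-picking molecules ad hoc.
function moleculesFor(size: ModuleSize): Molecule[] {
  return moduleConfig[size];
}
```

Centralizing the mapping is what keeps status logic and layout synchronized: a change to one size's configuration propagates to every consuming product.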

Interview module components
Interview cards

The Outcome: Teams now integrate interview information by selecting a module size rather than building custom cards. This reduced integration time, eliminated visual inconsistencies, and ensured interview status logic stayed synchronized across surfaces.

Challenge: Removing a feature was 10x harder than building one

The interview lobby was designed for hiring events (managing queues of candidates, routing interviewers, handling group scheduling). But it had been forced onto 1:1 interviews, which drove the majority of interview volume. The result: every employer joining a simple scheduled interview first landed on an event management dashboard. They weren't managing a queue. They were trying to talk to one person. And the data showed the cost: a 3-minute delay when everyone was ready, direct customer complaints, and 2-5% dropoff at unnecessary waiting steps, multiplied across 101K monthly interviews.

This should have been a straightforward simplification. It wasn't.

The Resistance

Product initially didn't see this as a priority. Their concerns were valid:

  • "Interview completion is a company-level metric. We can't risk regression on millions of interviews."

  • "The effort to test pre-interview steps across all scenarios is massive."

  • "It's working well enough for hiring events. Why change it?"

These weren't unreasonable objections. Interview completion was literally in the company's OKRs. Proposing changes to that flow meant proposing risk to a metric that leadership tracked quarterly.

Building the Case

I had to make the problem undeniable before anyone would let me touch the lobby:

  • FullStory session analysis: I compiled recordings showing employers getting visibly lost the moment they landed on the lobby, a page designed for managing queues of candidates, not joining a single scheduled interview. These weren't edge cases. This was the modal experience.

  • Qualitative synthesis: Pulled feedback from user interviews where employers explicitly called out the confusion: expecting to join an interview but landing on an event management dashboard instead.

  • Quantitative framing: Translated friction into business language. Waiting pages caused 2% job seeker dropoff. Resume re-sharing caused 5% dropoff. Across 101K monthly interviews, that's thousands of lost interviews per month, the very metric Product was trying to protect.

The recordings were what shifted the conversation. Spreadsheets show a problem exists. Watching a confused employer click around a page that wasn't designed for them shows why it exists.

Why "10x Harder to Remove"

Once I had buy-in to explore the change, I had to prove we could execute safely. Simplification required coordinating changes across multiple teams and systems:

  • 4 coordinated experiments across Lobby + CONVO teams

  • Impact analysis on 4 different notification triggers ("Your candidate is waiting" email, "Your interviewer is ready" SMS, and others), each with different trigger logic that would change when lobby states were removed

  • Requeue logic, multi-interviewer support, state machine changes

  • API/data model updates to pass a 1:1 interview flag

  • Cross-team coordination across Lobby, CONVO, Copilot, SMB, and IHP

I mapped control vs. test flows for both job seekers and employers, documenting every scenario: What if a job seeker joins 2 hours early? What if they fail AV testing? What if the employer leaves and needs to rejoin? This comprehensive scenario map became the alignment tool. Product and Engineering could see exactly what changed and what stayed the same, building confidence that we'd considered every edge case.
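The control-vs-test split hinged on the 1:1 flag mentioned above. A minimal sketch of that branching, with hypothetical field and step names (not Indeed's actual API), shows why the flag was the lever for the whole simplification:

```typescript
// Illustrative routing sketch for the lobby simplification: a 1:1 flag on
// the interview record lets the join flow bypass lobby/event pages.
// Field names and step names are hypothetical, not Indeed's actual API.
interface Interview {
  id: string;
  isOneOnOne: boolean; // the flag added to the data model
  startsAt: Date;
}

type JoinStep = "staging_room" | "lobby" | "event_dashboard" | "interview_room";

function joinFlow(interview: Interview): JoinStep[] {
  if (interview.isOneOnOne) {
    // Test flow: staging room for AV checks, then straight to the room.
    return ["staging_room", "interview_room"];
  }
  // Hiring events keep queue management: lobby plus event dashboard.
  return ["lobby", "event_dashboard", "interview_room"];
}
```

A branch this small in code still required every consuming team to agree on when the flag is true, which is why the scenario map, not the implementation, was the hard part.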

The Solution

I broke the initiative into 4 testable experiments, each isolating a specific piece of lobby complexity:

  1. Add staging room before each interview (not just first-time)

  2. Skip lobby waiting pages, go directly to interview room

  3. Replace requeue modal with leave/rejoin capability (6-hour window)

  4. Land on Interview List post-interview (not confusing lobby page)

Additional improvements included SMS recovery for early-leavers and a simplified end-call modal replacing the confusing requeue question.

The Outcome

  • User experience: Direct, faster access to interviews for both parties. Less confusion, fewer steps, and elimination of the 3-minute delay.

  • Technical: Reduced maintenance burden, fewer outages from legacy lobby logic, and a cleaner foundation for future 1:1 development.

  • Business: Improved interview completion rates, the very metric that had made stakeholders resistant to the change in the first place.

Impact Summary

3 steps

Removed for both parties

101K+

monthly interviews impacted

04

Reflection

The single biggest lesson from four years of platform work: platform leadership is mostly persuasion and documentation, not design.

With 7 consuming products and no formal authority, every change required building alignment through shared documentation, working groups, and demonstrated value. The lobby simplification required more coordination than any new feature I built, because it touched every team's assumptions about how interviews worked. Removing something that exists is inherently harder than adding something new. You're not just designing a solution, you're dismantling the logic that everyone has already built around.

The modular system worked not because it solved every edge case upfront, but because it gave teams a decision model simple enough to actually use. And the attendance work taught me that two-sided marketplace problems can't be solved from one side. You have to design complementary interventions that create value for both parties simultaneously.

What I'd do differently:

  • Start the 1:1 vs. 1:many bifurcation earlier. Supporting both patterns in one system caused ongoing friction that could have been avoided with earlier separation.

  • Establish platform governance sooner. Earlier documentation of shared patterns would have prevented some cross-team inconsistencies before they calcified.

  • Push harder on disposition as a product goal. The correlation between structured rubrics and hiring outcomes suggests disposition data should have been a core metric earlier. It would have strengthened the strategic case for interviews over applications.