Hiring at Mozilla: Beyond Resumés and Interview Panels
The standard tech hiring process is poor at selecting the best candidates and introduces unconscious bias. The traditional resume screen, phone screen, and interview loop is almost a dice-roll for a hiring manager. This year, my team has several open positions and we’re trying something different, both in the pre-interview screening process and in the interview process itself.
Hiring Firefox Platform Engineers now!
Earlier this year I attended a workshop for Mozilla managers by the Clayman Institute at Stanford. One of the key lessons is that when we (humans) don’t have clear criteria for making a choice, we tend to alter our criteria to match subconscious preferences (see this article for some examples and more information). Another key lesson is that when humans lack information about a situation, our brain uses its subconscious associations to fill in the gaps.
Candidate Screening
I believe job descriptions are very important: not only do they help candidates decide whether they want a particular job, but they also serve as a guide to the kinds of things that will be required or important during the interview process. Please read the job description carefully before applying to any job!
In order to hire more fairly, I have changed the way I write job descriptions. Previously I mixed up job responsibilities and applicant requirements in one big bulleted list. Certainly every engineer on my team is going to eventually use C++ and JavaScript, and probably Python, and in the future Rust. But it isn’t a requirement that you know all of these coming into a job, especially as a junior engineer. It’s part of the job to work on a high-profile open-source project in a public setting. But that doesn’t mean you must have prior open-source experience. By separating out the job expectations and the applicant requirements, I was able to create a much clearer set of screening rules for incoming applications, and also clearer expectations for candidates.
Resumés are a poor tool for ranking candidates and deciding which candidates are worth the investment in a phone screen or interview. Resumés give facts about education or prior experience, but rarely make it clear whether somebody is an excellent engineer. To combat this, my team won’t be using only resumés as a screening tool. If a candidate matches basic criteria, such as living in a reasonable time zone and having demonstrated expertise in C++, JavaScript, or Python on their resumé or code samples, we will ask each candidate to submit a short written essay (like a blog post) describing their favorite debugging or profiling tool.
Why did I pick an essay about a debugging or profiling tool? In my experience, every good coder has a toolbox, and as coders gain experience they naturally become better toolsmiths. I hope that this essay requirement will be a good way to screen for programmer competence and to gauge expertise.
With resumés, essays, and code samples in hand, Vladan and I will go through and filter the applications. Each passing candidate will proceed to phone screens, to check for technical skill but more importantly to sell the candidate on the team and match them up with the best position. My goal is to exclude applications that don’t meet the requirements, not to rank candidates against each other. If there are too many qualified applicants, we will select a random sample for interviews. In order to make this possible, we will be evaluating applications in weekly batches.
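The screening step above can be sketched in a few lines of code. This is a hypothetical illustration, not any real tooling we use: the `meets_requirements` check and the number of interview slots are stand-ins, but the structure shows the key point, that applications are filtered pass/fail against the requirements rather than ranked, with a random sample taken only when a weekly batch has more qualified applicants than we can interview.

```python
import random

def screen_batch(applications, meets_requirements, interview_slots=5, seed=None):
    """Filter one weekly batch of applications.

    Every application is checked against the job requirements (pass/fail,
    no ranking). If more applicants qualify than there are interview
    slots, a uniform random sample is taken; otherwise everyone who
    qualifies moves forward.
    """
    qualified = [app for app in applications if meets_requirements(app)]
    if len(qualified) <= interview_slots:
        return qualified
    # Too many qualified applicants: sample rather than rank.
    rng = random.Random(seed)
    return rng.sample(qualified, interview_slots)
```

Because the sample is random rather than a ranking, no subjective comparison between qualified candidates happens at this stage, which is the property the process is aiming for.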
Interview Process
To the extent possible, the interview format should line up with the job requirements. The typical Mozilla technical interview is five or six 45-minute 1:1 interview sessions. This format heavily favors people who can think quickly on their feet and who are personable. Since neither of those attributes is a requirement for this job, that format is a poor match. Here are the requirements in the job description that we need to evaluate during the interview:
- Experience writing code. A college degree is not necessary or sufficient.
- Expertise in any of C++, JavaScript, or Python.
- Ability to learn new skills and solve unfamiliar problems effectively.
- Experience debugging or profiling.
- Good written and verbal communication skills.
- Candidates must be located in North or South America, Europe, Africa, or the Middle East.
This is the interview format that we came up with to assess the requirements:
- A 15-minute prepared presentation on a topic related to the candidate’s prior experience and expertise. This will be done in front of a small group. A 30-minute question and answer session will follow. Assesses experience writing code and verbal communication skills.
- A two-hour mentoring session with two engineers from the team. The candidate will be working in a language they already know (C++/JS/Python), but will be solving an unfamiliar problem. Assesses experience writing code, language expertise, and ability to solve unfamiliar problems.
- A 45-minute 1:1 technical interview. This will assess some particular aspect of the candidate’s prior experience with technical questions, especially if that experience is related to optional/desired skills in the job description. Assesses specialist or general expertise and verbal communication.
- A 45-minute 1:1 interview with the hiring manager. This covers a wide range of topics including work location and hours, expectations about seniority, and any questions the candidate has about Mozilla, the team, or the specific role they are interviewing for. Assesses candidate location and desire to be part of the team.
During the debrief and decision process, I intend to focus as much as possible on the job requirements. Rather than asking a simple “should we hire this person” question, I will ask interviewers to rate the candidate on each job requirement and responsibility, as well as any desired skillset. By orienting the feedback to the job description I hope that we can reduce the effects of unconscious bias and improve the overall hiring process.
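The debrief scheme described above can be sketched as follows. This is an illustrative example only (the requirement names, the 1–5 scale, and the helper are my own invention, not an actual Mozilla system): instead of collecting a single “hire?” vote, each interviewer rates the candidate against every job requirement, and the debrief works from per-requirement averages.

```python
from statistics import mean

# Hypothetical requirement list, paraphrased from the job description above.
REQUIREMENTS = [
    "experience writing code",
    "language expertise",
    "solving unfamiliar problems",
    "debugging/profiling experience",
    "communication skills",
]

def debrief_summary(ratings):
    """Aggregate interviewer feedback per job requirement.

    `ratings` is a list of dicts, one per interviewer, mapping a
    requirement name to a score on an assumed 1-5 scale. Interviewers
    only rate the requirements their session assessed; requirements
    nobody rated are simply omitted from the summary.
    """
    summary = {}
    for req in REQUIREMENTS:
        scores = [r[req] for r in ratings if req in r]
        if scores:
            summary[req] = mean(scores)
    return summary
```

Structuring the debrief this way forces the discussion to stay anchored to the published requirements rather than to an interviewer’s overall impression, which is where unconscious bias tends to creep in.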
Conclusion
This hiring procedure is experimental. My team and I have concerns about whether candidates will be put off by the essay requirement or an unusual interview format, and whether plagiarism will make the essay an ineffective screening tool. We’re also concerned about keeping the hiring process moving and not introducing too much delay. After the first interview rounds, I plan to evaluate the process and ask candidates to provide feedback about their experience.
If you’re interested, check out my prior post, How I Hire At Mozilla.
May 11th, 2015 at 10:42 am
I’ve done some similarly interesting things as a hiring manager, but typically for more ops/support related roles. If you have even an inkling that there’s plagiarism, you can search for phrases on Google in quotes; that will show whether it’s happening. It will happen, btw.
You may consider throwing them a question that can’t be plagiarized, one that’s more specific to Mozilla. Then you don’t have to worry about whether they are willing to do the work. You will also see a drop in candidates, but it’s worth it because the ones who stick through are either desperate or really excited about what you’re doing, and the difference between the two is super obvious.
Thank you for being willing to see past your own bias and changing it up! I’ve gotten flak for my style of interview from HR, but candidates love it!
May 11th, 2015 at 1:43 pm
This is really interesting, but it mostly looks just like a regular technical interview with maybe one differently-named technical interview in the middle of it. So I’d like to know more about how you’re going to implement these? Why is the ‘mentoring’ session 2-on-1? How is it different from the ‘technical interview’? Have you given any thought to how to make the candidates more comfortable during the process, so they can really perform at their best?
May 11th, 2015 at 9:43 pm
> Candidates must be located in North or South America, Europe, Africa, or the Middle East.
Is it because both Vladan and you are on the american east coast and SE-Asia and Oceania don’t quite work timezone-wise, for those? Wouldn’t it make sense to remove the geographic limitation, and have SE-Asia/Oceania candidates be handled by managers in those regions?
May 12th, 2015 at 5:06 am
It’s exciting that we’re trying to be better here!
When going through the resumes and essays do you plan to (get HR to) remove as much identifying information as possible before anyone making interview decisions sees them? There are some studies showing that people have unconscious biases that can be triggered by e.g. names, leading them to reject otherwise qualified candidates. Although this can be worked around by having a clear schema for marking candidates against the job requirements, it seems sensible to remove as much of that information as possible to make sure.
As an aside, I wonder whether the specific topic of tooling favours people with experience in languages where tools are more required / part of the community. I don’t know any C++ programmers who don’t have good knowledge of gdb or VS or similar, whereas I suspect there are a lot of very competent Python programmers who have only a passing knowledge of pdb. Indeed for Python programmers specifically I wonder if this introduces a bias towards people who have the right background to attend lots of conferences where people tend to talk about tooling. Similarly applicants with a mainly-JS background could feel that they are “expected” to write about Firefox devtools even if that’s not what they normally use.
In terms of the in-person parts of the process, it sounds like there is not much there that’s standardised or reproducible. Do we have a plan to tell whether this hiring process is working, e.g. by correlating people’s performance to later on-job assessments? That’s very hard to do if everything is qualitative.
May 12th, 2015 at 8:22 am
Oh, and one more thing whilst I think of it. We should try to ensure that the range of people that assess candidates reflects the diversity that we would like to have. For — a deliberately extreme — example if someone comes to give a presentation and finds that the four people watching it are all twenty-something men that went to Stanford we should expect candidates that aren’t part of that same demographic to perform comparatively worse, and have a poorer impression of us as an organisation, than those who are. In that scenario we should also expect the people doing the hiring to (subconsciously) be more inclined to select for people like themselves than they would as a member of a more diverse selection panel.
May 12th, 2015 at 10:12 am
glandium, it’s both about our location as managers and the time zones of everyone on the team. I want the people working together closely to be able to have at least some time they can talk synchronously. I think it’s great for different teams to have different time zone requirements, but I don’t think that should change this position.
May 12th, 2015 at 10:23 am
jgraham, I’m working with Mozilla recruiting to test a new name-blinding system, but I don’t know whether it will be ready in time. It’s certainly somewhere we’d like to end up, but it’s hard to do well.
As for python programmers and tooling, I’m perfectly fine with this requirement discriminating against people who have only programmed in python and therefore never used or built up a collection of tools. That’s a sign that you’re not prepared for this position.
I agree that the in-person interview isn’t very quantitative. I don’t think I can solve that problem, since I’d need many candidates or an entire lifetime of hiring to calibrate a quantitative model.
May 12th, 2015 at 11:57 am
> I’m working with Mozilla recruiting to test a new name-blinding system, but I don’t know whether it will be ready in time
Excellent!
> I’m perfectly fine with this requirement discriminating against people who have only programmed in python
That’s fine, but if we don’t want people who have only programmed in Python, the job advert probably should make it more clear. I realise that both C++ and JS are listed under “desired skills” whereas Python isn’t, but it seems simpler all round if those were listed as requirements. We should also be careful that we don’t blindly adopt this procedure for other positions with different requirements.
> […] in-person interview isn’t very quantitative. I don’t think I can solve that problem
I don’t see why we can’t do much better. I mean obviously it doesn’t help the first time, but the point of having data is that it helps you make better decisions in two years, and then better decisions again in another two years. There are a bunch of things we could do to improve this. For example we could make the interview questions standardised (cf. [1]). We could make the problem solving exercise standardised (maybe that was already your plan; it’s not clear from the above). We could introduce a test component to the interview (either written or on a computer). All of these things have various advantages and disadvantages of course, but they all have the property that the results are meaningfully comparable across candidates over time. I think that’s a great property to have both for us and for candidates (who can see that the process is *fair* even if they don’t agree with it).
[1] http://sockpuppet.org/blog/2015/03/06/the-hiring-post/