
Hiring and Interviewing Data Scientists

Back in 2021, I hired two new teammates for my home team, the Data Science and Artificial Intelligence (Research) team at Moderna. As indicated by the "(Research)" label, our main collaborators are other research scientists within the company. In 2023, I will potentially be hiring for four more positions (as long as the business situation doesn't change). While I had helped hire data scientists in the past, 2021 was the first time I was primarily responsible for the role. So when I first joined, I thought I would have some time, say, half a year or so, to ease into the team lead part of the role. But by day 2, the 21st of July, my manager Andrew sent me an MS Teams message saying, "Hey, two positions for you are approved. Merry Christmas!" I was both surprised and excited.

Criteria for Hiring

During this time, I had to develop the criteria for hiring. Thankfully, our Talent Acquisition (TA) team had a process in place, so I didn't have to design that half. Our department head, Dave Johnson, emphasized keeping the technical bar high because we want to hire people who can hit the ground running. I also knew from previous experience that I would enjoy working with bright individuals who were:

  1. quick to learn,
  2. thorough in thinking through problems,
  3. capable of going deep into technical details,
  4. sufficiently humble to accept feedback, and
  5. courageous enough to challenge my views.

I also had to specify the exact skill requirements for the new roles. I came up with a list of skills that I believed were needed for the job. One suggestion from our Talent Acquisition team surprised me: we shouldn't hire someone who hits all of the required technical skills for that level. Instead, we should hire someone who has a majority of the skills necessary and has shown the potential to learn the rest.

Why? It's related to our psychology. Most of us are motivated when we see a positive delta in our abilities, receive affirmation that we are improving, and can see the fruits of our labor in a finished product. Hence, this advice ensures that our new hires can hit the ground running while staying motivated longer.

Hiring takes time and is expensive; the longer we can retain high-quality individuals within the team, the better it is for continuity, morale, productivity, and the bottom line. This news should actually encourage those who think they're not a perfect fit for the role!

What about skills and knowledge, though? What are we looking for in our candidates? Well, here are the broad categories that I assess:

  1. People skills
  2. Communication skills
  3. Scientific knowledge
  4. Coding skills
  5. Modeling skills

I will go through each of these in more detail later. First, though, let's talk about the hiring process.

Team Interviews

We interview candidates in a "team-based" interview; this aligns with my experiences at Verily and the Novartis Institutes for BioMedical Research (NIBR) when I interviewed for my first data science role. As the hiring manager, I get to assemble the committee and decide what aspect each hiring committee member will be interviewing for. Usually, I would pick potential close collaborators as interviewers, which makes sense given the collaborative nature of our team's work.

Interviewing as a team helps ensure that we cover a wide range of perspectives and a rich set of questions covering both the technical and people dimensions.

I was most comfortable interviewing for technical skills, so I assigned the people dimension to my colleagues. Because we are a team that works with scientists, I also asked my colleagues to assess the candidates' scientific knowledge within the interviewers' domain. The hiring committee makes a go/no-go recommendation to the hiring manager, who makes the final call on whether to hire a candidate or to continue searching. (Other companies may have a different practice.)

With the hiring process described, let's explore those five (broad) criteria in more detail below.

People Skills

In assessing the people dimension, we were looking for stories highlighting a candidate's ability to handle difficult interpersonal situations and, more importantly, how they turned these testing situations around for the better. Above all, I'm always curious to hear what lessons candidates have learned for the future. In the interview, I would ask the candidate to tell me about a time when they were in a difficult situation and how they handled it.

Here, then, is a set of questions that I would use to evaluate the candidate.

People Skills Rubric

Was the situation sufficiently non-trivial to handle?

Trivial situations are neither interesting nor revealing. If the candidate can offer up a non-trivial situation they handled and explain how they handled it, that is a positive sign. On the other hand, if they can only offer up trivial situations, then there is a chance they may not have sufficient battle scars to act maturely when faced with a non-trivial situation.

How much verifiable detail did the candidate provide in the story?

Details indicate ownership over the situation, and verifiability means we can check the details for truthfulness. Therefore, it's a positive sign if the candidate can offer up a story with sufficient details, such as the nature of the situation and how the people involved would be affected by their actions and decisions. Additionally, if the details are self-consistent, that's a plus sign too!

If applicable, how did the candidate turn around the situation for the better?

We are looking for tangible "how"s here. I would pay attention to how the candidate re-established rapport, worked to restore trust, and guided the team towards a productive outcome. With these details, we can get a sense of a candidate's toolbox for handling tough situations. If a candidate is equipped with such a people skills toolbox, I have a strong prior that they will also handle day-to-day situations productively.

Did the candidate reveal generally desirable qualities, such as humility, courage, resilience, and empathy? Or did they reveal other positive traits? Which specific details revealed these qualities?

We can gain an accurate picture of the candidate's character as long as the candidate provides details. Hence, details matter! Additionally, the qualities mentioned above are what we want to see in our teammates.

Did they instead reveal negative qualities that may turn out to be orange or red flags? What specific details revealed these qualities?

We need to be on the lookout for potentially negative qualities. If the candidate reveals negative attributes, we need to assess whether they are red or orange flags or whether cultural differences led to the negative perception. This is another reason why I believe a hiring committee is worth the time tradeoff; we can reduce the bias from our own cultural upbringing when interpreting a candidate's story.

Communication Skills

Communication skills matter. Our work is irrelevant if our collaborators do not understand it. Hence, we need to ensure that our candidates can communicate effectively with a wide range of audiences - from the technical to the non-technical, from junior members to senior leadership, and from the scientific to the business. Additionally, communication skills are essential to a senior-level candidate's ability to mentor junior team members, which is important for developing the team's capabilities.

To assess communication skills, we often use a seminar format. Here, we ask the candidate to present a data science project they have worked on. Sometimes it is their thesis work; at other times, it is a project they have worked on in their current role. Usually, there is a 30-40 minute presentation followed by additional time, interleaved or at the end, for Q&A. When assessing a candidate's communication skills, this is the rubric that I usually go for.

Communication Skills Rubric

Based on the presented material, am I able to summarize the candidate's problem statement, methodology, and key findings in 3 bullet points?

This question places the onus on me to put in a good-faith effort to understand the candidate's presentation. If I can summarize the candidate's presentation in 3 bullet points, that's an immediate positive sign. That said, even if I can't summarize the presentation in 3 bullet points, I can use follow-up questions to attempt to understand the work better. If, after a good-faith follow-up, the candidate still cannot explain their work in a way that I can understand, only then do we have a red flag.

How engaged was the audience during the candidate's presentation?

High engagement is a positive aggregate sign; while it doesn't necessarily reveal specific traits, it usually means the candidate is able to hold the audience's attention through a combination of:

  1. A clear and compelling problem statement that resonates,
  2. A methodology that is logical and, even better, exciting to the audience,
  3. A conclusion that is interesting and potentially relevant to the audience, and
  4. Effective visual storytelling employed by the candidate.

Did I learn something new that was technical? Alternatively, did I see a new way of explaining an existing thing I already understood?

This is especially important for senior candidates and speaks to the candidate's potential to mentor junior team members. If the candidate can teach me something new or present it in a novel way that is also effective, they likely can teach others too.

What were the aesthetically pleasing aspects of the presentation, if any?

This question is subjective but important to me. I want to work with individuals who have a good sense of design and, more importantly, can execute and bring that design to life. Good aesthetics is a marker of excellence! Pleasant presentation aesthetics, such as pixel-perfectly aligned shapes, harmonized colors across all figures, and a consistent font and font size, are second-order indicators of attention to detail, which is a general quality we want in our teammates.

Scientific Knowledge

Because we are a team that works with scientists, we also need to ensure that our candidates can communicate with scientists, which means they need a solid scientific knowledge base.

For junior candidates, such as those fresh from a Master's or Bachelor's program, this means they need sufficient knowledge of at least one scientific domain, such as biochemistry, analytical chemistry, or immunology, depending on the expectations of the role.

Senior candidates, such as those who have finished a Ph.D. or post-doc training, should know more than one domain of science. Both levels should also be able to follow along in scientific discussions and be courageous enough to raise questions when they don't understand something.

Because our work involves collaborating with experimental scientists, familiarity with the experiment design process is also essential. Hence, we place a premium on knowing how to design an experiment and institute appropriate controls, all while articulating the limitations of that experiment. A data scientist in our domain who has those qualities will be able to win the trust of our wet lab scientist colleagues.

Alternatively, if the candidate has not worked on experiments before, they should be able to articulate how they would work within the "design-make-test-analyze" cycle of machine-learning-guided experimentation. Prior experience using machine learning methods to impact any of these steps would significantly enhance their candidacy.

To assess scientific knowledge, if the candidate presents a seminar with a strong science flavor, we will typically use the seminar topic to evaluate their scientific knowledge. Usually, we do this by asking detailed follow-up questions. If not, I would ask them to teach me a scientific topic of their choosing. To assess experimental design knowledge, I would ask the candidate to describe a scientific experiment they helped design and see how well they could articulate the details of the experiment. Then, I would use the following questions to evaluate their response.

Scientific Domain Knowledge

While I write extensively about the need for scientific training here, that is borne out of our role as a data science team working with scientists and engineers. For other data science roles at Moderna that don't work closely with scientists and engineers, this is not as important. The same may hold true at other companies. As a hiring manager, I believe it's vital for us to ask ourselves critically how much domain knowledge we expect a candidate to come in with and how much we are willing to coach. Some domains are easier to pick up through shallow osmosis; others need deep immersion in the field.

Scientific Knowledge Rubric

How much scientific detail can the candidate delve into?

The candidate should be able to provide a high-level overview of their scientific domain and be ready to field questions on any relevant details.

For example, if the candidate is speaking on finetuning AlphaFold for protein design, they should be able to answer questions about how proteins fold, the different levels of protein structure, common patterns of how mutations affect a protein's structure, and so on. If the candidate can field such questions without having a biochemistry background, that's even better: it indicates their ability to learn quickly.

As another example, if the candidate is speaking on using machine learning models to prioritize molecule candidates based on an assay, and they can talk fluently about the experiment design, its caveats, and even identify where the assay is particularly laborious, that is an immensely positive sign of the candidate's ability to learn and to empathize with experimental scientists.

How much detail did the candidate provide about an experiment they designed? Did they highlight any limitations of the experiment?

This question checks a candidate's proficiency in experiment design. The presence of detail helps us build confidence that the candidate actually participated in the experiment design. They should be able to describe the experimental controls and what they control for. They should be familiar with the various axes of variation in the experiment (such as time or temperature) and how we expect these axes to affect the outcome of the experiment.

No single experiment will give us all the answers we need for scientific inquiry. As such, the candidate should offer up the limitations of the experiment. For example, did they highlight limitations of the controls, where physical limitations may occur in the experiment, and how they would address those limitations in a future experiment?

Coding Skills

Coding skill is another axis along which we assess candidates. Because "data science can't be done in a GUI", we need to ensure that our candidates can write both readable and maintainable code. After all, even though our primary value-add is our modeling skills, we still need to write code that surrounds the model so that we can make the model useful. Hence, it is crucial to assess a candidate's coding skills.

How do we assess coding skills? If a candidate interests me, I will proactively look up their profile on GitHub to see what kind of code they've put out there. One well-maintained codebase is a positive sign; not having one is not a negative, as I understand not everybody has the privilege of time to maintain a codebase.

I have found one way to evaluate a candidate's coding skills in depth, which I would like to share here. I ask the candidate to bring a piece of code they are particularly proud of to the interview. We then go through the code together, code-review style.

We intentionally ask candidates to pick the code they are proud of, which gives them a home-court advantage. They should be able to explain their code better than anyone else. They also won't need to live up to an imagined standard of excellence. Additionally, because this is their best work, we gain a glimpse into what standards of excellence they hold themselves to.

During the code review, I would ask the candidate questions about the code. These are some of the questions that I would cover:

  1. What's the purpose of the code?
  2. How is it organized? Are there other plausible ways of organizing the code?
  3. What tradeoffs did the candidate make in choosing that particular code organization?
  4. Are there parts of the code that remain unsatisfactory, and if so, how would they improve them in the future?

That 4th question is particularly revealing. No code is going to be perfect, even my own. As such, if a candidate answers "no" to that question, then I would be wary of their coding skills. A "no" answer usually betrays a Dunning-Kruger effect, where the candidate thinks they are better than they actually are.

That said, even a "yes" answer with superficial or scant details betrays a lack of thought into the code. Even if the code is, in my own eyes, very well-written, I would still expect the candidate to have some ideas on where the code could be extended for a logical expansion of use cases, refactored, or better tested. If the candidate cannot provide details on these ideas, it would betray their shallow thinking about the problem for which they wrote the code.

In the next section, I'll describe my rubric for assessing coding skills.

Coding Skills Rubric

Did the candidate offer the details mentioned above without prompting?

This is a sign of experience; they know how to handle a code review (something we do often) and are usually confident in their knowledge of their code's strengths and weaknesses.

How well-organized was the code? Does it reflect idiomatic domain knowledge, and if so, how?

Organizing code logically is a sign of thoroughly thinking through the problem domain. Conversely, messy code usually suggests that the candidate is not well-versed in the problem domain and hence does not have a well-formed opinion on how they can organize code to match their domain knowledge. Additionally, because code is read more than written, a well-organized library of code will be easily reusable by others, thereby giving our team a leverage multiplier in the long run.

How well-documented was the code? In what ways does the documentation enable reusability of the code?

Documentation is a sign of thoughtfulness. In executing our projects, we consider the problem at hand and the reusability of the code in adjacent problem spaces. Documentation is crucial here. Without good documentation, future colleagues would have difficulty understanding the code and how to use it.

Did the candidate exhibit defensive behaviors during the code review?

A positive answer to this question is a red flag for us. We want to hire people who are confident in their skills but humble enough to accept feedback. Defensive behaviors shut down feedback, leading to a poor environment for learning and improvement.

How strong were the candidate's reasons for choosing certain designs over others?

This question gives us a sense of the candidate's thought process. Are they thorough in their thinking, or do they rush to a solution? Because our team is building out machine learning systems, we must be careful about our design choices at each step. Therefore, if a candidate does not demonstrate thinking thoroughly through their design choices, then it means we will need to spend time and effort coaching this habit.

Modeling Skills

The final aspect of our roles as data scientists is picking and building computational or mathematical models that help us solve the problems we are working on. By their very nature, models are abstract simplifications of the real world. Modeling skill, then, is the artful ability to figure out what needs to be abstracted and to translate that into code.

What do I mean by modeling skills?

Let me first clarify what I don't mean by modeling skills. If you're used to building predictive models of the world, you might say something along the lines of, "XGBoost is all you need for tabular data, variants of UNet for image machine learning, and a Transformer for sequence data." Or you might be used to trying several scikit-learn models and deciding on one based on cross-validation. In my mind, this is not skillful modeling; it's just following a recipe. By modeling skill, I am referring to the ability to do physics-style mechanistic modeling.

For a team embedded in a larger scientific research organization, there is something deeply dissatisfying about only being able to build predictive models. What I mean by modeling skill is the ability to build explicit mathematical models, whether that involves differential equations, state space models, or other forms of mechanistic modeling.
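
To make the distinction concrete, here is a minimal sketch of what mechanistic modeling can look like in code. The system (first-order decay) and the data are made up purely for illustration; the point is that we write down the differential equation we believe governs the system and fit its physically meaningful parameters, rather than reaching for a black-box regressor.

```python
# A minimal, hypothetical sketch of mechanistic modeling: fit the rate
# constant k of a first-order decay model, dC/dt = -k * C, to simulated
# concentration measurements, instead of reaching for a black-box regressor.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def decay_model(t, k, c0):
    """Solve dC/dt = -k * C with initial condition C(0) = c0, evaluated at times t."""
    solution = solve_ivp(lambda _, c: -k * c, (t[0], t[-1]), [c0], t_eval=t)
    return solution.y[0]

# Made-up measurements, purely for illustration.
rng = np.random.default_rng(0)
t_obs = np.linspace(0, 10, 20)
c_obs = 5.0 * np.exp(-0.3 * t_obs) + rng.normal(0, 0.1, t_obs.size)

# The fitted parameters have physical meaning: a rate constant and an
# initial concentration, not anonymous model weights.
(k_hat, c0_hat), _ = curve_fit(decay_model, t_obs, c_obs, p0=[0.1, 1.0], bounds=(0, np.inf))
print(f"estimated rate constant ≈ {k_hat:.2f}, initial concentration ≈ {c0_hat:.2f}")
```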

Now, how do we assess modeling skills? Usually, we do this through the candidate's seminar or presentation, which is a standard part of our interview process. Typically, a candidate will present a problem they are working on, how they solved it, and its impact on their colleagues and the business they are part of. During the seminar, we would ask questions about the modeling process and use the candidate's answers to assess their modeling skills. While handy, the seminar alone may be insufficient; I have often found that we need to dig deeper during the 1:1 sessions to tease out the details necessary to assess modeling skills.

Modeling Skills Rubric

What alternative models or modeling approaches did the candidate consider, and why were they not chosen?

If the candidate can provide a solid and logical reason for their final model choice while contrasting it against other choices, then it means they are familiar with a broad range of approaches, which is a sign of a well-rounded modeling skillset.

I would expect this trait in someone who is a Data Scientist-level hire. For a Research Associate-level hire, this would be less of an expectation, but if the candidate possesses this trait, I would be impressed.

Someone who can only come up with one model option to solve a problem needs more training in modeling.

Finally, if the candidate can't explain why they chose a particular model, that is a red flag, and I would be wary of their modeling skills.

If applicable, how clear was the candidate in mapping the parameters of the model onto explainable aspects of the problem?

If the model involved was an explicit model of a system, such as a hidden Markov model or a linear model, I would expect the candidate to explain how the parameters of the model map onto the key questions being answered. For example, what does the slope parameter in a linear model mean for this problem? How do we interpret the transition matrix in an HMM?
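
As a minimal illustration (with made-up numbers) of what I mean by mapping parameters onto the problem, consider a simple linear model of assay signal versus dose; the slope answers a question a scientist actually cares about.

```python
# A minimal, illustrative sketch: in a linear model of assay signal versus
# dose (data made up for illustration), the slope parameter maps directly
# onto a question we care about: how much does the signal change per unit dose?
import numpy as np

dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
signal = np.array([0.5, 1.6, 2.4, 3.6, 4.4])  # hypothetical measurements

# np.polyfit with deg=1 returns [slope, intercept].
slope, intercept = np.polyfit(dose, signal, deg=1)
print(f"signal changes by ~{slope:.2f} units per unit dose (baseline ~{intercept:.2f})")
```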

If the candidate presented a neural network model, can the candidate explain the model architecture in detail without jargon? Can they articulate the inductive biases of the model and contrast those biases' suitability for the problem at hand?

In particular, I would expect the candidate to explain how they arrived at their preferred model architecture and how they ruled out other architectures. Too often, I have seen candidates pick an architecture they are familiar with or feel is popular or hot without thinking through their choice's pros and cons. (The worst I saw was using a convolutional neural network on tabular data for no good reason -- in a journal article that I reviewed.) A strong candidate would compare and contrast the inductive biases of various architectures.

Final Evaluation

As hiring managers, we have tradeoffs to consider when evaluating the candidate along the dimensions above. It is exceedingly rare to find a candidate who is strong in all dimensions. Beyond raw execution ability, complementarity with the rest of the team also matters. Therefore, we must decide which dimensions we are most willing to coach on and in which dimensions we expect a candidate to be well-equipped.

Here is an example of a set of questions I would ask myself about a candidate.

Am I capable of coaching them on something new?

I'm more inclined to coach on coding skills than modeling skills, so I would expect a candidate to come in with a robust modeling skillset; my role would be to refine their ability to write software. Hence, I would prioritize candidates according to those criteria. (Other hiring managers would prioritize differently.) Additionally, I am willing to coach on communication skills but less on domain knowledge, so I expect a candidate to come in knowing their science fundamentals well.

Does the candidate come equipped with special modeling skills that are complementary to the team's current skill sets?

Beyond the role's requirements, we would also like to bring additional skill sets to the team. Doing so lets us tackle a broader range of problems (and therefore deliver more value), foster a learning environment with a diverse breadth of skills, and crucially, build a more resilient team, especially if one of our team members leaves.

Will the candidate work well with other teammates and collaborators?

For example, are they able to communicate well with others? Do they leave others feeling more confused than clear? If they display a sense of humor, does it mesh well with the team, or does it come off as abrasive?

Do they have the confidence to execute their ideas in a group setting with a track record of successful execution to back it up?

General Lessons Learned

Here are some general lessons that I learned from the hiring process. These should be beneficial to both interviewers and interviewees.

Details matter

Firstly, details matter! A pattern you may see above is that I expect to see many details relevant to the skill under assessment. An apocryphal story I once read claimed that details are how Elon Musk tells whether a candidate is faking it. While I disagree with many of Elon Musk's ways, this is a handy lesson for both interviewers and interviewees.

For interviewers, it is vital to dig into details. The more information you can fish out, the better handle you'll have on your candidate's capabilities, and the less likely you'll be to make a poor hire. For interviewees, conversely, it is important to be forthcoming with details. The more detail you can provide to your interviewer, especially detail critical to the job, the more accurately your interviewer will be able to evaluate you. Your challenge, as a candidate, is to be both concise and detailed! You don't want to overwhelm your interviewer with irrelevant details. For both parties, the best way to avoid that is to treat the interview as a conversation rather than a presentation, sharing information in a back-and-forth manner.

Local context matters

Secondly, the team's local context matters! I see teams as continual works-in-progress, puzzles with missing pieces. The missing piece has to fit into and enhance the team. And by fit, I don't just mean the fuzzy cultural fit that can be used as a smokescreen for rejecting a genuinely great candidate.

We need to consider technical skills and what the hiring manager and the rest of the team are willing to coach. Ideally, these should be complementary for the benefit of the team and new hire.

We also need to consider whether the candidate's mix of skills can bring a new dimension to the team's capabilities. For example, if the team needs someone with deep learning experience and a candidate also brings probabilistic modeling skills that the team currently lacks, that candidate would bring a unique new capability to the team. Therefore, we would favor that candidate over another who only brings deep learning experience.

What does this mean for candidates? First, it can mean that even though you're thoroughly qualified for a role, you might not be the final choice because of the local context of a team.

For example, a hiring manager may be willing to coach you on skills you're already strong in, which means they would not be as great for you as a professional development coach. (This is an essential expectation for you to have for your manager!) Therefore, from this perspective only, it would be to your benefit not to be accepted for the role and to continue searching elsewhere.

Or it could mean that your skillset was an excellent match, but another candidate came in with an enhancer skillset that complemented what the team already had; that candidate simply wowed the hiring manager and the team with extra skills you didn't have. Nonetheless, you should still be confident in your capabilities and continue searching for a team where you can be the special sauce contributor, knowing that you bring something special to that team and will be valued accordingly.

Expectations mapped to technical level

Where I work, we have three broad levels of technical folks, in increasing order:

  1. Research Associates (3 levels, including Senior and Principal)
  2. Data Scientists (3 levels, including Senior and Principal)
  3. Fellows (3 levels, including Senior and Distinguished)

Research Associate ranks are usually for fresh graduates out of Bachelor's or Master's programs, with the Senior and Principal ranks reserved for those with longer industry experience. The Data Scientist rank is for those newly graduated from Ph.D. programs or just finishing post-doc training without industry experience, or for Master's graduates with prior industry experience in a related role. Here, the Senior and Principal ranks are generally for existing Data Scientists with a track record of accomplishment in a prior industry role, regardless of prior educational training. "Fellow" ranks are for highly skilled and experienced individuals who bring technical, domain, and people leadership skills to the table, with a focus on technical strategy. Think "Staff Engineer" for an analogy. (Thought leadership is the summary term for those three kinds of leadership.)

We calibrate our expectations for each level. For example, our expectations of Data Scientists, who either have extensive industry experience or have completed doctoral training, are much higher than those for a Research Associate. We expect them to be much more mature in their people skills, more polished in their communication skills, and to possess greater depth and breadth of domain knowledge. Fellows, on the other hand, need to be concerned with technical leadership, partner with our Digital Business Partners to align the team's work with research needs, have a firm grasp of our work's impact on research productivity, and be able to mentor and coach others on effective work practices. Their credibility for such leadership comes from their battle-tested experience.

Summarizing my take on the expectations in a table:

| Level | Research Associate | Data Scientist | Fellow |
|---|---|---|---|
| People Skills | Good | Excellent | Excellent |
| Communication Skills | Great | Polished | TED-style :) |
| Scientific Knowledge | Single discipline | Multi-disciplinary | Multi-disciplinary & tied to business |
| Coding Skills | Writes organized code that works | Develops & maintains project codebases | Sets technical standards for team |
| Modeling Skills | Knows how to apply models | Can develop new mathematical models | Sets modeling strategy for the team, invents methods |
| Leadership | Owns individual problems | Owns and directs a related suite of projects | Evangelizes new technologies across business |

When it comes to coding skills, I think we recognize that most of our candidates fresh from academia will not be as polished as those with industry experience. My experience played out uniquely; I picked up software development skills during my graduate school days, thanks to my involvement in the open-source community. However, most of my peers did not have the same experience, and I am seeing the same situation six years after graduation. Therefore, at the time of writing (January 2023), I consider any demonstrated software development skill to be a differentiating attribute for a candidate. Finally, with increasing seniority, we expect stronger opinions on structuring and organizing code, more extensive experience with code review, and a more nuanced understanding of software development best practices.

When it comes to modeling skills, I would expect a Research Associate-level candidate to be well-trained in a few modeling techniques (especially those taught in regular university curricula), know where to apply those models, and compare and contrast their use cases. On the other hand, Data Scientists should be able to apply a broader range of modeling techniques (covering both mechanistic and data-driven models), and they should be able to dive into the math behind them. With increasing seniority, candidates should also have a more opinionated philosophy on modeling strategy. For example, where to use a mechanistic model vs. a data-driven model, or, my favorite, being able to cast any modeling problem through the lens of probabilistic modeling.
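
To give a flavor of what I mean by the probabilistic modeling lens, here is a minimal sketch (with made-up data) that re-expresses ordinary linear regression as maximizing the log-likelihood of an explicit noise model; the same framing extends naturally to priors and richer generative models.

```python
# A minimal sketch of the probabilistic modeling lens: ordinary linear
# regression re-expressed as maximizing the log-likelihood of an explicit
# noise model, y ~ Normal(a * x + b, sigma). Data are made up.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

def negative_log_likelihood(params):
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterize on the log scale to keep sigma positive
    return -norm.logpdf(y, loc=a * x + b, scale=sigma).sum()

result = minimize(negative_log_likelihood, x0=[0.0, 0.0, 0.0])
a_hat, b_hat, log_sigma_hat = result.x
print(f"slope ≈ {a_hat:.2f}, intercept ≈ {b_hat:.2f}, noise scale ≈ {np.exp(log_sigma_hat):.2f}")
```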

Continual Challenges when Interviewing

There is still one area that I find challenging to assess: how quickly a candidate can learn something new. I have tried asking questions such as:

  • How do you learn a new technical skill?
  • What is your process for learning a new topic?

However, I tend to get generic answers, such as:

  • Asking other people.
  • Reading papers.
  • Trying out an example with code.

None of those answers really get to the heart of what I'm trying to ask: how does the candidate master a new skill quickly? I probably need a better way to ask the question.

Q&A

How does conducting a simulated code review compare against other common data science hiring practices, such as live coding, completing a data challenge, or a Leetcode challenge?

Leetcode challenges are (1) gameable through practice and (2) don't correspond to a real day-to-day activity in our work.

Data challenges, on the other hand, are (1) potentially unfair to candidates who have families, and (2) take a lot of effort to evaluate.

While I have successfully run a data challenge and recruited an intern candidate before, I don't think it's a scalable way to hire full-timers.

Live coding also does not reflect how we work - we usually do things independently first and then get the work reviewed later. Hence, live coding is irrelevant to us, and I choose not to evaluate candidates based on it. Moreover, for candidates who have never done live coding before, it can be stressful and may lead to a poor candidate interview experience, which will reflect poorly on us and impact our ability to recruit talented individuals.

Are interviews done remotely? If so, how do you assess engagement?

Interviews can be done remotely or in person. I usually assess audience engagement by looking at the quality of questions that the audience asks. If the audience is engaged, you'll hear lots of questions that probe the speaker's thought process; that is an aggregate sign that the candidate is matching their message to the audience well. On the other hand, lots of basic questions usually means that the candidate left out crucial information in their presentation. Finally, a low-engagement audience will usually be bored by the presentation and ask few questions.

Conclusions

This essay is the culmination of reflecting on my hiring experiences since I joined the industry in 2017. I wrote it down with three goals in mind:

  1. To record what I've learned from my hiring experiences for future reference,
  2. To help others who may find the hiring process to be opaque or mystical, and
  3. To serve as a conveniently sharable record of my hiring thought process for my colleagues.

If you've found this essay useful as a hiring manager, please consider doing two things:

  1. Sharing it with your peers and colleagues so that they may benefit, and
  2. Leaving me a note on the repository's discussion forum to let me know what you think.

If, as an interviewee, you are starting to feel intimidated by the process, please reconsider! I've peeled back the curtain on what I look for in a candidate precisely to help you, the interviewee, better understand what we're looking for. It should help you better prepare for an interview, at least for an interview with me and my team. Likewise, if you've found the essay useful, please consider sharing it with your peers, and let me know your thoughts and questions on the discussion forum.

Acknowledgments

I want to thank my NIBR colleague William J. Godinez and pyjanitor core developer Hector E. Muñoz for their feedback on the essay.

I would also like to thank my Patreon sponsors for their support! If you find my work valuable, please consider sponsoring here!

Sponsors

Thank you, Ahmed, Daniel, Sat, Robert, Rafael, Fazal, Hector, Carol, and Eddie!