Pretty questionable take IMO:

The truth is, there are typically a bunch of good candidates that apply for a job. There are also not-so-great candidates. As long as a company hires one of the good ones, they don’t really care if they lose all the rest of the good ones. They just need to make sure they don’t hire one of the not-so-great ones.
That’s actually a pretty bad thing. You could say the same about rejecting applicants who didn’t go to a certain set of schools, or who submitted a non-PDF resume, or who claim to have experience with a library or language you don’t like (I had a colleague who said he’d reject anyone with significant PHP experience because they probably learned “bad habits”), or any number of other arbitrary filters.
If “good at leetcode” were a decent proxy for “knows how to build and scale accessible web UIs” or whatever, then okay, great… But it’s not, as the author admits in the conclusion:
Coding interviews are far from perfect. They’re a terrible simulation of actual working conditions. They favor individuals who have time to do the prep work (e.g., grind leetcode). They’re subject to myriad biases of the interviewer. But there’s a reason companies still use them: they’re effective in minimizing hiring risk for the company. And to them, that’s the ball game.
So it’s unclear to me what they mean by “effective.” Are they good at predicting how well a candidate will perform on the job? No. Are they good at identifying talent that hiring teams might otherwise overlook? No. They are good at “minimizing hiring risk” by setting up another arbitrary hoop to jump through.
Let’s just call a spade a spade and admit that our hiring processes are so bad at evaluating talent that we settle for making candidates “audition” to prove that they can code at all, and then decide based on whatever entrenched biases we’ve decided constitute “culture fit.” Then the title could be “Coding interviews are the most effective tool we have, and that’s kind of a disaster.”
Thank you for reading my rant. I am available for podcasts and motivational speaking appearances.