A Hot Hand In Hiring? Big Differences from Small Correlations

In HR Analytics, strong statistical relationships are the name of the game.

This usually means a search for high correlations and high R-squared values.

In today’s tutorial, however, I will show you how a genuinely strong, practically impactful relationship can be overlooked by the typical “high correlation or high R-squared” evaluation standard.

The culprit? Binary (0/1) data.

The Situation

Kristen is an in-house recruiter who loves her system. She’s carefully developed her technique and her checklist over the course of five years and just KNOWS that the people she brings in for these roles tend to be better hires. She’s not claiming it’s perfect, but there’s value there and she knows it.

She feels so strongly about her system, in fact, that she brings it up at every quarterly team meeting.

After six consecutive quarters of assertions and borderline bragging, the director of talent acquisition relents and says she’ll look into it. If there is meaningful evidence that her system is indeed better, she’ll think about implementing it across the team.

The director turns to her analytics pro to take a crack.

Setup and Method

To keep things simple, the analytics pro looks at just two things:

  1. Who selected the new hire (Kristen = 1, Others = 0)
  2. Whether the hire was a success (for example, stayed a full year; Yes = 1, No = 0).

For this example, let’s say that Kristen’s hires will be successful 60% of the time while those of the rest of the group will be successful at a 50% rate.

Those differences are not enormous, but they are certainly meaningful to any business. Improving the success rate by 10 percentage points in absolute terms, and the relative probability of success by 20% (10%/50%), will get you a raise and probably a promotion in most situations.

In addition, let’s specify that 15% of the hires are based on Kristen’s selection while the remaining 85% come from the other recruiters.
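To make these assumptions concrete, here is a quick sketch (mine, not part of the original analysis) of the 2x2 table of expected counts they imply for 10,000 hires:

```r
# Expected 2x2 counts implied by the setup (n = 10000 hires)
n  <- 10000
w  <- 0.15   # share of hires picked by Kristen
p1 <- 0.60   # success rate for Kristen's picks
p0 <- 0.50   # success rate for the other recruiters' picks

expected <- matrix(
  c(w * n * p1,       w * n * (1 - p1),         # Kristen: 900 successes, 600 failures
    (1 - w) * n * p0, (1 - w) * n * (1 - p0)),  # Others: 4250 successes, 4250 failures
  nrow = 2, byrow = TRUE,
  dimnames = list(c("Kristen", "Others"), c("Success", "Failure")))
expected
```

Out of 1,500 picks, Kristen is expected to land 150 more successful hires than the other recruiters would have with the same candidates.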

Weak Correlation…

With that as a setup, let’s run some simulations and see what happens with our resulting correlations.

set.seed(43) # set the seed for reproducibility
n <- 10000 # number of simulated hires
good_prob <- rep(c(.60, .50), c(.15, .85) * n) # success probability for Kristen's hires vs. the others'
k_pick <- ifelse(good_prob == .60, 1, 0) # indicator: 1 = Kristen's pick, 0 = others
job_outcome <- rbinom(n, 1, good_prob) # simulate success/failure for each hire
cor(k_pick, job_outcome) # correlation between the pick and success
## [1] 0.06910685

When our analyst runs the correlation, the relationship is pretty darn small: a mere .07.

Squaring that value gives us an R-squared value of just 0.5%. This means that knowing who did the hiring (Kristen or the other members of the team) accounted for less than 1% of the total variance.

By the common correlation and R-squared evaluation, there is nothing to see here.
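As a quick sanity check (a derivation of mine, not part of the original post), we can compute the population correlation these assumptions imply. For two 0/1 variables the Pearson correlation reduces to the phi coefficient, and the simulated .07 is exactly what the setup predicts, not sampling noise:

```r
# Population correlation (phi coefficient) implied by the setup:
# cor(X, Y) = (p1 - p0) * sqrt(w * (1 - w)) / sqrt(p_bar * (1 - p_bar))
w     <- 0.15                    # share of hires picked by Kristen
p1    <- 0.60                    # success rate for Kristen's picks
p0    <- 0.50                    # success rate for the others
p_bar <- w * p1 + (1 - w) * p0   # overall success rate (0.515)
phi   <- (p1 - p0) * sqrt(w * (1 - w)) / sqrt(p_bar * (1 - p_bar))
round(phi, 3)
## [1] 0.071
```

In other words, even with an infinite sample, a 60%-versus-50% gap at this 15/85 split can never produce a correlation above roughly .07.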

But BIG Impact…

BUT WAIT! We know for sure that there is in fact a substantial difference in the success rate because we set it up that way.

In real life we won’t have access to that true value, of course, but we can use the aggregate function to get the mean success rate for Kristen versus the others. As the output shows, the difference is substantial.

round(aggregate(job_outcome, by = list(k_pick), mean),2)
##   Group.1    x
## 1       0 0.50
## 2       1 0.59

If we feel compelled to substantiate our claim that there is actually a meaningful difference here, we could run a basic regression analysis or a t-test:

summary(lm(job_outcome ~ k_pick)) # linear regression
## Call:
## lm(formula = job_outcome ~ k_pick)
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.5933 -0.4966  0.4067  0.5034  0.5034 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.49659    0.00541  91.799  < 2e-16 ***
## k_pick       0.09674    0.01397   6.927 4.58e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Residual standard error: 0.4987 on 9998 degrees of freedom
## Multiple R-squared:  0.004776,   Adjusted R-squared:  0.004676 
## F-statistic: 47.98 on 1 and 9998 DF,  p-value: 4.578e-12
t.test(x = job_outcome[k_pick == 1], y = job_outcome[k_pick == 0])
##  Welch Two Sample t-test
## data:  job_outcome[k_pick == 1] and job_outcome[k_pick == 0]
## t = 7.0116, df = 2084.6, p-value = 3.17e-12
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  0.06968612 0.12380408
## sample estimates:
## mean of x mean of y 
## 0.5933333 0.4965882

In practical terms, this is exactly what you want to do: show that the mean difference is practically meaningful and, if it is, reinforce that claim by noting the statistical significance.
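That two-step check can be sketched in R as follows (re-running the simulation from above; note that for 0/1 outcomes, a two-sample test of proportions via prop.test is a natural alternative to the t-test):

```r
set.seed(43) # same simulation as above
n <- 10000
good_prob <- rep(c(.60, .50), c(.15, .85) * n)
k_pick <- ifelse(good_prob == .60, 1, 0)
job_outcome <- rbinom(n, 1, good_prob)

# Step 1: is the difference practically meaningful?
p_k <- mean(job_outcome[k_pick == 1]) # Kristen's success rate (~0.59)
p_o <- mean(job_outcome[k_pick == 0]) # others' success rate (~0.50)
abs_diff <- p_k - p_o                 # absolute difference (~0.10)
rel_lift <- abs_diff / p_o            # relative improvement (~0.19)

# Step 2: is it statistically reliable? Test of proportions on the counts
prop.test(x = c(sum(job_outcome[k_pick == 1]), sum(job_outcome[k_pick == 0])),
          n = c(sum(k_pick == 1), sum(k_pick == 0)))
```

The proportions test reaches the same conclusion as the t-test and regression above: a tiny correlation, but a difference too large and too reliable to ignore.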

What Have We Learned?

Even tiny correlations can turn out to have big implications. In this case, we saw how a minuscule correlation could still result from a practical, meaningful difference in the success rate of a hiring pick. Flipped around, you just saw how a big, meaningful difference can be overshadowed by a deceptively small correlation. Never be fooled again!

There are two lessons:

  1. Do not automatically dismiss a finding because of a small correlation or R-squared.
  2. Be very careful when using 0/1 data because they are “relatively impoverished” (quoting Gelman, who was quoting Korb and Stillwell).

Looking only at the correlation would suggest there is little difference between Kristen’s picks and the others’. The means and the supporting statistical tests, however, tell us a different story.

When it comes to new hires, it appears that Kristen has a hot hand.


This post was inspired by one from Columbia University statistician (and blogger extraordinaire) Andrew Gelman. Read [his original post](http://andrewgelman.com/2016/12/19/30759/) or, better yet, just go to [his site](http://andrewgelman.com) and read his blog every day.


Comments or Questions?

Add your comments OR just send me an email: john@hranalytics101.com

I would be happy to answer them!
